Digital Transformation – What Would Jezos Do?


“Every company is a software company”

I read an article a little while ago which praised Domino's for their technology innovation, holding them up as evidence that every company is a software company. Shortly after, I read a barbed reaction article in which the responding author vented his frustration at this type of description, pointing out that “No, Domino's are an effing Pizza company…”.

Now I hate pointless slogans as much as the next man but in this instance, I think the angry technologist was missing the point. Yes, the phrase “Every company is a software company” is rubbish, but the point the original writer, along with a growing number of commentators, is trying to make is that tech and business are now inextricably linked. We’re fast reaching a point where the boundary between an organisation and the technology which underpins it is impossible to define.

Maybe a better description would have been “Every company should be software-like…”? Okay, that’s just replacing one crap phrase with another, but bear with me…

We work in an industry where business jargon has reached peak expression. Newer technologies and approaches are often accompanied by phrases which mean different things to different people. Google ‘Digital Transformation’ and you’ll find a bunch of articles using phrases like “cultural shift”, “competitive differentiation”, and “reducing technical debt”; most of them read like an AI bot was given a copy of The Phoenix Project and asked to prove that it’s a real boy. Don’t get me wrong, Digital Transformation is an actual thing; however, I don’t believe there are many people who have a handle on what it means, or if they do, how to approach it within their organisation. I think part of the problem is with the name; it’s got ‘digital’ in it so it’s a tech thing, right? Well no, not entirely anyway…

Our whole world is undergoing digital transformation. If a child wants to find an answer to something they won’t go and grab an Encyclopedia off the bookshelf; if they’re anything like my kids they’ll ask Siri or Alexa the question as it’s usually the fastest and most efficient way of finding an answer. They’ve intuitively developed their own frameworks for performing day-to-day activities based on the tools available to them. I didn’t have the internet as a kid but similarly I’ll now use my phone, or if I’m sat at my desk, my laptop, as my personal frameworks have adapted over time; that said, my acquired biases (amongst other things) mean I’m more likely to type the question in a search bar than ask a digital assistant; a subtle, but noteworthy difference.

“So, it IS all about technology then?” I hear you cry. Hold your horses there cowboy, I think it’s actually about frameworks; or more specifically the definition and utilisation of technology frameworks as foundational platforms for business agility and innovation (beat that buzzword bingoists!).

I believe that this type of approach is at the core of some of the most disruptive companies in the world. Yes, many of them are tech companies whose products are fundamentally changing the human experience, but what sets them apart is not just the products and services they provide, it’s also that their very fabric is built around technology. In my view this is what digital transformation is all about. Best practices have always been about things like repeatability and scalability; what’s new are the tools and technologies now available to us, and a framework approach to leveraging these tools can help create new opportunities for improvement throughout our businesses. One of the most illustrative examples of this type of transformation is Amazon.

Jeff's Big Mandate

A few years ago, a Google employee posted (on the now-defunct Google+) a now-famous rant which laid out his frustrations that Google lacked a platform approach to their services. He was an ex-Amazon employee and within his post he referenced a memo which Jeff Bezos had sent out to the whole company in 2002 (give or take a year either side); his ‘Big Mandate’ went something along the lines of:

  1. All teams will henceforth expose their data and functionality through service interfaces
  2. Teams must communicate with each other through these interfaces
  3. There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network
  4. It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care
  5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
  6. Anyone who doesn't do this will be fired

Bezos tasked a couple of bulldogs with enforcing this policy and over the next few years, Amazon transformed internally into a service-oriented architecture (SOA). The scale of this task can’t be overstated. At the time there was lots of existing documentation and lore about SOAs, but at Amazon's vast scale most of it didn’t really apply.

But they did it, and gradually Amazon transformed culturally into a company that thinks about everything in a services-first fashion. It is now fundamental to how they approach all designs, including internal stuff that might never see the light of day externally. A side-benefit* of making all interfaces externally presentable was that the infrastructure they'd built for selling and shipping books (and nowadays pretty much everything else) could itself be leveraged and sold as an Infrastructure-as-a-Service platform, i.e. Amazon Web Services (AWS).

*a side-benefit which at the time of writing makes up more than half of Amazon’s overall operating income.

With this type of approach Amazon standardised the way systems, departments and business units communicate and ‘interface’ with each other. In ‘legacy’ architectures interfaces are typically set up on a case-by-case, point-to-point basis; this is often complex, inefficient (both from a time and resource perspective) and inflexible. Amazon’s approach on the other hand is quite the opposite; a service interface is created, exposed and where required consumed. Techniques such as role-based access control, network access control lists etc. are applied to enforce a policy-defined security posture. Once a service has been created, providing access to a new system/user is often as simple as creating a new account, or adding a new role to an existing account.
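To make the contrast concrete, here's a minimal sketch in Python (all names are hypothetical; this illustrates the pattern, not Amazon's actual implementation). A team's data sits behind a single service interface, access is policy-defined per role, and onboarding a new consumer is a policy change rather than yet another point-to-point integration:

```python
class OrderService:
    """A team's data and functionality, reachable only via this interface."""

    # Policy, not plumbing: what each role may call is defined in one place.
    ROLE_PERMISSIONS = {
        "reporting": {"list_orders"},
        "fulfilment": {"list_orders", "update_status"},
    }

    def __init__(self):
        self._orders = {1: "pending"}  # stand-in for the team's private data store

    def call(self, role, operation, *args):
        # Access is enforced at the interface; no consumer reads the store directly.
        if operation not in self.ROLE_PERMISSIONS.get(role, set()):
            raise PermissionError(f"role '{role}' may not call '{operation}'")
        return getattr(self, operation)(*args)

    def list_orders(self):
        return dict(self._orders)

    def update_status(self, order_id, status):
        self._orders[order_id] = status
        return status


svc = OrderService()
print(svc.call("reporting", "list_orders"))                    # {1: 'pending'}
print(svc.call("fulfilment", "update_status", 1, "shipped"))   # shipped

# Granting a new consumer access is a policy change, not a new integration:
OrderService.ROLE_PERMISSIONS["billing"] = {"list_orders"}
print(svc.call("billing", "list_orders"))                      # {1: 'shipped'}
```

Contrast this with the point-to-point approach, where the hypothetical billing team would have negotiated its own bespoke link into the order data store.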

The architecture Amazon have built looks a hell of a lot like a modern microservices/container-based application to me, i.e. Amazon as a company are themselves software-like…

Google vs Yahoo

I find it somewhat ironic that Mr Ex-Amazon’s beef was with Google. Okay, Google+ may have been pants but in many ways, Google are another benchmark for a platform-led approach to solving problems and delivering services at massive scale.

At around the same time as Jeff B sent out his mandate, Google were readying to rip up a different rulebook; this was way back, when Yahoo were very much the 800-pound gorilla in the search space, with Google still a pre-IPO startup.

Meeting the requirements of the internet economy meant solving a raft of infrastructure problems at an unprecedented scale. Central to this was how to store, manage and use a massive and exponentially growing quantity of unstructured data.

The two companies took very different approaches to solving the same problem. Yahoo’s infrastructure was built around NetApp filers: purpose-built storage appliances which enabled them to stand up racks of predictable, performant compute infrastructure at an impressive pace.

In contrast Google didn’t rely on someone else’s turn-key solutions; they doubled down on the software-defined infrastructure strategy which they’d begun in their Stanford infancy. An enormous engineering team spent the next four years developing and productionising what came to be called the Google File System (GFS). GFS created a resilient and massively scalable infrastructure fabric from commodity off-the-shelf (COTS) servers, consolidating compute and storage into a foundational platform designed from the ground up to support a wide range of web-scale applications.

The sheer determination and strength of will this must have taken is difficult to imagine; four years is a long time in any industry, let alone the internet. During this period Yahoo continued to buy (and build on) NetApp filers in lockstep with demand, on the face of it significantly extending their lead in the race to dominate the World Wide Web. It wasn’t all rosy in Yahoo’s garden however; the piper came a-calling, and he wanted his money…

NetApp filers are, and always have been, premium products. Whichever way you slice it, a shedload of filers are going to cost a shedload of cash to procure and maintain, but the issues run much deeper than that. Each individual filer was built on a pair of monolithic array controllers. For any non-techies out there, this means that the controllers pool and present directly attached storage; clients consume the presented storage via those same controllers. At Yahoo’s scale this translated to many discrete systems which needed feeding and watering. The costs and problems weren’t only operational though; as demand expanded and diversified, the appliance-based infrastructure grew in complexity and inefficiency; when Yahoo added a new use-case the infrastructure needed to be re-architected. Individual services such as Yahoo Search and Yahoo Mail sat on dedicated infrastructure, each of which contained multiple storage and compute silos. As a result, an individual problem had to be solved numerous times across multiple infrastructures.

Compare this with the Google File System which was built specifically to address these types of challenges. The same infrastructure was used for all Google application services such as Search, Images, Maps etc. When they bought YouTube, they performed some upgrades on the underlying architecture, rolled them out everywhere and then implemented the video-sharing service on the same platform, alongside everything else.

Finally (and this bit’s key), the GFS architecture enabled compute power to be shared across all services; for example, when servers weren’t busy on search requests, they could process email, or perform a necessary but lower priority housekeeping task.
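That compute-sharing idea can be sketched as a toy priority scheduler (my own illustration, not Google's actual scheduler): search work always drains from the shared pool first, and any idle capacity is spent on lower-priority email and housekeeping tasks:

```python
import heapq


class SharedPool:
    """One shared compute pool; priority decides what runs, not which silo owns it."""

    SEARCH, EMAIL, HOUSEKEEPING = 0, 1, 2  # lower number = higher priority

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps FIFO order within a priority level

    def submit(self, priority, task_name):
        heapq.heappush(self._queue, (priority, self._counter, task_name))
        self._counter += 1

    def run_next(self):
        if not self._queue:
            return None  # pool is fully idle
        _, _, task_name = heapq.heappop(self._queue)
        return task_name


pool = SharedPool()
pool.submit(SharedPool.EMAIL, "deliver inbox batch")
pool.submit(SharedPool.HOUSEKEEPING, "compact chunk replicas")
pool.submit(SharedPool.SEARCH, "serve query 'walkman'")

# Search runs first, then email, then housekeeping - all on the same servers
print([pool.run_next() for _ in range(3)])
```

The point isn't the scheduler itself; it's that in the siloed model the housekeeping task would have needed its own dedicated hardware, idle most of the time.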

In summary, GFS enabled Google to deliver more services, faster, with fewer engineers, less electrical power, in a smaller physical footprint, on cheaper hardware. That’s some competitive advantage.

Innovation: Disrupt or be disrupted; if you can…?

I talk a lot about disruption and how the most disruptive technologies typically improve upon the status quo in one or more of three key areas: simplicity, flexibility, and efficiency. In the context of this article it may help to explore some of the nuances of innovation in a little more depth.

This section originally began with some commentary on Clayton Christensen’s 1997 “classic” The Innovator’s Dilemma, including how it provides an insightful exploration of the failure of incumbent companies to stay at the top of their industries when technologies and/or markets change.

The problem is that the more I thought about it, cross-referenced and researched, the more I came to the conclusion that it was utter b******s. Maybe that’s a bit harsh, and I should caveat by saying I am a natural sceptic who is particularly sensitive to narrative fallacy, hindsight bias and confirmation bias. I should also add that the book received a raft of stellar reviews and is considered one of the most important and influential business books of the modern era, so definitely don’t take my word for it.

Anyway, something I’ve personally found much more useful is the work of Rebecca Henderson, starting with a paper she published in 1990 with her supervisor Kim Clark: Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms. The paper expands beyond the traditional categorisations of innovation, ‘radical’ and ‘incremental’, and introduces ‘modular’ and ‘architectural’ innovation; it’s the last one which is particularly tricksy. I won’t regurgitate the entire paper here (I would strongly recommend reading it) but figure 1 below is a useful representation of the concept:

[Figure 1: Henderson and Clark’s innovation matrix]

The horizontal dimension captures an innovation’s impact on components, the vertical its impact on the linkages between components.

“The essence of an architectural innovation is the reconfiguration of an established system to link together existing components in a new way. This does not mean that the components themselves are untouched by architectural innovation. Architectural innovation is often triggered by a change in a component - perhaps size or some other subsidiary parameter of its design - that creates new interactions and new linkages with other components in the established product. The important point is that the core design concept behind each component - and the associated scientific and engineering knowledge - remain the same.”

Henderson and Clark make clear in their paper that the categories defined in the innovation matrix aren’t concrete; a given innovation could, for example, be somewhat radical but more architectural. Also, their working assumption is that “organisations are boundedly rational and, hence, that their knowledge and information-processing structure come to mirror the internal structure of the product they are designing.” I’m going to go a step further and say that Architectural Innovation can affect and apply to products, systems, services and organisations, either independently or in combination.

A good example of Architectural Innovation is the Sony Walkman. The core components for the Walkman already existed; Sony identified a market opportunity and pieced together existing components in a different way than had been done previously. Subsequent iterations added incremental improvements such as auto-reverse and anti-rolling functionality. The evolution of the Walkman into the Discman represented a modular innovation, whereby a core design concept was changed (audio cassette was swapped for CD) but the fundamental architecture was preserved.

Interestingly Sony themselves got caught out by a subsequent Architectural Innovation. The Memory Stick Walkman actually predates the iPod by a couple of years, but Sony made the mistake of treating the change from audio CDs to music stored as digital files as just another modular innovation. A portable music player ‘system’ includes the music and storage media, and tapes and CDs are for all intents and purposes identical in the way that they are bought and stored. Apple recognised that the move to digital files represented a paradigm shift in the way music is consumed, and they architected a system to address that; they bundled the iPod with iTunes, offering an elegant solution which made the purchase and management of music radically simpler, more flexible and more efficient.

Once it became clear that the iPod/digital-music system was something new, a company like Sony should, on paper, still have been in a great position to react appropriately; among their various departments (audio-visual, PlayStation etc.) they had significant, related experience and expertise plus a strong brand/image. The problem was their capabilities needed to be combined in new ways, and the departments had, as is often the case, become more and more siloed over time. In contrast Apple offered a handful of products and so could be much more focused and agile in the way they approached the task at hand; this, combined with their crystal-clear understanding of both the problem and the market, enabled them to disrupt a whole industry. For their next trick it could be argued that with the iPhone they disrupted the entire world.

Problems, Shortcuts and Biases

When a problem is considered afresh it is much easier to define and understand. Finding opportunities for incremental improvements and optimisations is relatively straightforward; however, if a solution requires architectural innovation it is often anything but simple. Kahneman and Tversky’s work on cognitive reasoning, covered in Kahneman’s 2011 book ‘Thinking, Fast and Slow’, gives some great insight into why that might be. The book brilliantly defines and explores two contrasting systems of reasoning; System 1 “is the brain’s fast, automatic, intuitive approach” and System 2 “the mind’s slower, analytical mode, where reason dominates.”

System 1 utilises heuristics; cognitive shortcuts which simplify decision making and problem solving. System 2 is more deliberate and rational; however, it’s also lazy and won’t even show up if it can help it.

Our minds are continually and dynamically prioritising and filtering information via various mechanisms such as anchoring, association, analysis of effort, affect, familiarity, representativeness and more. We naturally jump to conclusions and believe WYSIATI (what you see is all there is). This is a necessary process for which there was a strong evolutionary imperative. For a ‘new’ problem where a great deal is unfamiliar, System 2 is engaged and we’re able to avoid many of the pitfalls these shortcuts can create. As I say though, filtering is continuous, and once we’ve understood a problem and settled on a solution it can be incredibly challenging to use a different architectural lens than the one we now believe to be optimal; if a subsequent problem or requirement is too similar then we have great difficulty seeing the wood for the trees. Where the next phase of development necessitates an architectural innovation, almost without exception the breakthrough is made by somebody else.

The journey organisations go through is strikingly similar. At its inception an organisation is typically created to support a specific product or service offering; as mentioned earlier, the organisational structure often mirrors that of the offering itself. Over time the organisation may grow and additional offerings may be added to the portfolio, and in support of this evolution the business architecture will change and adapt; it will naturally become more efficient, with ever more crisply defined communication channels, ‘noise’ filters, and best practices both written and unwritten. This is analogous to the cognitive shortcuts implemented by System 1.

What often happens is that a new problem or innovation challenges an incumbent’s ‘optimised’ organisational architecture and it can be incredibly difficult for them to respond appropriately, leaving them exposed (often catastrophically) to disruption. It wasn’t that companies like Kodak were caught off guard per se, in many instances they saw the future coming better than anyone; they simply weren’t able to do anything about it.


When you consider the successes of serial innovators, people like Mohit Aron, Avinash Lakshman, and Cisco’s MarioPremLuca (MPL, three quarters of the legendary “MPLS”) who have shown time and again that they are able to understand problems and come up with unique, game-changing solutions, a pattern starts to emerge. They were able to leverage their wealth of experience and expertise to attack problems which were sufficiently different from those they’d focused on before, and where they were supported rather than stifled by the organisation they worked in at the time. Often this was within a new Silicon Valley start-up, but in MPL and Cisco’s case this came with a twist.

Cisco made their very first acquisition in 1993, Crescendo Communications. Along with the technology which became the foundation of the phenomenally successful Catalyst range of Ethernet switches, Cisco also acquired a number of very notable engineers including Mario Mazzola, Prem Jain and Luca Cafiero, who came to affectionately be known as MarioPremLuca. From then ‘til now Cisco have continued to aggressively acquire companies, over two hundred to date (it’s fair to say some have been more successful than others); three of these acquisitions were “spin-ins”, a model many observers are still attempting to wrap their heads around.

In 2001 Soni Jiandini, a technical marketing engineer, added an S to create MPLS (a play on words you can guarantee they were chuffed to bits with) and the team went off to found the start-up Andiamo Systems. I say “went off” but this is where things get a bit weird. Andiamo was wholly funded by Cisco to the tune of $184 million. Yes, you did read that correctly, a team of Cisco’s best people left to start a new company, with Cisco’s blessing and a wad of Cisco’s cash.

Andiamo’s goal was to create a new Storage Area Network (SAN) switch and apparently, they’d been so successful in their task that Cisco agreed to buy them just one year later. The deal took another couple of years to close and in 2004 Cisco completed the acquisition of Andiamo Systems for $750 million in Cisco stock; MPLS and the rest of the team again became Cisco employees (and were significantly wealthier for their part in the ‘experiment’).

In 2006 MPLS and Cisco did it again with $70mil of initial investment and bish, bash, bosh, a couple of years later Cisco acquired Nuova Systems for $678mil, nearly ten times their initial funding amount. Nuova Systems developed the technology which would become the Unified Computing System (UCS), Cisco’s entry into the server market, which has grown into a multi-billion-dollar business unit.

Last but not least the Cisco superstars founded Insieme Networks in 2012; with $135mil of Daddy’s money they created the Application Centric Infrastructure (ACI) and a new range of switches which came to be the Nexus 9000. Cisco acquired Insieme towards the end of 2013 for $863mil.

If you’ve not heard this story before you’re perfectly entitled to a massive WITAF moment; it’s personally taken me years to get my head around. Why t.. f… didn’t Cisco just develop these solutions in-house; it’s Cisco Systems for f…. s…?! Surely the unorthodox approach did little more than line the pockets of the members of the yo-yo spin-out teams?*

*The teams were obviously much bigger than just MPLS. Can you imagine sitting in the cubicle next to someone who got picked for a spin-in team when you didn’t, only for them to re-appear a couple of years later as a millionaire whilst your fortunes have remained unchanged…?

As mental as Cisco spin-ins may initially sound, it’s difficult to argue with the results. Many of the big tech companies’ most successful innovations come via outside technology acquisition. Spin-ins are simply a tailored form of acquisition with many of the unknowns, and therefore risks, removed.

I believe one of the main reasons Cisco experimented with this model is that they understood the difficulties surrounding innovation. By spinning MPL out they freed them from the restrictions of Cisco’s organisational structure, allowing them the freedom they needed to create; once they’d taken things to a certain point, they spun them back in again.

Now as clever as this was, I’m certainly not suggesting this is what other organisations should attempt to do. Both Cisco’s visionary CEO John Chambers and MPLS have long since left the company; I’m not even sure Cisco could pull it off again even if they wanted to.


Writing this I can hear the echo of one of my old CEOs as she scolded a Sales Manager: “Stop coming to me with problems XXXXX and start coming to me with solutions…!”

It’s all too easy to pick fault and criticise, but what can we do about it? Despite them talking a good game, I haven’t come across many examples of Digital-Transformation (née Management) Consultancies leading companies through successful transformation programmes. I think many of the digital transformers struggle to appreciate the task at hand as much as their clients do; I’m of the opinion that there’s a great deal to be learnt from the type of companies this article started with.

From where I’m sitting it looks like Amazon, Inc. could disrupt just about any industry they set their sights on. I believe one of the secrets to this is the platform-based architecture their business is now built upon. Each department, business-unit, project and everything in-between leverages the core component services as an overlay; overlays can be spun up and torn down as required; internal and external communication is standardised and quick to establish. The organisational architecture becomes specific to the business function it’s servicing.

Amazon, along with Google, Facebook and a handful of others have built the impossible, foundational hyperscale platforms which enable them to be both efficient AND agile; it's no coincidence that they are amongst the largest and most successful companies in the world.

A Platform for Innovation

For a multitude of reasons it’s probably not a good idea to try and precisely imitate Amazon, or to build our own infrastructure platforms from scratch; we do however have a wealth of options to abstract away complexity, automate the repeatable and more directly meet the needs of our organisations through intelligent use of technology. We can move to policy-defined architectures which prioritise integration and communication, with point-to-multipoint interfaces which have the potential to unlock the power of our data both now and in the future. We can create foundational platforms enabling innovation and agility.
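As a loose illustration of what ‘policy-defined’ can mean in practice (the schema and every name here are invented for the sketch), integration rules can live in one declarative table, with a single generic check replacing N bespoke point-to-point integrations:

```python
# Declarative, point-to-multipoint policy: each service lists who may consume
# it and how, in one place, instead of encoding access in bespoke plumbing.
POLICY = {
    "orders-service": {"consumers": {"reporting", "billing"},
                       "operations": {"read"}},
    "events-bus":     {"consumers": {"reporting", "analytics"},
                       "operations": {"read", "subscribe"}},
}


def is_allowed(consumer: str, service: str, operation: str) -> bool:
    """One generic check, applied uniformly to every service interface."""
    rule = POLICY.get(service)
    return bool(rule) and consumer in rule["consumers"] and operation in rule["operations"]


print(is_allowed("reporting", "orders-service", "read"))   # True
print(is_allowed("billing", "events-bus", "subscribe"))    # False

# Unlocking the data for a new consumer is a one-line policy change:
POLICY["events-bus"]["consumers"].add("billing")
print(is_allowed("billing", "events-bus", "subscribe"))    # True
```

In a real estate this table would be enforced by your identity provider, API gateway or network ACLs rather than application code, but the principle is the same: the architecture is described by policy, and change means editing policy.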

To be clear this doesn’t automatically mean public cloud; this can be achieved with on-premises, hybrid, multi or pure-cloud architectures. Public cloud services like AWS, Azure and GCP are awesome for certain use-cases, but a poor choice for others. As always, the key is to understand the requirements along with the art of the possible, and to design a solution accordingly.

P.S. Be sure to keep an eye out for Architectural Innovations which offer greater simplicity, flexibility and efficiency than what's currently available; this is often where the magic happens.
