My partner in crime on the Overcast podcast, James Urquhart, published a nice blog post today titled: Cloud computing and 'commodity'. Read the post and come back. I'll wait.
James seems to be responding to what is apparently some kind of controversy, which I don't understand because there is truly nothing new under the sun. Read Clayton Christensen's 300-page book Seeing What's Next and come back. I'll wait.
All products are in a constant process of commoditization, or a "race to the bottom" as James refers to it. Period. To maintain differentiation, smart vendors continuously innovate, adding features and "crust" capabilities, which over time are themselves commoditized. Rinse and repeat. In economic theory, then, all industries are destined to become commodity industries (a pattern known as the Industry Lifecycle).
Let's see how this plays out in cloud computing (intentionally simplified greatly to make a point):
- Amazon is the first mover with on-demand, API-driven server and storage provisioning and pay-per-use pricing
- Everyone and their brother offers the same at similar prices
- Some differentiation in API capabilities emerges (richness, coverage of corner cases, ease of use, features such as auto-scaling, etc.)
- A formal API standard emerges and everyone again offers exactly the same
- Rackspace differentiates with better SLAs (Rackspace: The Avis of Cloud Computing?)
- Everyone matches their SLAs
- VMware and Microsoft start offering higher-level components, moving toward a platform-as-a-service
- Salesforce.com differentiates its PaaS by building a large ecosystem around its platform
- Amazon acquires several start-ups and offers the same...
You get the point.
At some point the market reaches an equilibrium, meaning there is no longer room to innovate in that particular product category -- because customers are no longer interested in better, faster, more options, and so on. We are many, many years away from that happening in cloud computing in general, though not so far away in the sub-category of infrastructure-as-a-service.
To demonstrate that there is nothing new here, take a look at the Java application server market. In the late nineties, the product category was created by a small company called WebLogic. Two other competitors emerged: NetDynamics and Kiva. Initially, each had wildly different capabilities and features, but over time they converged on what the market needed. Then all three were acquired at about the same time (WebLogic by BEA, NetDynamics by Sun, and Kiva by Netscape). The latter two acquirers screwed things up, and BEA emerged victorious, only to face a new competitor in the form of IBM's WebSphere.
At some point the J2EE standard was introduced, and both large players -- IBM and BEA -- complied with it, as did many smaller ones. Since they were all implementing the same standard, each vendor started adding features outside the standard. Many of these features were not needed by customers and so did not create real differentiation. This opened the perfect opportunity for a low-cost commodity player: enter Apache Tomcat. The rest is history. There is barely a commercial J2EE app server market anymore (the majority of revenues come from services and support, not new licenses). It took about 10 years, maybe 12.
The same process occurred with relational databases and the SQL standard (enter MySQL) and many other product categories.
Here's an even more extreme example from the shipping business as I wrote it in the blog post Beware of Premature Elaboration (of Cloud Standards):
Consider the case of the shipping container as described in the excellent book The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger. Before standardized shipping containers, sending a shipment from, say, Kansas City to Tokyo was extremely expensive and inefficient. There was no way to send the shipment in one container as a whole. Trains used containers of certain shapes and sizes, trucks used different ones and ships used different ones. In fact American ships and trains used different containers from European and Asian ones. Sometimes, even within the U.S. or Europe, different kinds of containers were used. That also meant they were extremely difficult to stack and secure, and the goods being transported had to be moved multiple times to different containers. The effect on international (or even cross-region) shipments was chilling. Furthermore, many companies in the shipping business had to maintain infrastructure and equipment to serve multiple proprietary container specs.
By standardizing on uniform container specifications, the industry removed the barriers to long-distance shipping and greatly increased the demand for it, while reducing their own costs. A boon to everyone involved.
What I didn't mention in that blog post is that those differences in containers were not (always) accidental. They were intentional, because the design, material, size, and shape of a container were considered a competitive advantage.
Interestingly, today shipping companies freely use each other's containers as they move them around. In fact, there is one big pool of boxes, and it doesn't matter whose box you use.
Going back to a different kind of "box" -- the virtual server -- one can easily imagine how one cloud provider, when it needs additional capacity, will use another provider's boxes. And it won't matter one bit, because the real value will be in the management and monitoring capabilities and other ancillary services, the provider's internal operational efficiencies, the SLAs, the level of customer service, the brand, and the ecosystem around that provider's offerings.
Also see my Purpose-Driven Cloud series for other differentiators.
If you want to learn more about these basic industry dynamics, read any of Michael Porter's books that discuss his Five Forces model. Go ahead, I'll wait.