Service-oriented architecture is becoming one of the latest examples of the Law of Unintended Consequences. In a recent InformationWeek survey, 24% of
respondents from large corporations said their SOA projects “fell short of
expectations.” Of these, 55% said the reason for failure was that the SOA
initiative “introduced more complexity into their IT environment” -- ironic, given that one of the benefits touted by SOA proponents is reduced complexity.
There are several reasons for this complexity, and one of them is capacity planning. Eric Roch and Jason Bloomberg have both written thoughtful pieces on this issue, but it has generally received little attention.
Capacity planning challenges are not unique to service-oriented environments; they exist with many applications. The typical way to address them has been over-provisioning: estimating the peak loads expected (usually for the next few years) and providing enough hardware resources to handle those loads. That works well, if you like throwing away money on idle hardware, electricity, cooling and data center real-estate costs. So the industry has ended up with a pathetic 5% to 15% utilization figure (the lower number typically occurring when you count people's idle desktop computers).
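To see how over-provisioning produces those dismal numbers, here's a back-of-the-envelope sketch. The figures are made up for illustration; the point is simply that capacity sized for a projected peak (plus safety headroom) sits mostly idle at average load.

```python
# Illustration: average utilization when capacity is sized for projected peak.
# All numbers below are hypothetical.

def utilization(avg_load, peak_load, headroom=1.5):
    """Average utilization when you provision for peak_load * headroom."""
    provisioned = peak_load * headroom
    return avg_load / provisioned

# A service averaging 80 req/s, provisioned for a projected 800 req/s peak
# with 50% extra headroom:
print(f"{utilization(80, 800):.1%}")  # -> 6.7%
```

Which lands squarely in the 5% to 15% range cited above.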
With SOAs, the problem is exacerbated. Services are publicly available (either within the organization or externally). This means that the designer of the service has even less visibility into the potential demand for the service down the line.
Q: So what's the best way to do service capacity planning?
A: Don't.
Eliminate the need to plan by creating a service infrastructure that dynamically grows and shrinks on demand. This is the notion of the Service Grid we introduced in GigaSpaces Enterprise Edition 5.0.
The Service Grid creates a self-managed, dynamic “cloud” of services. Requests are sent to the cloud, which determines, based on demand and Service Level Agreements, whether services need to grow or shrink their capacity (i.e., dynamically deploy on additional resources). The Service Grid is also self-healing, meaning it dynamically maintains the high-availability policies you define.
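The SLA-driven grow/shrink logic can be sketched in a few lines. To be clear, this is my own simplified illustration, not the GigaSpaces API: the class names, thresholds, and the latency-based trigger are all assumptions chosen to make the idea concrete.

```python
# Hedged sketch of SLA-driven scaling. Names and thresholds are
# illustrative assumptions, not the actual GigaSpaces Service Grid API.

from dataclasses import dataclass

@dataclass
class SLA:
    max_latency_ms: float  # latency ceiling the service must honor
    min_instances: int     # floor kept for high availability (self-healing)
    max_instances: int     # cap on how far the grid may grow

def desired_instances(current, observed_latency_ms, sla):
    """Grow when latency breaches the SLA; shrink when there is slack."""
    if observed_latency_ms > sla.max_latency_ms and current < sla.max_instances:
        return current + 1  # scale out: deploy on an additional resource
    if observed_latency_ms < 0.5 * sla.max_latency_ms and current > sla.min_instances:
        return current - 1  # scale in: release idle capacity
    return current

sla = SLA(max_latency_ms=100, min_instances=2, max_instances=10)
print(desired_instances(4, 150, sla))  # -> 5 (SLA breached: grow)
print(desired_instances(4, 30, sla))   # -> 3 (plenty of slack: shrink)
```

The key design point is that capacity decisions move from design time (the planner guessing peak load) to run time (the infrastructure reacting to observed demand against a declared SLA).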
The approach is to assume the Law of Unintended Consequences, not to ignore it.
You can learn more about the Service Grid here. There's lots more to it than I described above. As an acknowledgment: The Service Grid was originally developed by Dennis Reedy, the "father of Rio," who essentially enhanced the open source Rio Project he originally developed at Sun Microsystems, and tightly integrated it with GigaSpaces.
Larry Mitchell and Kevin Hartig are even blogging about Rio.