James Urquhart and I recorded a new Overcast episode in which we discuss the recent events surrounding the release of the Open Cloud Manifesto, or as James called it, Manifestogate.
Listen to it here.
Posted on March 30, 2009 at 09:01 PM | Permalink | Comments (1) | TrackBack (0)
If you're following cloud computing, you couldn't have missed the discussion about the "Open Cloud Manifesto" initiated by Reuven Cohen and reportedly supported by IBM, Sun and several other vendors, large and small. The noise reached a fever pitch when Steven Martin of Microsoft wrote a scathing blog post about the Manifesto, essentially claiming that it was written to serve the interests of certain large vendors (I imagine he means IBM) and that Microsoft wasn't given the opportunity to participate in the document's wording, but was instead asked to sign it as-is.
If you missed it all, here is some of the reporting and commentary on the topic that summarizes the issue:
As the last link from Larry Dignan tells you, the actual document has been a big secret, and reporters and industry analysts who have received it are apparently under embargo to not publish it until Monday, March 30.
I received the document from four different sources and am under no obligation to keep it secret, so I am happy to publish it here for the first time. Please comment with your thoughts. As you can see, there's much ado about nothing. I am not quite sure why the people behind it want to keep it such a big secret. There's nothing controversial about it, in my opinion.
Download the Open Cloud Manifesto as PDF.
UPDATE: This post and the document have received a lot of attention. I previously hosted the document on the Scribd service, where it had been viewed more than 5,500 times, last I checked. I have decided, however, to remove the document from Scribd and only post the PDF as a link from this blog. Before any conspiracy theories emerge: I have done this only because of my dissatisfaction with the Scribd service -- I have not been asked to do so by anyone. [At this moment, I can't actually remove the document because Scribd is down. See what I'm saying?]
Posted on March 27, 2009 at 09:48 AM | Permalink | Comments (15) | TrackBack (2)
Selling to the enterprise is a mythical goal in the software industry. The million-dollar deal is the stuff of legends and, throughout the '90s and early 2000s, was the dream of every salesperson. With 4% to 8% commissions (depending on the stage of the company and the salesperson's stature), who can blame them?
But those days are over. Even before the current recession, the tide was turning. Unless you are Oracle, IBM, HP or a handful of other mega-vendors, you're not going to see 7-figure deals -- well, except for the occasional bluebird.
We're now witnessing an increasing trend of bottom-up sales: casual decisions made by developers on a day-to-day basis, not a grand strategy laid out by the CIO. Try-and-buy is the norm, as are subscription payments and other models that shift the financial burden from the customer to the vendor. Long gone are the days when a large vendor could offload $50 million of upfront software licenses on its customer, add another $150 million in professional services for customization and integration, and, 3 years later (with ongoing maintenance and support fees, mind you), leave the customer with what is essentially shelfware (at which point, to avoid embarrassment, the customer declares victory on the project).
Charlie Federman, a VC I deeply respect from the days he was on the board of directors of GigaSpaces, wrote an interesting post entitled Firing Prospects, with some anecdotal evidence of the phenomenon:
I met today separately with two successful CEOs who, unprompted, told me they were deemphasizing their marketing/sales efforts to Enterprise accounts; one company is in the application arena, and the other is in the infrastructure space.

Each told a similar story: They don't have the 'patience or resources to go through the hoops' required in committee sales. Translation: they don't want to fund the direct salesforce/field engineers for the traditional 6-month sales cycle, where they have to commit the equivalent of hundreds of thousands of dollars upfront before a decision is made. Moreover, if the decision is positive, it's normal to wait a few more quarters for implementation to move forward.

Each stressed that the opportunity cost is simply too high when many alternative channels are present that are open to a 'fast test/fast purchase' decision cycle. Today, their biggest issue is prioritizing their time/resources in an environment where they receive near-instant market feedback from traffic, trial and conversion statistics. Direct Enterprise sales (as opposed to Business Development) is being extracted from their company DNA.
The history of the trend away from the enterprise sale is easily traceable. It starts with free-trial CDs; it really picks up steam with downloadable enterprise software (salute to the WebLogic folks); it becomes downright mainstream with open source: Linux/Red Hat, JBoss, MySQL and the rest. And now we come to cloud computing: the final nail in the enterprise-sale coffin.
Cloud computing -- whether infrastructure-as-a-service, platform-as-a-service or software-as-a-service -- empowers developers, system admins and others in the rank and file to move quickly with their projects, select the easiest tools, and pay very little for them as they progress through the application lifecycle of development, testing, QA, staging, production and post-production (ongoing maintenance, upgrades and end-of-life). Because it does not involve large capital expenditures, and initially requires very little expenditure of any kind, budgetary approvals can usually be made at much lower levels.
It's an unstoppable trend, and many of the software start-ups I work with are realizing they urgently need to move to a SaaS model (if they're not already there).
Of course, the new model presents new challenges for both vendors and customers, but the benefits are so compelling that the shift is inevitable.
Posted on March 24, 2009 at 08:30 AM | Permalink | Comments (7) | TrackBack (3)
Todd Hoff at HighScalability.com published another excellent article, entitled Are Cloud-Based Memory Architectures the Next Big Thing?, chock-full of analysis, data, links, examples and references. IMHO, it's a must-read piece for developers and architects.
As I noted in the comments to Todd's post, in addition to the many benefits of memory-based architectures that Todd lists, there are also cost benefits in the cloud, which I discussed in Cloud Pricing and Application Architecture.
An excerpt from Todd's piece:
RAM = High Bandwidth and Low Latency
Why are Memory Based Architectures so attractive? Compared to disk, RAM is a high-bandwidth and low-latency storage medium. Depending on who you ask, the bandwidth of RAM is 5 GB/s. The bandwidth of disk is about 100 MB/s. RAM bandwidth is many hundreds of times faster. RAM wins. Modern hard drives have latencies under 13 milliseconds. When many applications are queued for disk reads, latencies can easily be in the many-second range. Memory latency is in the 5-nanosecond range. Memory latency is 2,000 times faster. RAM wins again.
RAM is the New Disk
The superiority of RAM is at the heart of the RAM is the New Disk paradigm. As an architecture it combines the holy quadrinity of computing:
Read the whole thing.
Posted on March 17, 2009 at 11:43 PM | Permalink | Comments (2) | TrackBack (0)
Security is consistently brought up as one of the main barriers to the adoption of cloud computing and one of the big challenges it presents. Issues such as security in virtualized environments, multi-tenant architectures and the cloud in general are often discussed with very little real-world understanding of the problems and the potential solutions.
So it's nice to discuss it with someone who actually knows what he's talking about, as James Urquhart and I do with Chris Hoff, the well-respected security expert and blogger, in Show #8 of our cloud computing podcast, Overcast.
Listen to it here.
Posted on March 13, 2009 at 04:23 PM | Permalink | Comments (1) | TrackBack (0)
UPDATE (10/2009): Amazon has updated their reserved instance pricing as of August 2009. Please see my revised numbers and spreadsheets here: Amazon Reserved Instances Update.
Amazon just announced that they are giving an option to pay upfront for Reserved Instances on EC2. See Amazon CTO Werner Vogels' blog for more details.
I set out to understand the economics of Reserved Instances. For that purpose, I created a calculator that lets you plug in the number of hours you expect to use the AMI during the year and then tells you how much money you will save or lose by using Reserved Instances versus pure On-Demand Instances. I share this calculator below as an embedded Zoho spreadsheet (if you cannot see it well, click on the full-screen link at the top right).
The way to use it is very simple. For each instance size, just plug in the number of hours you expect to use the instance during the year in the light-blue cells. You will then get the results for both 1-year and 3-year contracts in the corresponding gray cells. If the result is a positive number, that's how much you save by paying the upfront Reserved Instance fee. If it's negative, that's how much you'll lose.
The default number I put in the hours column, 8,760, is 24 x 365. So if, for example, you run a Large AMI non-stop for the entire year, you will save $1,153 by using Reserved Instances.
If you are curious about the break-even point, it is 4,643 hours annually for the 1-year fee and 2,381 hours annually for the 3-year fee. In other words, if you expect to run an instance for more than 4,643 hours during the coming 12 months (an average of about 12.7 hours a day), you're better off with Reserved; otherwise, stick with On-Demand.
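For those who want to check the math directly, here is a minimal Python sketch of the calculator's logic. The Large-instance prices are the March 2009 rates implied by the figures above (On-Demand $0.40/hour; 1-year reservation $1,300 upfront plus $0.12/hour; 3-year reservation $2,000 upfront) -- treat them as assumptions and verify against Amazon's current price list before relying on the output.

```python
# A minimal sketch of the Reserved vs. On-Demand break-even math.
# Prices are the March 2009 Large-instance rates implied by the post's
# own figures -- treat them as assumptions, not authoritative pricing.

ON_DEMAND_HOURLY = 0.40    # $/hour for an On-Demand Large instance
RESERVED_HOURLY = 0.12     # $/hour usage rate for a Reserved Large instance
UPFRONT = {1: 1300.00, 3: 2000.00}  # one-time fee by contract length (years)

def annual_savings(hours_per_year: float, years: int) -> float:
    """Dollars saved per year by reserving; negative means you overpaid."""
    on_demand = hours_per_year * ON_DEMAND_HOURLY
    reserved = UPFRONT[years] / years + hours_per_year * RESERVED_HOURLY
    return on_demand - reserved

def break_even_hours(years: int) -> float:
    """Annual usage at which both pricing models cost the same."""
    return UPFRONT[years] / years / (ON_DEMAND_HOURLY - RESERVED_HOURLY)

print(round(annual_savings(8760, 1)))   # ~1153: non-stop Large, 1-year term
print(round(break_even_hours(1)))       # ~4643 hours/year
print(round(break_even_hours(3)))       # ~2381 hours/year
```

The break-even ratio is the same for every instance size because Amazon scales the upfront fee and hourly rates proportionally, which is why a single pair of break-even numbers covers the whole price list.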
Without further ado:
Feel free to download the spreadsheet, and if you find any errors in it, please let me know.
There are two more important considerations you should take into account in any ROI calculation for this:
I admit I'm somewhat obsessed with cloud pricing lately, but for two good reasons. First, since one of the main value propositions of cloud computing is cost savings, I think it is important that, as an industry, we examine what the vendors are doing and make sure we're getting it right. Second, for my own selfish reasons: I am working on several projects where I need to figure out the economics of using the cloud, both for end customers (i.e., companies that want to run their apps in a cloud environment and need to figure out ROI) and for Platform-as-a-Service vendors who are leveraging EC2 under the hood.
Posted on March 12, 2009 at 03:42 AM | Permalink | Comments (7) | TrackBack (1)
I spoke to the folks organizing the Under the Radar event (April 24, Mountain View, CA), which is focused on cloud computing. They already have a very impressive line-up of companies presenting, but they are looking for more.
Here's the info the organizers sent me:
The current line-up of companies is quite impressive, with some of the hot up-and-comers out there, including Heroku, New Relic, Twilio, Sauce Labs, cTera, Zuora and others. The judges also look like an impressive bunch, with folks from big companies -- such as my buddy James Urquhart from Cisco, Joe Weinman from AT&T and Matthew Glotzbach from Google -- as well as prominent journalists and venture capitalists.
For those who just want to attend, I understand they are still selling tickets at the early bird price. You can find them here: http://www.acteva.com/booking.
Looking forward to the event and seeing you all there.
Posted on March 11, 2009 at 03:23 PM | Permalink | Comments (0) | TrackBack (0)
I have been giving a lot of thought lately to cloud pricing. As an adviser to companies from both sides of the issue -- cloud (IaaS and PaaS) providers and cloud users (and potential users) -- I've had an interesting perspective on the issue, which I will discuss in this and several future posts.
Here, I want to focus on how cloud pricing models (might) affect application architecture design decisions.
Even in traditional data centers and hosting services, software architects and developers give some consideration to the cost of the hardware and software licenses required to run their application. But more often than not, this is an afterthought.
Last May, Michael Janke published a post on his blog telling the story of how he calculated that a certain query for a widget installed in a web app -- an extremely popular widget -- cost the company $250,000, mainly in database servers and RDBMS licenses.
From my experience, companies rarely get down to the level of calculating the costs of specific features, such as a particular query.
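The arithmetic itself is simple; what's rare is doing it at all. Here is a back-of-the-envelope version of that kind of feature costing -- every number below is a hypothetical placeholder, not a figure from Janke's post:

```python
# Back-of-the-envelope feature costing in the spirit of Janke's exercise.
# Every number here is a hypothetical placeholder, not from his post.

db_servers = 10
annual_cost_per_server = 15_000   # hardware amortization + RDBMS license
widget_share_of_db_load = 0.40    # fraction of DB work caused by the widget query

widget_annual_cost = db_servers * annual_cost_per_server * widget_share_of_db_load
print(f"${widget_annual_cost:,.0f}/year attributable to one query")  # $60,000/year
```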
So while this kind of metric-based and rational approach is always advisable, things get even more interesting in the cloud.
In other words, whether you planned for it or not, your real-time bill from your cloud provider will scream at you if a certain aspect of your application is particularly expensive, and it will do so at a very granular level, such as database reads and writes. Any improvements you make will have a direct result: if you reduce database reads and writes by 10%, those charges will go down by 10%.
This is, of course, very different from the prevailing model of pricing by discrete "server units" in a traditional data center or dedicated hosting environment: if you reduce database operations by 10%, you still own or rent the same server, and the change has no effect on cost. Sure, if you have a very large application that runs on 100 or 1,000 servers, then such an optimization can yield some savings, and very large-scale apps generally do receive a much more thorough cost analysis -- but again, typically as an afterthought and not at such a granular level.
Another interesting aspect is that cloud providers may offer a different cost structure than that of simply buying traditional servers. For example, they may charge a proportionally higher rate for local disk I/O operations (Amazon charges $0.10 per million I/O requests to EBS) -- something that would barely factor into the decision when buying or renting discrete servers (whether physical or virtual).
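To see how quickly that granular metering adds up -- and how directly an optimization pays off -- here is a quick sketch using the EBS rate above. The request volume is invented for illustration:

```python
# Quick sketch of metered EBS I/O cost, using the $0.10 per million
# I/O requests rate mentioned above. The request volume is invented
# for illustration; plug in your own numbers.

EBS_RATE_PER_REQUEST = 0.10 / 1_000_000   # dollars per I/O request

def monthly_io_cost(requests_per_second: float) -> float:
    seconds_per_month = 60 * 60 * 24 * 30
    return requests_per_second * seconds_per_month * EBS_RATE_PER_REQUEST

baseline = monthly_io_cost(2_000)          # ~$518/month at 2,000 req/s
optimized = monthly_io_cost(2_000 * 0.9)   # after cutting reads/writes by 10%
print(f"${baseline - optimized:.2f}/month saved")  # ~$51.84 -- 10% off the line item
```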
Which brings us to the topic of this post. Cloud pricing models will affect architectural choices (or at least they should). Todd Hoff discussed this issue in a HighScalability.com post entitled Cloud Programming Directly Feeds Cost Allocation Back Into Software Design:
Now software design decisions are part of the operations budget. Every algorithm decision you make will have a dollar cost associated with it, and it may become more important to craft algorithms that minimize operations cost across a large number of resources (CPU, disk, bandwidth, etc.) than it is to trade off our old friends space and time.
Different cloud architectures will force very different design decisions. Under Amazon, CPU is cheap, whereas under [Google App Engine] CPU is a scarce commodity. Applications between the two niches will not be easily ported.
Todd recently updated this post with an example from a Google App Engine application in which:
So what architectural changes can you make to reduce costs on the cloud? Here's one example:
A while back I wrote a post about GigaSpaces and the Economics of Cloud Computing. GigaSpaces has been -- for those of you new to my blog -- my employer for the past 5 years. In that post I gave five reasons why GigaSpaces saves costs on the cloud, but what I discuss above adds a sixth one. Because GigaSpaces uses an in-memory data grid as the "system of record" for your application, it significantly reduces database operations (in some cases a 10-to-1 reduction). On AWS, this can significantly reduce EBS and other charges. It also happens to be good architectural practice. For more on that, see David Van Couvering's Why Not Put the Whole Friggin' Thing in Memory?
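As a rough illustration of why an in-memory system of record cuts metered costs, here is a toy read-through cache. This is plain Python, not the GigaSpaces API -- just the general pattern:

```python
# Minimal read-through cache sketch (illustrative only -- this is not
# the GigaSpaces API). Serving repeat reads from memory means fewer
# billable database/EBS operations, which is where the savings come from.

class ReadThroughCache:
    def __init__(self, db):
        self.db = db
        self.cache = {}
        self.db_reads = 0  # what the cloud provider would meter and bill

    def get(self, key):
        if key not in self.cache:       # miss: one metered database read
            self.cache[key] = self.db[key]
            self.db_reads += 1
        return self.cache[key]          # hit: served from RAM, no charge

db = {"user:1": "alice", "user:2": "bob"}
store = ReadThroughCache(db)
for _ in range(10):
    store.get("user:1")                 # 10 application reads...
print(store.db_reads)                   # ...but only 1 metered DB read
```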
Taking this approach as an example: it could have saved a significant portion of Michael Janke's $250,000 query off the cloud, and perhaps an even bigger proportion on the cloud.
If anyone has other ideas on how architectural decisions could affect costs on the cloud, please share them in the comments.
P.S. This post is another example of Why (and What) Every Business Exec Should Know About Cloud Computing.
Posted on March 10, 2009 at 09:26 AM | Permalink | Comments (11) | TrackBack (0)
Just did a trend search on job site Indeed.com for "cloud computing". Whoa.
Job postings are often a leading indicator of expected business activity, and this graph speaks for itself. Cloud computing is clearly more than hype when so many companies are hiring for cloud-related positions. It's also interesting to note some of the companies that show up when you run the search; you get a little insight into the plans of companies such as Dell, Yahoo, Intuit and VMware.
You can also subscribe to the cloud computing job search RSS feed.
Posted on March 04, 2009 at 12:08 AM | Permalink | Comments (11) | TrackBack (0)
James Urquhart and I recorded show #7 of Overcast, our podcast series on cloud computing. It's up and available here for your listening pleasure.
In this episode we have a discussion with Javier Soltero, CEO of Hyperic. Reposting the show notes:
Posted on March 03, 2009 at 10:47 PM | Permalink | Comments (0) | TrackBack (0)
Thinking Out Cloud is a blog about cloud computing and the SaaS business model written by Geva Perry.