The cover of InformationWeek's April 23 issue reads "Extreme Transactions," referring to a story entitled "Business At Light Speed":
Wall Street's attempt to shave milliseconds off transactions pushes the limits of computer science.
I won't go into the article in great depth, but it basically discusses the trend of the past few years: Wall Street firms on both the buy and sell side have unprecedented needs for low-latency trade execution and market data access, driven by regulatory changes, the proliferation of electronic trading, and the advent of algorithmic trading. So you now have thousands and thousands of machines buying and selling stocks and other securities from other machines, based on extremely complex (and automated) computer models. It has become a latency game -- low latency, that is.
The IW piece focuses on what firms have been doing at the network infrastructure level, such as moving their data centers physically closer to the exchanges and establishing direct access to market data rather than going through third parties such as Reuters and Bloomberg -- all in a grand effort to shave off a few milliseconds.
It's a good story, but one bit especially caught my eye:
Therein lies the rub for ultrafast trading: Once you hit physical limits to data-transmission speeds, where do you go from there? "If anybody knows how to get a signal transmitted faster than the speed of light, I'd like to talk with them," says Cummings.
There are two schools of thought on this issue. One is that traders, exchanges, and brokers must shave latency from other parts of the system--in the applications they use, for instance--and that the race will continue.
This is already happening. It's why many Wall Street customers are looking at solutions such as GigaSpaces for their applications and turning away from existing approaches such as J2EE -- an attempt to squeeze every last bit of latency out of the applications themselves, and it's a growing trend.
Those of you who follow this blog or the GigaSpaces Blog know how we achieve such extreme low latency with Space-Based Architecture:
- Access and process data and events locally and in-memory
- Collapse the tiers, thereby eliminating network hops
- Co-locate services with shared memory
- Scale-out by adding more processing units, as each is self-sufficient (Shared-Nothing Architecture)
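To make that concrete, here is a minimal, framework-free sketch in plain Java of what a self-sufficient processing unit looks like: the data, the events, and the business logic live together in one JVM, so a decision is made against in-memory state with no network hop. The class and method names are purely illustrative, not GigaSpaces' actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a self-sufficient "processing unit" (shared-nothing):
// data, events, and business logic are co-located in one JVM, so a trading
// decision is made against in-memory state with no network hop or tier crossing.
public class ProcessingUnit {

    // In-memory partition owned exclusively by this unit.
    private final Map<String, Double> lastPriceBySymbol = new ConcurrentHashMap<>();

    // Market-data updates and order decisions run next to the data,
    // so both complete at in-process, in-memory latency.
    public void onMarketData(String symbol, double price) {
        lastPriceBySymbol.put(symbol, price);
    }

    public boolean shouldBuy(String symbol, double limitPrice) {
        Double last = lastPriceBySymbol.get(symbol);
        return last != null && last <= limitPrice;
    }

    public static void main(String[] args) {
        ProcessingUnit unit = new ProcessingUnit();
        unit.onMarketData("ACME", 101.25);
        System.out.println(unit.shouldBuy("ACME", 102.00)); // true, decided entirely in-memory
    }
}
```

Scaling out then simply means running more of these units side by side, each owning its own slice of the data.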
We're seeing this approach widely adopted on Wall Street (and in other industries such as telecom) with solutions such as GigaSpaces, and in very demanding web applications such as Flickr, Google, and Twitter, which use various technologies -- memcached, MapReduce -- that all amount to roughly the same idea of caching and partitioning. BTW, for a great blog post summarizing a lot of these architectures, see this from Peter Van Dijck.
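For the partitioning half of that idea, here is a rough sketch of memcached-style client-side sharding: hash each key to exactly one node, so every read and write for that key lands on the same partition. The node names and the simple modulo scheme are illustrative assumptions; real clients typically use consistent hashing so that adding or removing a node remaps as few keys as possible.

```java
import java.util.List;

// Rough illustration of key-based partitioning as used by memcached-style clients:
// each key hashes to exactly one node, so all traffic for that key hits one partition.
public class KeyPartitioner {

    private final List<String> nodes;

    public KeyPartitioner(List<String> nodes) {
        this.nodes = nodes;
    }

    public String nodeFor(String key) {
        int bucket = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(bucket);
    }

    public static void main(String[] args) {
        KeyPartitioner p = new KeyPartitioner(List.of("cache-1", "cache-2", "cache-3"));
        System.out.println(p.nodeFor("MSFT")); // the same key always routes to the same node
    }
}
```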
Extreme Transaction Processing and Extreme Analytics Processing are growing areas and we're going to see interesting developments in the coming months and years.
UPDATE: Nati Shalom blogs on the topic over at the GigaSpaces Blog. For some reason the URL in Nati's comment below isn't working, but you can find his post here.