Here I want to look at the more interesting world of low-latency. A further post will cover co-location.
This is a companion piece to What's a FIX network? Part one... and What is a FIX network, part three
Firstly - what does low-latency mean? A purist may suggest that it's all about being faster to market - receiving market data quicker than the next firm, processing it faster and getting an order to market faster. A cynic may conclude that there is a more untoward basis. This post will not cover that - a good starting point for the more numerate is Nanex, and for the more literate, Themis Trading.
So where did low-latency come from? Many years back (>10) I consulted to a bulge-bracket bank with a large equity derivatives group (EDG). They would take a derivatives position and then want to hedge that in the cash market. In a simple example, if a bank was trading futures on a liquid equity index it might want to take a position in the underlying instruments of the index. So - when the bank is pricing the derivative there's a relationship to the price of the underlying hedge. A bank would therefore seek to ensure that there is the smallest amount of slippage in price from point of decision to execution of the cash trades. The subject of minimising slippage now has its own area of expertise - Transaction Cost Analysis (TCA) - which will be covered in a further post.
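To make the slippage idea concrete, here is a minimal sketch (my own illustration, not any bank's methodology) of expressing slippage against the decision price in basis points:

```python
def slippage_bps(decision_price: float, fills: list[tuple[float, int]], side: str = "buy") -> float:
    """Slippage of the average executed price versus the decision price, in basis points.

    fills is a list of (price, quantity) executions for the hedge.
    A positive result means the execution was worse than the decision price.
    """
    total_qty = sum(qty for _, qty in fills)
    avg_px = sum(px * qty for px, qty in fills) / total_qty
    signed = (avg_px - decision_price) if side == "buy" else (decision_price - avg_px)
    return signed / decision_price * 10_000


# Example: decided to buy at 100.00, filled in three clips at slightly worse prices
print(slippage_bps(100.00, [(100.02, 5000), (100.03, 3000), (100.05, 2000)]))  # ~2.9 bps
```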
In those days a bank would have a comparatively low-bandwidth connection from its servers to the relevant stock exchange, so orders would hit the exchange in a dribble - a sub-optimal execution strategy. The better EDG groups in the bigger firms therefore invested money in systems to
- increase bandwidth to the exchange
- reduce execution slippage by improving code performance and design
One of the interesting points here is that for many of us the first time we saw what is now termed algorithmic execution was in the context of EDG rather than regular equities trading. The first auto-trading application I saw was a time-slicing VWAP algorithm used in a bank EDG group, which was later re-tasked for the bank's buy-side cash equity clients to use - initially as desk-directed flow and later as FIX-based order flow.
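For readers who have not met one, a time-slicing VWAP algorithm is conceptually simple: divide the parent order across time buckets in proportion to expected volume. The sketch below is my own simplification (the volume profile is an illustrative assumption, not anything from that bank's system):

```python
def vwap_slices(total_qty: int, volume_profile: list[float]) -> list[int]:
    """Split a parent order into per-bucket child quantities, in proportion
    to an expected intraday volume profile (e.g. one entry per 30-minute bucket)."""
    total_volume = sum(volume_profile)
    slices = [int(round(total_qty * v / total_volume)) for v in volume_profile]
    slices[-1] += total_qty - sum(slices)  # absorb any rounding error in the final bucket
    return slices


# Example: 100,000 shares against a U-shaped profile (heavier at the open and close)
profile = [0.18, 0.12, 0.09, 0.08, 0.08, 0.09, 0.10, 0.12, 0.14]
print(vwap_slices(100_000, profile))
```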
Increasing bandwidth is relatively simple in concept, but as many of us have learnt (the hard way) not all providers of bandwidth are equal. Indeed, if you ever get the chance it can be fascinating to compare the prices paid for similar bandwidth and the latency/jitter/service level of different providers. By fascinating, I mean painful when you realise that the big-name firm you have signed up with has terrible latency and/or jitter and/or SLA. This blog, while happy to discuss firms in a positive light, does not engage in "name and shame", so this silence protects the guilty. In the tradition of computer science textbooks, I leave it as an exercise for the reader to evaluate the performance of bandwidth providers...
This realisation - that not all bandwidth is created equal - spurred the creation of a newer breed of firms that provide bandwidth, typically as part of an overall product offering that includes risk management and managed services. The new generation includes firms such as
- Fixnetix
- Quanthouse - acquired by S&P
- FTEN - acquired by NASDAQ OMX
So what's different in terms of physical connectivity infrastructure? I'm not the right person to write up a full list of differences but I can summarise a few points:
- Customer focus
- Rapid delivery
- Concrete ability to measure latency and jitter using tools such as Corvil (a rough DIY sketch of the idea follows below)
If I were being cheeky I would also add that the newer firms are less likely to spend lots of money sponsoring conferences and talking loudly about their products, preferring instead to get on with delivering to clients...
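On the measurement point above: Corvil is appliance-based, but a crude do-it-yourself flavour of the same idea is to timestamp round trips to a test endpoint and report latency percentiles and jitter. The sketch below is a rough illustration only - the hostname and port are placeholders, and TCP connect time is only a proxy for what a proper network tap would measure:

```python
import socket
import statistics
import time


def probe_rtt(host: str, port: int, samples: int = 100) -> None:
    """Measure TCP connect round-trip times to a gateway and report latency and jitter.
    'host' and 'port' are placeholders - point this at your own test endpoint."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        rtts.append((time.perf_counter() - start) * 1000.0)  # milliseconds
        time.sleep(0.05)
    rtts.sort()
    print(f"min {rtts[0]:.3f} ms  median {rtts[len(rtts) // 2]:.3f} ms  "
          f"p99 {rtts[int(len(rtts) * 0.99)]:.3f} ms  jitter(stdev) {statistics.stdev(rtts):.3f} ms")


# probe_rtt("gateway.example.net", 9000)  # hypothetical endpoint
```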