vol 8, issue 6

High-frequency trading systems - speed vs regulation

18 June 2013
By Chris Edwards
[Images: a trader at work; graphs of high-frequency trading's share of equity trading volume since 2006; traders at work; Altera's Stratix V FPGAs. Captions: In August 2012 Knight Capital Group's technology issue cost it $440m - and its corporate independence. Despite the enthusiasm traders are showing for investing in HFT, the market is one of diminishing returns. If the financial arms race continues, will regulators still be able to keep up? High-end FPGAs such as Altera's Stratix V optimise trading platforms for faster HFT environments.]

In their obsessive pursuit of high-value market advantage, stock traders are pushing IT to its limits - and the lawmakers don't like it. New laws may aim to slow the systems down, but are regulators willing and able to acquire the technology they need to keep up with the City's need for speed?

Bogus messages, denial-of-service (DoS) attacks, and spam - lovely spam, wonderful spam... It may sound like everyday life on the Internet, but it's also what's happening in the supposedly highly regulated networks of financial exchanges, as traders engage and enrage each other in a souped-up, high-performance, computer-driven 24/7 battle for profit.

The two worlds collided in recent months when the Twitter feed of a news agency was hacked and began posting fake news. Stock prices plummeted for a brief period until traders realised what was really happening, but the potential for future spates of disinformation to cause wobbles in financial systems has now been proven.

US global financial services firm Knight Capital Group found out the hard way. In August 2012 the company's computers started making bizarre trading decisions, quadrupling the price of one company - Wizzard Software - as well as bidding up the price of much larger entities, such as General Electric. In a statement released afterwards Knight called the problem "a technology issue", but it was one that cost some $440m, forcing the firm to raise close to that amount from investors to stay in business before agreeing to a merger with international trading company Getco.

Knight Capital Group used computer models to decide when and how to trade stocks. In the autumn, the US Securities and Exchange Commission (SEC) launched an investigation to decide whether Knight tested its computer systems properly before loading code designed to talk to a new trading platform used by the New York Stock Exchange (NYSE). A couple of years earlier, algorithmic trading came under the spotlight when US stock prices suddenly plunged for no apparent reason. The so-called 'Flash Crash' (see http://bit.ly/eandt-flashcrash) saw blue-chip shares traded for a single cent.

Traders looked on in horror as the markets seemed to disintegrate before their eyes and scoured the newsfeeds to find some reason for the sudden change in fortune. Nothing had gone wrong except for the trading systems themselves - once order was restored, prices returned to previous levels.

Even now, no one can point a finger at exactly what went wrong: the answer seems to be emergent behaviour with several major contributory factors. The SEC and the Commodity Futures Trading Commission (CFTC) issued a report five months after the Flash Crash, which drew intense criticism from some quarters, not least because critics claimed the two organisations lacked the technological infrastructure to analyse what went wrong in any detail.

'Arms race' in fast trading

In September 2012 David Lauer, consultant at Better Markets, told a US Senate committee: "In the US equity market, we have seen first-hand glimpses of what can happen as overly complex systems interact in non-linear ways." A review by the CFTC argued that much of the blame lay with high-frequency trading (HFT) specialists: traders that use parallel computers - and sometimes dedicated hardware - to monitor the gushing feed of quotes and trades that exchanges generate, and then use patterns within it to make a huge number of very small, very fast trades. For HFT firms, processing latency is the foe in stock-and-options trading: the quicker trading algorithms see the trades, the faster they can react to them.
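
The shape of such a system can be sketched in a few lines. The snippet below is purely illustrative - the order-book-imbalance signal, the threshold and the order sizes are invented for this example rather than taken from any firm mentioned in the article - but it shows the basic reaction path: consume a quote update, compute a cheap signal, fire a small order, and measure how long the reaction took.

```python
import time
from collections import namedtuple

Quote = namedtuple("Quote", "symbol bid bid_size ask ask_size ts")

IMBALANCE_THRESHOLD = 0.6   # illustrative parameter, not from the article


def order_book_imbalance(q: Quote) -> float:
    """Fraction of displayed size sitting on the bid: above 0.5 hints at buying pressure."""
    total = q.bid_size + q.ask_size
    return q.bid_size / total if total else 0.5


def on_quote(q: Quote, send_order) -> None:
    """React to a single quote update, then measure the reaction delay."""
    signal = order_book_imbalance(q)
    if signal > IMBALANCE_THRESHOLD:
        send_order(q.symbol, "BUY", 100, q.ask)    # small, fast order at the touch
    elif signal < 1 - IMBALANCE_THRESHOLD:
        send_order(q.symbol, "SELL", 100, q.bid)
    latency_us = (time.perf_counter_ns() - q.ts) / 1_000
    print(f"{q.symbol}: signal={signal:.2f}, reaction latency {latency_us:.1f} microseconds")


if __name__ == "__main__":
    send = lambda sym, side, qty, px: print(f"  -> {side} {qty} {sym} @ {px}")
    # A few synthetic quote updates standing in for an exchange feed.
    for bid_sz, ask_sz in [(900, 100), (100, 900), (500, 500)]:
        on_quote(Quote("XYZ", 10.00, bid_sz, 10.01, ask_sz, time.perf_counter_ns()), send)
```

Production systems run this loop on dedicated hardware and kernel-bypass networking precisely because an interpreted loop like this would be far too slow; the point here is only the structure of the reaction path.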

The result has been something akin to an arms race built on computer and network hardware. Firms pay a premium to have their own high-speed computers sitting as close as possible to the trading venues in colocation facilities to minimise communications time, putting those using the standard 'consolidated' feed - which first combines the many different data streams provided by exchanges before the updates are relayed - at a disadvantage. Colocation and fast parallel-processing hardware let HFT systems act rapidly on any incoming news of trades.

Then, in 2010, a $4bn sale of futures contracts based on the Standard & Poor's 500 index apparently broke the system. The attempt to sell off a large position ran into trouble because there were not enough genuine buyers. Once the algorithms had bought into the contracts, it became clear that, with too few buyers, prices would fall, and they began to treat the bundle of contracts as a 'hot potato', bouncing contracts between each other in the hope of not being the last one holding them as the price crashed. The assumed link between the S&P 500 futures and the underlying stocks then dragged down numerous blue-chip stocks. Although HFT algorithms did not trigger the initial decline, the CFTC concluded they accelerated it to lightning speed.

In his testimony to the US Senate, Lauer pointed to abusive use of the data feeds through techniques known as 'quote stuffing'. It is a common and legal tactic to publish thousands of quotations in the hope of sniffing out another trader's position, quickly withdrawing them if there is no response. Perhaps surprisingly, traders are not penalised for offering up what are effectively 'fishing' exercises for transactions: they are charged for executed trades, not quotes, making the dummy quotes effectively free.

Quote stuffing goes further. It involves generating a huge volume of quotations on a given stock to "slow down traders with inferior technology", according to Lauer. Trading software firm Nanex called one day's unusual activity around Bank of America's stock, on 2 November 2012, the 'Denial of Service Algo'. From the morning until just after lunch, while quotations hovered around 1,000 to 2,000 per second, trading continued more or less normally. But there were long bursts in which close to 20,000 quotations spammed the exchange's real-time data feed, effectively blocking all but a few actual trades.
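
A surveillance system can spot this kind of burst with very little machinery. The sketch below is a toy illustration, not how Nanex or any exchange actually does it: it simply buckets quote timestamps into seconds and flags any second whose count dwarfs the normal background rate of 1,000 to 2,000 quotes per second cited above.

```python
from collections import Counter
import random

NORMAL_RATE = 2_000       # quotes per second for most of the day, per the figures above
BURST_RATE = 20_000       # the order of magnitude reached during the flagged bursts


def flag_quote_stuffing(quote_timestamps, threshold=10 * NORMAL_RATE):
    """Bucket quote arrival times (seconds) and flag seconds far above the background rate."""
    per_second = Counter(int(ts) for ts in quote_timestamps)
    return [(sec, n) for sec, n in sorted(per_second.items()) if n >= threshold]


if __name__ == "__main__":
    random.seed(1)
    # Synthetic feed: two normal seconds, then one second spammed with ~20,000 quotes.
    feed = [0 + random.random() for _ in range(1_500)]
    feed += [1 + random.random() for _ in range(1_800)]
    feed += [2 + random.random() for _ in range(BURST_RATE)]
    for sec, n in flag_quote_stuffing(feed):
        print(f"second {sec}: {n} quotes - possible quote stuffing")
```

Real surveillance is subtler than a single per-second threshold, but the quote rate is the obvious first symptom to monitor.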

Nanex has published numerous incidents of what its people reckon are mini flash-crashes - sudden irregularities in trading caused by HFT. However, in a 2011 Foresight report for the Department for Business, Innovation and Skills (BIS), Terrence Hendershott of the Haas finance group at the University of California, Berkeley argued that the 2010 Flash Crash was mild compared to the Black Monday meltdown of 1987. "One can assert that October 1987 would have been a brief blip," Hendershott wrote, "if HFT had been prevalent [at the time]".

With HFT identified as the problem, legislators are beginning to act, as the recent US Senate hearings show. Across the Atlantic, MEPs in the European Parliament voted for a bill that could curb the more abusive forms of algorithmic trading. The measure restricts the speed with which orders can be withdrawn from the market: they would have to remain on the system for more than 100ms. Mao Ye, Chen Yao, and Jiading Gai, researchers at the University of Illinois at Urbana-Champaign, published a paper last August arguing that a minimum order life of just 50ms would be a reasonable limit that would not affect the cost of trading. In their research they found that up to 40 per cent of orders are cancelled in less than 50ms.
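
The effect of such a rule is easy to quantify if you have order lifetimes to hand. The fragment below uses made-up lifetimes purely to show the calculation behind figures like the Illinois team's "40 per cent cancelled in less than 50ms"; it is not their dataset or their code.

```python
def share_cancelled_within(lifetimes_ms, limit_ms=50):
    """Share of cancelled orders whose quoted life was shorter than limit_ms."""
    short_lived = sum(1 for t in lifetimes_ms if t < limit_ms)
    return short_lived / len(lifetimes_ms)


if __name__ == "__main__":
    # Hypothetical cancellation lifetimes in milliseconds - not real market data.
    lifetimes = [2, 5, 12, 30, 45, 60, 80, 120, 400, 1500]
    for limit_ms in (50, 100):
        share = share_cancelled_within(lifetimes, limit_ms)
        print(f"{share:.0%} of cancellations happened within {limit_ms} ms")
```

Under a minimum resting time, every order in that short-lived bucket would either have to stay exposed for longer or not be sent at all - which is exactly the behaviour the proposed rules are trying to force.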

However, the EU legislation, if implemented, will not stop rapid-fire trading, just attempt to rein it in a little.

"Technology that was pioneered by HFT firms is now becoming mainstream," explained Dushyant Shahrawat, senior research director of financial services intelligence provider TowerGroup in a recent seminar. "Large firms, usually mutual firms, are basically trying to benchmark themselves against what is cutting edge. They have been looking at what HFT firms have been doing".

People such as Andrew Brooks, head of US equity trading at financial adviser T Rowe Price Associates, argue that co-location has created an uneven playing field that "favours those who can and will pay for it". Others argue that the advantage of colocation with exchanges is fast diminishing. Successful trading strategies generally wind up unsuccessful because too many jump on the bandwagon.

Networks of influence

According to Tabb Group's analysis, the profitability of HFT strategies peaked in 2009 at $7.2bn, when HFT trades accounted for 61 per cent of market volume, before dropping to $1.8bn in 2012 on only a 10 per cent decline in HFT's share of trading volume. To extend the reach of HFT, computers increasingly use trading signals from exchanges thousands of miles apart, and an increasingly diverse range of data, to eke out a trading advantage through statistical arbitrage. To be able to react faster, financial users are clamouring for low-latency connections. At the speed at which the software attempts to operate, the speed of light itself is a significant impediment.

To link the two main financial centres in the US - Chicago and New York - Spread Networks dug its own trench, "the straightest and shortest route possible". The company claims it cut round-trip latency to 13.1ms. The 825-mile trench contains fibre-optic cable that lets trading houses, unsatisfied with the latency the existing telecom infrastructure could give them, link their computers together. Even after completing the initial connection, in late 2012 the company shortened the route further in places to shave off another 100µs.
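
A back-of-the-envelope calculation shows why shaving microseconds means digging trenches rather than tuning software. Light in glass fibre travels at roughly two-thirds of its vacuum speed, so route length alone sets a hard floor on latency; the sketch below uses the 825-mile route quoted above and a typical fibre refractive index (an assumed value, not one given in the article).

```python
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47           # typical refractive index of optical fibre (assumed)
ROUTE_MILES = 825            # length of the Spread Networks route quoted above
KM_PER_MILE = 1.609344

route_km = ROUTE_MILES * KM_PER_MILE
fibre_speed_km_s = C_VACUUM_KM_S / FIBRE_INDEX       # roughly 204,000 km/s in glass
round_trip_ms = 2 * route_km / fibre_speed_km_s * 1_000

print(f"Route length:  {route_km:,.0f} km")
print(f"Signal speed:  {fibre_speed_km_s:,.0f} km/s in fibre")
print(f"Round trip:    {round_trip_ms:.1f} ms (propagation only)")
# Prints roughly 13 ms - close to the 13.1 ms claim once equipment delays
# and the exact route length are taken into account.
```

Propagation delay scales directly with distance, which is why the remaining gains have to come from straightening or shortening the physical path.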

The next move may be offshore. Researchers at the Massachusetts Institute of Technology (MIT) came up with a number of ideal computing locations around the world based on their relative proximity to trading venues; some lay in the North Atlantic and the Indian Ocean. Oil rigs that have emptied their wells could find new uses as homes for trading supercomputers.

Even then, at some point, the latency advantage runs out. Tradeworx CEO Manoj Narang says: "The latencies in high-performance trading are rapidly approaching zero. There will be differentiation [between trading strategies], but speed is becoming less of a differentiator. The development of novel trading signals will matter more and more."

The result is likely to be an explosion of data, as if there were not a surfeit of feeds to process already. Today's financial systems have to move and store massive amounts of data. Narang says the feeds his company analyses for its own trading system "generate around 500GB per day of raw data and 120GB of compressed data, with another 120GB from proprietary trading logs. You can't analyse the compressed data directly: you need to pre-process. So we generally expend another 200GB of storage on that task. None of this includes backing up our data or snapshotting data. High-frequency trading strategies have to examine massive amounts of data to generate predictive signals".

The systems will not just rely on the feeds generated by the LSE, NYSE and their peers. Even your tweets will be read by tomorrow's algorithms to determine whether to buy or sell, and for how much. Changes in sentiment expressed on social media could be enough to trigger sudden changes in price. It is unclear whether the use of many more signals to determine when to trade will increase volatility.
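
The basic idea of a social-media signal is simple, even if production systems are anything but. The toy below is invented for illustration - the word lists, weights and trigger level are not from any real trading system - and shows how a stream of posts might be reduced to a crude bullish or bearish score.

```python
# Word lists and the trigger level are invented for illustration only.
POSITIVE = {"beat", "surge", "upgrade", "record", "strong"}
NEGATIVE = {"miss", "plunge", "downgrade", "fraud", "explosion", "weak"}


def sentiment_score(text: str) -> int:
    """Crude bag-of-words score: +1 per positive word, -1 per negative word."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)


def signal_from_posts(posts, trigger=3):
    """Aggregate scores over recent posts; emit a direction only on a strong swing."""
    total = sum(sentiment_score(p) for p in posts)
    if total >= trigger:
        return "bullish"
    if total <= -trigger:
        return "bearish"
    return "neutral"


if __name__ == "__main__":
    sample = [
        "Breaking: explosion reported near major exchange",
        "Analysts downgrade outlook after weak results",
        "Shares plunge in early trading",
    ]
    print(signal_from_posts(sample))   # -> "bearish"
```

A signal this naive is exactly what makes markets vulnerable to the kind of disinformation the hacked news-agency feed produced: the algorithm has no way of knowing whether the 'explosion' is real.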

The April 2013 hack on the Associated Press Twitter feed triggered a 140-point drop in the Dow Jones index. Although prices recovered quickly, the episode showed how sensitive markets are to information disturbances - and this in markets that create volatility simply from their own internal information feedback loops. According to Jean-Philippe Bouchaud, chairman of Capital Fund Management and professor of physics at the École Polytechnique, just 5 per cent of the movements in stock prices today can be attributed to news of events outside the markets; the vast majority of jumps are caused by feedback loops within the system. Bouchaud argues in his research that the market structure itself is fragile because liquidity - the ability to satisfy both buyers and sellers - disappears so readily, as it did in May 2010.

Even if HFT itself is brought under control, unforeseen circumstances could still have a devastating impact on algorithms, the companies operating them, and the taxpayers. "Automated non-HFT trading [is] more of a concern for stability as, at times, non-HFT [strategies] attempt to build or liquidate large positions quickly, something that occurred on 6 May 2010," wrote Haas's Terrence Hendershott. He pointed to the collapse of Long-Term Capital Management more than a decade ago and the events of August 2007 - the 'Great Quant Meltdown' - as examples of situations where highly profitable strategies suddenly go into reverse because market conditions change.

To try to avoid unwittingly damaging the markets, and to avoid running up massive debts when markets behave oddly, Manoj Narang says Tradeworx uses Amazon's cloud servers to test algorithms against historical trading data. "For high-frequency trading strategies we do not launch into production until we simulate and back-test them," he explains. "And you want to run strategies against real-world conditions. Our exchange market simulator is used to replicate the experience of the market system and all the regulations that go along with it, as well as the latencies of transferring data. We simulate all our trading strategies on this platform before they go into production, not just for normal operation but for how they would fare under anomalous conditions."
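
In outline, a back-test of this kind replays historical ticks through the candidate strategy while imposing an artificial delay between seeing a price and trading on it. The sketch below is a minimal, assumed version of that idea - it is not Tradeworx's simulator, and the toy momentum strategy and synthetic data exist only to make it runnable.

```python
from dataclasses import dataclass


@dataclass
class Tick:
    ts_ms: int      # exchange timestamp, milliseconds
    price: float


def backtest(ticks, strategy, latency_ms=2):
    """Replay historical ticks through a strategy, delaying each decision by a
    fixed simulated latency so fills happen at the later market price."""
    position, cash = 0, 0.0
    prices = {t.ts_ms: t.price for t in ticks}
    for t in ticks:
        side = strategy(t)                      # +1 buy, -1 sell, 0 do nothing
        if side == 0:
            continue
        fill_ts = t.ts_ms + latency_ms          # the order reaches the market late
        fill_price = prices.get(fill_ts, t.price)
        position += side
        cash -= side * fill_price
    return cash + position * ticks[-1].price    # mark any open position to the last price


def make_momentum():
    """Toy strategy: buy when the price ticks up, sell when it ticks down."""
    last = {"price": None}

    def decide(tick):
        prev, last["price"] = last["price"], tick.price
        if prev is None:
            return 0
        return 1 if tick.price > prev else -1 if tick.price < prev else 0

    return decide


if __name__ == "__main__":
    data = [Tick(ts, 100 + 0.01 * (ts % 7)) for ts in range(0, 200, 2)]
    for lat in (0, 2, 10):
        pnl = backtest(data, make_momentum(), latency_ms=lat)
        print(f"latency {lat:>2} ms -> simulated P&L {pnl:+.2f}")
```

Even in a toy like this, the simulated result shifts as the assumed latency changes - which is the whole argument for modelling latency, regulation and anomalous conditions before a strategy ever touches real money.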

The missing link here is to the regulator, says Tradeworx's Narang - but that is set to change: "The other big criticism of high-frequency trading is that regulators cannot police the market because they can't see the market the way technically advanced traders can. We've thought of a way to do that: contract with the SEC to give the regulators access to those systems."

The question facing regulators is whether, if the financial arms race continues at its current pace, they will still be able to keep up. As financial trading systems incorporate more data from the web, they become more vulnerable to the attacks that plague the Internet. With a clear profit motive, such attacks are likely to be far more subtle than the AP Twitter hack. Rather than target a single high-profile account, hackers might mount more widespread, low-level attacks that try to push share prices in one direction, cashing in when order is restored.


Technology briefing: Want to maximise profitability? Just optimise everything...

For many in the world of algorithmic trading and quantitative analysis, the weapon of choice is an array of blade servers packed with Intel Xeons or AMD Opterons. Much of that compute power sits in colocation facilities, although some companies, such as Tradeworx, augment their processing with servers for rent in the Amazon cloud.

Some companies operate standard high-performance blade servers; others are turning to custom hardware to try to bring processing and communications latencies down. Programmable-logic devices could provide the answer: they can reduce the delay processors suffer in shuffling data to and from memory, and can be reconfigured to perform complex arithmetic in a single cycle. Languages such as OpenCL, which let developers write highly parallel code that can be mapped onto graphics processors or field-programmable gate arrays (FPGAs) - devices able to change their logic circuits on the fly - provide a route away from conventional compute servers. Steve Dodsworth, sales director EMEA at programmable-logic device maker Altera, says: "It is early days. We are now meeting with senior people in the Square Mile about using OpenCL and FPGAs around servers. It's OpenCL that's doing it... The quantitative-analysis people are being educated by the Intel people to be hardware aware. To get the most of the Intel architecture they realise they have to be aware of the hardware bottlenecks."
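
To give a flavour of the programming model, the sketch below uses the pyopencl Python bindings (chosen here purely for illustration - the article does not name a specific toolchain) to run a trivial element-wise kernel that turns bid and ask arrays into mid-prices. The same kernel source could, in principle, be built for a GPU or, with vendor tools, for an FPGA board; this is a sketch of the OpenCL model, not a tuned trading component.

```python
# Minimal OpenCL sketch using the pyopencl bindings: an element-wise kernel
# computing mid-prices from bid/ask arrays. Requires an installed OpenCL
# runtime (CPU, GPU or FPGA board) to run.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void midprice(__global const float *bid,
                       __global const float *ask,
                       __global float *mid) {
    int i = get_global_id(0);          // one work-item per price pair
    mid[i] = 0.5f * (bid[i] + ask[i]);
}
"""

bids = np.random.uniform(99.0, 100.0, 1 << 20).astype(np.float32)
asks = bids + np.float32(0.01)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

bid_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=bids)
ask_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=asks)
mid_buf = cl.Buffer(ctx, mf.WRITE_ONLY, bids.nbytes)

program = cl.Program(ctx, KERNEL).build()
program.midprice(queue, bids.shape, None, bid_buf, ask_buf, mid_buf)

mids = np.empty_like(bids)
cl.enqueue_copy(queue, mids, mid_buf)
print("first few mid-prices:", mids[:3])
```

On an FPGA the attraction is that the kernel can become a dedicated hardware pipeline, avoiding the memory shuffling described above.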

However, Intel is offering an approach that plots a course between standard processors and more esoteric hardware architectures with the Xeon Phi - a massively parallel version of its processor architecture developed for scientific computing applications.

Network equipment vendors have also reacted to the need for higher performance and lower latency. Companies such as Arista Networks, Gnodal, and Juniper Networks have built network switches that move packets of data around networks with shorter delays than conventional designs. Even the network protocol stack itself is a target for optimisation: Solarflare has a TCP/IP stack that streamlines the movement of data from the network into the computer.

Briefing: what are the City regulators' options?

1 Slam the brakes on. 

If markets react more slowly to events, and traders are prevented from placing enough orders in a short space of time to send them into death spirals, some of the worst problems could be avoided. Determined traders may still find other techniques, such as launching orders in many different venues, or from multiple sources, that comply with the rules but preserve some of the advantage they have now.

2 Watch and regulate abuses more closely. 

Instead of regulating HFT as a whole, the best approach may be to watch for practices that cause problems, and then devise rules that attempt to stop them. However, in a highly chaotic system, adding complex rules may yet have unintended consequences.

3 Change the market structure itself. 

This recognises that problems with HFT have largely been localised to the US, where a complex network of trading venues may have exacerbated volatility.
