Studienreihe der Stiftung Kreditwirtschaft an der Universität Hohenheim
Edited by Prof. Dr. Hans-Peter Burghof
Volume 49

Arne Breuer

An Empirical Analysis of Order Dynamics in a High-Frequency Trading Environment

Verlag Wissenschaft & Praxis
Bibliographic information of the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available online at http://dnb.dnb.de.

ISBN 978-3-89673-635-2
© Verlag Wissenschaft & Praxis Dr. Brauner GmbH 2013
D-75447 Sternenfels, Nußbaumweg 6
Tel. +49 7045 930093, Fax +49 7045 930094
[email protected], www.verlagwp.de
Printing and binding: Esser Druck GmbH, Bretten

All rights reserved. This work, including all of its parts, is protected by copyright. Any use outside the narrow limits of the German Copyright Act is prohibited and punishable by law without the publisher's consent. This applies in particular to reproduction, translation, microfilming, and storage and processing in electronic systems. Printed in Germany.
Preface

This thesis was written during my time as a research assistant and doctoral student at the Chair of Banking and Financial Services at the University of Hohenheim. The Institute of Financial Management accepted the thesis, submitted in December 2011 under the same title, as a dissertation for the degree of Doctor of Economics (Dr. oec.).

This work would not have been possible without the support of the Stiftung Kreditwirtschaft; I would like to thank its supporting members for their remarkable support of the chair. I thank Prof. Dr. Hans-Peter Burghof for giving me the freedom and the support to write this thesis, Prof. Dr. Dirk Hachmeister for preparing the second assessment, and Prof. Dr. Bertram Scheufele for chairing the examination committee of my colloquium. My particular thanks go to the Nasdaq Stock Exchange, without whose generous provision of its order book protocols this work could not have been written.

I would further like to thank my parents and my siblings, on whom I could always rely and still can. Many companions made my time in Stuttgart unforgettable, of whom I can name only a few; I especially thank Ulli Spankowski, Oliver Sauter, Felix Geiger, Francesco Riatti, Stefan Haffke, Christian Däubel, Daniel Schwarz, Dirk Britzen, Jochen Winkler, Christian Creutz, and Alexander Dannenmann. My thanks also go, of course, to all those who made my time at the chair so enjoyable and supported me in my work, especially Barbara Flaig, Dirk Sturz, Katharina Nau, Steffen Kirsch, Sebastian Schroff, Barbara Speh-Freidank, Jutta Schönfuß, Julian Stitz, Felix Prothmann, Miriam Gref, Timo Johner, Thomas Rival, and Nico Hettich. Beyond Stuttgart and Hohenheim, Jens Kück, Gabriel Thebolt, Caitriona Kinsella, Deirdre Gallagher, Kathryn O'Mahony, Aoife Wallace, Maeve O'Connor, James Gaffney, Thomas Konrad, and Arnoud Dammers shall not go unmentioned; danke, thank you, dank je wel, and go raibh maith agat.
Wustrow, October 2012
Arne Breuer
Contents

List of Figures
List of Tables
List of Abbreviations
List of Variables

0 Introduction

1 Literature Overview
  1.1 Introduction
  1.2 Market Microstructures
      1.2.1 Quote-Driven Markets
      1.2.2 Order-Driven Markets
  1.3 Algorithmic Trading
  1.4 Conclusion

2 Order Microstructure
  2.1 Introduction
  2.2 Previous Work
  2.3 Empirical Evidence from NASDAQ
      2.3.1 Methodology and Data
      2.3.2 Add Order → Delete Order
      2.3.3 Add Order → Add Order
      2.3.4 Delete Message → Add Message
  2.4 Conclusion

3 Order Nanostructure
  3.1 Introduction
  3.2 Literature Review
  3.3 Dataset
  3.4 Empirical Results
      3.4.1 Limit Order Lifetimes
      3.4.2 Order Revision Times
      3.4.3 Inter-Order Placement Times
  3.5 Conclusion
  3.A Appendix

4 Conclusion

Bibliography
List of Figures

1.1  Algorithmic trading on the Deutsche Börse, 2002–2008
1.2  Forms of algorithmic trading
2.1  Exemplary histograms of limit order lifetimes
2.2  Boxplots of 'special' times of limit order lifetimes
2.3  Exemplary histograms of inter-order placement times
2.4  Boxplots of 'special' inter-order placement times
2.5  Exemplary histograms of order-revision times
2.6  Relative share of order-revision times
3.1  Limit order lifetimes on the nanoscale
3.2  Cumulative average share of limit order lifetimes
3.3  Comparison of order lifetimes of stocks and ETFs
3.4  Average limit order lifetimes on 6 May 2010
3.5  Order-revision times on the nanoscale
3.6  Average order-revision times for stocks and ETFs
3.7  Order revision times over the trading day
3.8  Average share of revision times in different intervals on 6 May 2010
3.9  Exemplary histograms of inter-order placement times on the nanoscale
3.10 Cumulated average inter-order placement times for different intervals
A.1  Average limit order lifetimes of stocks and ETFs without non-trading hours
List of Tables

2.1  Ticker symbols for the empirical analysis in Chapter 2
2.2  Proxies to analyse high-frequency trading
2.3  Results for test of significance of peaks in limit order lifetimes
2.4  Regression results of the share of limit order lifetimes
2.5  Results of tests of significance of peaks at 'special' inter-order placement times
2.6  Significance of peaks of order-revision times at 'special' times
3.1  Ticker symbol, type, name, and number of limit orders of the stocks and ETFs used in the empirical analysis
3.2  Regression results of the scale parameter β0
List of Abbreviations

ASX     Australian Stock Exchange
ATF     Alternative Trading Facility
ATP     Automated Trading Program
BBO     Best Bid and Offer
CFTC    Commodity Futures Trading Commission
CIC     Constant Initial Cushion
EC      European Commission
ECN     Electronic Communication Network
ETF     Exchange-Traded Fund
FINRA   Financial Industry Regulatory Authority
GDP     Gross Domestic Product
HFT     High-Frequency Trading
IBEX    Iberian Exchange
MiFID   Markets in Financial Instruments Directive
MFT     Multilateral Trading Facility
ms      Millisecond(s) (i.e., 10^-3 seconds)
µs      Microsecond(s) (i.e., 10^-6 seconds)
NASDAQ  National Association of Securities Dealers Automated Quotations
NAV     Net Asset Value
ns      Nanosecond(s) (i.e., 10^-9 seconds)
NYSE    New York Stock Exchange
OLS     Ordinary Least Squares
RegNMS  Regulation National Market System
s       Second(s)
SEC     United States Securities and Exchange Commission
SWX     Swiss Stock Exchange
TAQ     Trades And Quotes
TWAP    Time-Weighted Average Price
VSE     Vancouver Stock Exchange
VWAP    Volume-Weighted Average Price
List of Variables

α       Intercept parameter
β       Regression parameter
β0      Scale parameter of Weibull distribution
δ       Dummy variable
d       Estimation result for jump dummy
ϵ       Error term
f(·)    Limit order risk function
F       Probability that the last price of the trading window is equal to or below Plim, conditional on the arrival of a liquidity trader
g(·)    Goodness-of-fit function
i       Probability of the arrival of an informed trader
I       Indicator for the arrival of an informed trader
j       Probability of the arrival of a liquidity trader
Lt      Order lifetime of t {milli-|micro-|nano-}seconds
n       Some value ∈ N
N       Size of dataset
p       Shape parameter of Weibull distribution
φ       Fit parameter for exponential distribution
Plim    Limit price
P0      Current price of the security
Plast   Last price of the security
Rt      Revision time of t {milli-|micro-|nano-}seconds
S(·)    Survival function
t       Time variable
t0      Initial time
td      Time of order deletion
tp      Time of order placement
tmax    Length of longest interval
T       Time variable
U       Indicator for the arrival of a liquidity trader
x       Some value ∈ R
x60     Some value ∈ {60, 120, 180, …}
Chapter 0

Introduction

Over the last decades, financial markets have changed drastically, and not only once. In fact, two or three major changes have radically altered the way assets are traded. In their early days, many stock exchanges were Walrasian markets with more or less fixed points in time at which the market-clearing price was determined in an auction; trading was therefore sequential. Because market participants performed the price discovery in person at the stock exchange, this market structure is called a call market. It gave way to the dealer market, where appointed market makers continuously quote prices at which they are willing to buy or sell stocks during the opening hours of the stock exchange. Some major stock exchanges changed again in the 1990s into order-driven markets. This means that individuals transmit their intention to trade by stating a price and a quantity for a specific stock. If no other trader executes the order, the stock exchange inserts it into its limit order book, and the trader can then choose between deleting the order and waiting for its execution. In the late 1990s and early 2000s, electronic communication networks (ECNs) introduced high-speed open limit order books that promoted the increased use of a new market participant: algorithmic trading. Today, the
various forms of algorithmic trading are important determinants of trading. Because algorithmic trading is a huge topic for today's stock exchanges and market participants, this thesis aims to help investigate this clandestine actor.

Algorithmic traders are computer programs with cash or assets that trade independently of human interaction. The first algorithmic trading engines appeared in the second half of the 1980s. These rather simple algorithms followed pairs of companies whose share prices had developed similarly in the past. If the prices began to differ significantly, the algorithm bought (went long in) the relatively lower-priced stock and sold (went short in) the relatively higher-priced stock. With this strategy, it bet that the spread between the two stock prices would decrease to its usual level (Pole, 2007). In today's markets, the range of sophistication of these algorithms is much greater, and there is a broad diversity of algorithmic trading strategies. For example, rather simple programs work off larger positions of shares with minimal market impact and base their trading strategy on benchmarks, for example the volume-weighted average price (VWAP). More advanced programs look for arbitrage opportunities or produce unmarketable limit orders to benefit from order-rebate programmes. Other algorithms try to extract other algorithmic trading strategies from the order flow in order to profit by front-running them. To avoid becoming victims of front-running, creators of algorithms take great care to be as stealthy as possible; for other parties, the underlying models ought to remain in a black box. The algorithms absorb all kinds of market data, process them in an undisclosed way, and produce limit orders. This makes a direct measurement of the extent of algorithmic trading difficult if not impossible; algorithmic orders do not look any different from human orders in order book data. Estimates of algorithmic trading in the US range from 50 to 70 per cent of total market activity. These figures are diverse because no hard numbers exist and because they sometimes mean different things altogether: there is a difference between total order flow, traded volume, number of trades, and other factors.
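The pairs-trading logic sketched above can be made concrete with a small, purely illustrative example. The rolling z-score window, the entry and exit thresholds, and the synthetic price series below are assumptions chosen for demonstration; they are not taken from this thesis or from Pole (2007).

```python
import numpy as np

def pairs_signal(price_a, price_b, window=60, entry_z=2.0, exit_z=0.5):
    """Return +1 (long A / short B), -1 (short A / long B), or 0 for each period."""
    spread = np.asarray(price_a) - np.asarray(price_b)
    signal = np.zeros(len(spread))
    position = 0
    for t in range(window, len(spread)):
        hist = spread[t - window:t]
        z = (spread[t] - hist.mean()) / hist.std()
        if position == 0 and z > entry_z:
            position = -1   # spread unusually wide: short the expensive leg A, buy B
        elif position == 0 and z < -entry_z:
            position = +1   # spread unusually narrow: buy A, short B
        elif position != 0 and abs(z) < exit_z:
            position = 0    # spread back at its usual level: close the position
        signal[t] = position
    return signal

# Synthetic co-moving prices, purely for demonstration.
rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(0.0, 0.5, 1_000))
a = 100.0 + common + rng.normal(0.0, 0.5, 1_000)
b = 100.0 + common + rng.normal(0.0, 0.5, 1_000)
print(pairs_signal(a, b)[-5:])
```

A real trading engine would of course also have to handle transaction costs, position sizing, and the risk that the spread never reverts.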
To summarise, it is doubtful that anyone fully understands what impact algorithmic trading has on market microstructures. However, algorithmic trading obviously has to be analysed, and it raises questions. Do algorithms make the market more stable or more volatile? What happens to market liquidity and other market factors? What happens to the price discovery? Do traditional measures still work in an environment where algorithms insert and delete limit orders within microseconds? A lot of research is necessary to answer these new questions.

Current empirical evidence suggests that algorithms improve liquidity and contribute to the market. Nonetheless, many market participants doubt that this holds for all market environments. Many blame algorithms for causing the famous flash crash of 6 May 2010, when the Dow Jones Industrial Average lost and regained 1,000 points, or around nine per cent, within some 15 minutes. Another controversial strategy is so-called quote-stuffing: within a fraction of a second, algorithms pour thousands of non-marketable limit orders into the market. Their aim is to overload the stock exchange's matching or reporting algorithm or the computers of their competitors in order to gain a temporary advantage.

While researchers and regulators alike still struggle to draw definitive conclusions, human traders are often sceptical toward their electronic counterparts. One reason is the opaqueness of algorithmic trading, which makes it easy to turn algorithmic trading into the scapegoat for market irregularities. Some allegations may be debatable – for example, quote-stuffing or rebate-hunting. However, algorithms also serve as a tool to relieve human traders of chores such as reducing a large position in a 'normal' market environment. Without algorithms, some tasks such as best execution, where the trader has to search for the best possible bid or offer, are only possible at great cost. Thus, the interest of regulatory institutions in algorithmic trading and the number of scientific papers concerning algorithmic trading have increased in the last few years.

However, it is not possible to tell human trading from algorithmic trading in the order book data. To analyse algorithmic trading, researchers usually have to rely on special datasets to get a grip on it. I will perform an empirical analysis of the effects of algorithmic trading engines on the anonymous order book of the National Association of Securities Dealers Automated Quotations, better known as the NASDAQ stock exchange. With this analysis, I help find a way to measure algorithmic trading without access to privileged data. The aim of this thesis is, therefore, to reveal and analyse the structure of limit orders that most likely originate in algorithmic trading. The results of this thesis can have repercussions on market models, which to date mostly ignore modern forms of algorithmic trading. In addition, it is not unlikely that an approximate real-time measure for the extent of algorithmic trading can be created; for this attempt, however, it is necessary to have access to some form of reference data. Such a measure could help traders, regulators, and researchers to get a grip on this important factor of today's financial markets.

I use three different proxies to show the effect of algorithmic trading, mainly of its subset high-frequency trading (HFT), in order book data. I use these proxies to get a grip on actively trading algorithmic trade engines (a short computational sketch of these proxies follows at the end of this chapter):

• Limit Order Lifetime – measures the time between the insertion of a limit order and its deletion;

• Order-Revision Time – measures the time between the deletion of a limit order and the next insertion of a limit order for the same stock;

• Inter-Order Placement Time – measures the time between the insertion of two subsequent limit orders for the same stock.

With these proxies, I can show the effects of high-speed algorithms on the order book structure. In addition, they serve as broad estimates for the current share of algorithmic trading. However, without a reference dataset with accurate shares of algorithmic trading on the NASDAQ, creating an outright measure of algorithmic trading is not possible. With such a measure, human traders could decide if they are willing to compete against
algorithms that process information flow thousands of times quicker than they do.

This thesis is organised as follows. In Chapter 1, I give a survey of the literature on market microstructures – with a focus on limit order markets – and on algorithmic trading. Like algorithmic trading itself, the research environment is vibrant, which makes it difficult to create a closed set of papers describing all characteristics of the topic. I hope to provide a comprehensive set of analyses that are most important for the empirical analyses of my thesis. Chapter 2 is the first empirical part and covers the 'micro-effects' of algorithmic trading. With a timestamp precision of a millisecond, I analyse the effects of algorithmic trading in the region of a few seconds. This approach takes into consideration the findings of Hasbrouck and Saar (2009), who use a limit order lifetime of two seconds as a threshold to separate human traders from algorithmic engines. Thus, I calculate the limit order lifetimes of the 43 NASDAQ-listed stocks with the highest numbers of limit orders over the trading week of 9–13 October 2009. I estimate the density of limit order lifetimes and, following Prix et al. (2007, 2008), look for irregularities such as peaks in the distribution. This analysis covers limit order lifetimes, order-revision times, and inter-order placement times from zero to two seconds. In Chapter 3, I consider data from the week of 22–26 February 2010 and analyse the 'nano-effects' of algorithmic trading. The timestamp precision increases to a nanosecond (the maximum on the NASDAQ). Thus, I 'zoom in' and analyse the proxies at values of a few milli- and microseconds. Also, I compare NASDAQ-listed stocks with exchange-traded funds (ETFs). Intuitively, algorithms should be more capable of trading ETFs than common stocks. Because ETFs are structured and diversified products, their pricing is easier than that of common stocks, and because the structure of ETFs removes a good part of the idiosyncratic risk, this major factor of uncertainty is lower for ETFs than for common stocks. Therefore, the risk of placing a limit order with non-optimal properties should be lower for ETFs than for common stocks. Thus, the effect on the proxies should be significant, and I should be able to measure a difference between the limit order lifetime, order-revision time, and inter-order placement time densities. Finally, Chapter 4 concludes and summarises the results. Each chapter has an individual conclusion; the conclusion in Chapter 4 acts as a global review of the thesis.
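As referenced above, the following is a minimal, hedged sketch of how the three proxies could be computed from a simplified, hypothetical order-event log. The event format (timestamp in seconds, event type, order id, ticker) is an assumption made for illustration only; it is not the NASDAQ order book protocol used in the empirical chapters, and executions, modifications, and hidden orders are ignored.

```python
from collections import defaultdict

def order_proxies(events):
    """events: iterable of (timestamp, type, order_id, ticker),
    type in {'add', 'delete'}; timestamps are assumed to be sorted."""
    add_time = {}                     # order_id -> insertion time
    last_add = {}                     # ticker -> time of last insertion
    last_delete = {}                  # ticker -> time of last deletion
    lifetimes = defaultdict(list)     # deletion time minus insertion time
    revisions = defaultdict(list)     # next insertion minus last deletion
    inter_add = defaultdict(list)     # time between consecutive insertions
    for ts, etype, oid, ticker in events:
        if etype == 'add':
            add_time[oid] = ts
            if ticker in last_add:
                inter_add[ticker].append(ts - last_add[ticker])
            if ticker in last_delete:
                revisions[ticker].append(ts - last_delete.pop(ticker))
            last_add[ticker] = ts
        elif etype == 'delete' and oid in add_time:
            lifetimes[ticker].append(ts - add_time.pop(oid))
            last_delete[ticker] = ts
    return lifetimes, revisions, inter_add

# Tiny artificial log, purely to show the mechanics.
log = [(0.000, 'add', 1, 'AAPL'), (0.150, 'delete', 1, 'AAPL'),
       (0.152, 'add', 2, 'AAPL'), (1.000, 'add', 3, 'AAPL')]
print(order_proxies(log))
```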
Chapter 1

Literature Overview

Abstract

This chapter gives an outline of the literature on limit order theory and on algorithmic or high-frequency trading. I first provide fundamentals of market microstructure; after a brief introduction to quote-driven markets, a more extensive discussion of order-driven markets follows. Section 1.3 prepares for the two following chapters by giving an overview of the current state of research on algorithmic trading. Algorithmic trading is a new but powerful phenomenon in financial markets; therefore, many researchers are starting to analyse it. There will undoubtedly be vast numbers of new papers on this topic in the coming years, as it could be one of the most important developments to analyse on today's financial markets.
1.1 Introduction

Over the last decade, the environment for the trading of financial assets – stocks, bonds, foreign exchange, etc. – has changed dramatically. Especially in the United States, stock exchanges have for a long time relied (and partly still rely) on the traditional quote-driven system with market makers (or specialists) to provide liquidity. Market makers provide quotes consisting of a bid and an offer price at which the market maker would buy or sell an asset. In such a system, other market participants trade against the market maker, who takes the role of the central counterparty. For his or her risk of trading against a better informed trader and perhaps not being able to maintain a neutral position (the so-called inventory risk), he or she is compensated by the bid/offer spread.

With the advent of high-speed internet and communication technology in the second half of the 1990s, electronic communication networks (ECNs) emerged – with the Island ECN and Instinet as their biggest exponents. Their cores consist of servers that run matching algorithms that only accept priced limit orders. Every market participant can insert limit orders into the market, which generates the so-called order book. The bid/offer spread has thus become an endogenous variable that every market participant can change. Within a few years, ECNs gained significant market shares, mainly because of three features: transparency, anonymity, and speed (Hansch, 2003). These factors are not only desirable for human traders; they also primarily benefit algorithmic trading engines. Because algorithms and especially high-frequency traders generate large numbers of orders, they are potentially an attractive source of revenue for stock exchanges. Consequently, established exchanges, such as NASDAQ and the New York Stock Exchange (NYSE), soon changed their business model from a pure quote-driven exchange to a hybrid form. With this change of their microstructure, every market participant was able to compete with the (still existent) market makers. Especially specialised algorithms could henceforth be programmed and employed to analyse the order flow in real time and trade accordingly. These computer programs have gained large shares in the market, usually generating better prices while increasing (decreasing) transaction costs of informed (uninformed) traders (Gerig and Michayluk, 2010). From a de-facto share of close to zero in total order flow some 15 years ago, algorithms today make up about 40 to 70 per cent of the order flow of the largest stock exchanges in the world. For example, Figure 1.1 shows an algorithmic trading time series on Deutsche Börse's Xetra system. Market participants implementing algorithmic trade strategies could sign up for the exchange's so-called Automated Trading Program (ATP). In exchange for rebates on order and execution fees, the signing market participant guaranteed that all orders coming from a specified source would be algorithmically generated. Thus, Deutsche Börse was able to measure the share of algorithmic trading quite accurately.
Figure 1.1: Share of order flow from a market participant that signed Deutsche Börse's ATP on total order flow, Jan 2005 – Dec 2008, in per cent.
Figure 1.1 clearly shows the increase in the share of algorithmic trading in total order flow: it rose from around 22 per cent in January 2005 to over 40 per cent in December 2008. The share of algorithmic trading is rather independent of the extent of order flow. For example, in October 2008 the order flow was a little more than EUR 520 billion, and in November 2008 it was not even half of that (EUR 235 billion); the share of algorithmic trading, however, remained constant at around 43 per cent. Market observers and participants alike doubt that the growing share of algorithmic trading leaves the market microstructure unchanged. However, they do not agree whether the market environment has changed for the better or worse. In the United States, the change of the market structures towards a more order-driven market dominated by algorithms is comparable, even though no reliable data such as Deutsche Börse's ATP data exists. In 2010, algorithms had a share of around 65 to 70 per cent of order flow (Brogaard, 2010).

In the wake of these fundamental market changes, the amount of scientific research on the topic is increasing rapidly. In this literature review, I summarise and discuss the papers that are most important for the subsequent empirical analysis. I will present some fundamental papers on the important topics: in Section 1.2, I introduce the reader to market microstructure, with a strong focus on limit order markets; for the sake of completeness, I also give a summary of some fundamental works on the quote-driven market. Section 1.3 covers the current state of research on algorithmic and high-frequency trading.
1.2 Market Microstructures

Limit order theory was not a major topic in financial research before markets adopted order-driven microstructures. While many stock exchanges around the world have transformed into pure order-driven markets, the main US exchanges have remained hybrid, i.e., a combination of a
quote-driven and an order-driven market. However, today these exchanges are often treated as de-facto order-driven markets (e.g., Inoue, 2006, p. 69). There is a vast amount of research on well-established theoretical and empirical market microstructure. O’Hara (2003, p. 1) defines market microstructure as ‘the study of the process and outcomes of exchanging assets under explicit trading rules.’ Two and a half main structures of markets exist: (1) quote-driven markets and (2) order-driven markets. In addition to that (0.5, so to say), hybrid forms exist that combine the two market microstructures and let market makers compete with limit orders from other market participants to provide the benefits of both worlds. Traditional markets enable market participants to trade with the help of market makers (several synonyms exist; they are often called dealers, designated liquidity providers, specialists, and sometimes referred to as designated sponsors). Market makers sign a contract with the stock exchange to continually provide quotes, i.e., bid and offer prices, to give individual traders the possibility to trade with a security at any time during a trading day. Usually, he or she is contractually obliged to quote spreads only up to a maximum value of a few ticks. We will discuss how such a market works and its determinants in Section 1.2.1. The second major market form is a pure order-driven market. In such a market environment, no central market maker exists, and the exchange is built around a central matching and order book computer. In its simplest form, traders willing to buy (sell) a security place a limit bid (offer) and specify a price and a quantity. If the order cannot be executed, it is immediately noted in the limit order book and waits there for further action— execution, cancellation, deletion, or modification. On some markets, more types of orders exist; some of them will be mentioned in the course of a more detailed description of limit order markets in Section 1.2.2. The two largest US-American markets are officially hybrid: both the NYSE and the NASDAQ switched from the traditional market maker (or specialist, as NYSE calls them) environment to a mixture of a quote-driven
and an order-driven market. For the empirical chapters, we use data from the NASDAQ stock exchange, which generously provided us with a limit order book protocol and is therefore of most interest for the empirical work following this literature review.
1.2.1 Quote-Driven Markets

Until the late 1980s or early 1990s, quote-driven markets dominated the microstructures of stock exchanges around the world. Many stock exchanges provided a central market place with market makers, who had the contractual obligation to continuously place quotes consisting of a bid and an offer. Other market participants were only able to buy from or sell to the market maker. For this service, the market maker kept the difference between the offer and bid prices, which was subject to a contractual cap that was not allowed to be exceeded. A lot of research on quote-driven markets exists. However, because I focus on order-driven market microstructures, only a few theoretical ideas will be briefly presented. Some early fundamental papers on dealer markets are Demsetz (1968), Garman (1976), Amihud and Mendelson (1980), Copeland and Galai (1983), and Glosten and Milgrom (1985); of course, this list is not comprehensive and only provides some insights into this extensively researched topic. For a more comprehensive introduction to market microstructure, see Sewell (2007), who provides very condensed summaries of these and other papers.

According to Demsetz (1968, p. 36), the transaction costs caused by the market maker pay for the 'predictable immediacy of exchange in organized markets'. He analyses the microstructure of the NYSE, which consists of a specialist system. Even though each stock has only one specialist, other ways to trade and other factors keep the bid-ask spread low and close to the actual cost to the specialist. For example, already in 1968, market participants were able to place limit orders to stay in competition with the
specialist. His statistical analysis shows that the more transactions there are for a security, the lower the transaction costs become. Interestingly, this finding creates a rather direct link to my analysis of NASDAQ data some 40 years later: while the market microstructure changed from a market-maker-dominated to a de-facto order-driven market, the share of high-frequency algorithms increases with order activity. This seems to be related to the findings of Demsetz (1968), be it because algorithms provide liquidity or simply because fee rebates are more easily obtainable in highly active markets.

Unlike Demsetz (1968), who conducts empirical research on the nature of transaction costs in a specialist system, Garman (1976) models both a continuous, quote-driven market and an auction market. Until then, most research had been focusing on call markets, where trades were made simultaneously at specific times of the day. He is one of the first researchers to explicitly implement a Poisson process to model the asynchronous order flow in financial markets. In contrast to the market description of Demsetz (1968), the assumption of his model for a dealer market is that the dealer has an effective monopoly on a stock and that direct trades between dealers are not permitted, which rules out limit orders submitted by others too. He justifies this by assuming that dealers (and other market participants) can be treated as a statistical ensemble. Garman (1976, p. 267) states that his model of the market, taken word for word, is not exceedingly realistic. However, he concludes from the strong results that, to be able to conduct their business, dealers have to make their pricing strategy a function of their inventories; if not, they will fail their contractual obligation to constantly submit quotes.

Amihud and Mendelson (1980) take Garman's model and derive the optimal pricing routine for market makers. They show that bid and ask prices are monotonically decreasing functions of the inventory of the market maker. The more shares of stock there are in the inventory of the market maker, the more eager he or she becomes to sell them, and he or she lowers his or her quotes. If the inventory is negative (i.e., he or she sold more stocks than he or she
owns and has effectively a short position), he or she will increase his or her bid and ask prices to buy back stocks in order to regain a neutral position and prevent attracting more orders to buy.

Copeland and Galai (1983) analyse the effects of information on the bid-ask spread and base their work on Demsetz (1968). They argue that market makers choose their quotes to balance the losses they incur when trading against investors with superior information against the profit they make from uninformed noise traders (compare Bagehot, 1971), and they analyse the effects of different market factors on the bid-ask spread. In this analysis, they argue that market makers give informed traders a free straddle option consisting of a put option (with the bid price as the strike price) and a call option (with the ask as the strike price). These options are, of course, priced in-the-money from the perspective of the market maker and out-of-the-money for the average, equally informed counterparty. Copeland and Galai (1983) deduce from their model that the bid-ask spread is positively correlated with volatility, a lower volume, and a higher price level; it is a negative function of market activity and depth.

Glosten and Milgrom (1985) also borrow the idea of different types of traders as determinants of the bid-ask spread quoted by a market maker. They argue that even in a market with perfect competition and zero transaction costs, the possibility of the appearance of informed traders forces market makers to quote a non-zero bid-ask spread. They focus on informed traders as the determinant of the bid-ask spread. Their model shows that it depends on 'the exogenous arrival patterns of informed and liquidity [i.e., uninformed] traders, the elasticity of supply and demand among liquidity traders, and the quality of information held by insiders' (Glosten and Milgrom, 1985, pp. 98, 99). Their model also shows that quote-driven markets are not necessarily welfare-maximising: in the case that many informed traders arrive, it is possible that the dealer suspends placing quotes, which restrains all traders – not only informed ones – from trading. They conclude that there should be other microstructures that could be a Pareto improvement.
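A stylised, purely illustrative reading of the inventory-dependent quoting behaviour derived by Amihud and Mendelson (1980) is sketched below: the market maker shifts both quotes down as inventory builds up and up as it turns negative. The linear shift and all parameter values are assumptions for demonstration, not the authors' actual optimal pricing rule.

```python
def quotes(fair_value, inventory, half_spread=0.05, shift_per_unit=0.01):
    """Bid/ask as monotonically decreasing functions of the market maker's inventory."""
    mid = fair_value - shift_per_unit * inventory  # long inventory -> lower quotes
    return round(mid - half_spread, 4), round(mid + half_spread, 4)

for inv in (-200, 0, 200):
    bid, ask = quotes(fair_value=100.0, inventory=inv)
    print(f"inventory {inv:+5d}: bid {bid:.2f} / ask {ask:.2f}")
```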
1.2.2 Order-Driven Markets
With the aforementioned paradigm change in market structures, economists turned to researching limit orders in comparison to market orders. One of the first authors on limit order markets is Glosten (1994), who analyses not just limit orders themselves but rather order-driven markets. The purpose of his paper is to compare the effectiveness of electronic limit order markets with that of traditional quote-driven markets. He states that this market microstructure works because of the existence of a large number of 'patient' limit order traders who supply liquidity to the market (Glosten, 1994, p. 1129). Each limit order reflects the trader's marginal valuation of the asset. He concludes that electronic limit order markets will dominate market maker markets if the requirement of a large number of limit orders is met. He goes as far as to say that an 'electronic market is competition-proof' (Glosten, 1994, p. 1147). In addition to that, Glosten (1994, p. 1149) finds that, in contrast to other exchanges, the electronic limit order book is immune to 'cream skimming' strategies. This proposition comes from the increased transparency that the open limit order book offers; if the possibility existed, it would be competed away instantly. Glosten (1994, p. 1151) makes two important observations regarding the patience of market participants. First, in the case of an informed trader, patience can be very costly, leading him or her to use a market order instead. Second, even if the trader is not informed, keeping a limit order alive for some time is costly, because it consumes monitoring expenditures. In addition to the aforementioned rather broad scope, Glosten (1994) makes a strict distinction between traders who place market orders that consume liquidity (and wish to trade immediately) and traders who place limit orders that patiently provide liquidity.

Handa and Schwartz (1996) make an important extension to Glosten (1994). In the first part of their paper, they theoretically calculate the gains of an investor who can choose
between placing a market order with immediate execution and placing a limit order. They argue that limit orders face two kinds of risk. The first type of risk is the non-execution risk (Handa and Schwartz, 1996, p. 1836), which causes the trader to invest time into the consideration of further action – to forego trading, or to transform the limit order into a market order with a gain of zero. Keeping the limit order alive, however, means that the trader faces the second kind of risk associated with limit orders: the risk of adverse execution. The limit order remains in the book and the order monitoring has to be continued, which also causes costs. There are two possible types of traders that can execute a limit order, and they have different effects on the advantageousness of a limit order. (i) If the order is executed against an informed trader, the price change is permanent and therefore not advantageous for the limit order trader. (ii) If the order is executed against a liquidity trader (other authors call this type of trader a noise trader), the price impact is temporary; therefore, the limit order is beneficial. The profitability of a limit order therefore depends on the probability that liquidity traders appear on the market (Handa and Schwartz, 1996, p. 1836). If liquidity traders exist, the expected gain of a limit order that is either executed within the trader's trading window or transformed into a market order at the end of the trading window can be written as (Handa and Schwartz, 1996, p. 1840)

i \cdot \left[ \int_{-\infty}^{P_{lim}} (P_{last} - P_{lim}) \, f(P_{last} \mid I) \, dP_{last} \right] + \left[ j \, F \, (P_0 - P_{lim}) \right] + j \cdot \left[ \int_{P_{lim}}^{\infty} (P_0 - P_{last}) \, f(P_{last} \mid U) \, dP_{last} \right].
It consists, therefore, of three parts. The first summand is the expected gain of a limit order that is executed against an informed trader, which is negative. i is the probability of the appearance of an informed trader, Plast is the last price of the security, Plim is the limit price of the trader, and
f (Plast |I) is the probability density function of the price Plast in the case of the appearance of an informed trader. The second part of the addition is the expected gain of a limit order that executes against an uninformed liquidity trader, which is positive. j = 1 − i is the probability of arrival of the liquidity trader, and F = P(Plast ≤ Plim |U ), where U stands for the arrival of an uninformed liquidity trader. The last term is the expected gain of the closing purchase. The expected gain of a transaction against an informed trader is negative. If a liquidity trader arrives, the gain is defined by the second summand and is positive. Thus, if execution at the end of the trading window is mandatory for the trader, the model predicts negative gains because the third summand’s expected value is also negative. The probability is high that the gains from trading against a liquidity trader (the second part of the summation) cannot outweigh both summands. However, if the trader can be patient and does not have to close the position at the end of the trading day, the third part of the summation becomes zero. This increases the probability that the trader ends up with a positive gain from the limit order. If j is large enough, the expected total gain is positive. This reasoning suggests that only patient traders make an order-driven market possible (Handa and Schwartz, 1996, p. 1841) They test this finding with actual 1988 Trades and Quotes (TAQ) data. Handa and Schwartz (1996) perform an empirical analysis of the profitability of a liquidity providing limit order trading strategy versus a pure liquidity demanding market-order strategy (Handa and Schwartz, 1996, pp. 1842 et seqq.). They analyse hypothetical orders of one share placed into the historical order book of the NYSE. They implement four limit order strategies with different distances of the inserted limit order from P0 , which is the price of the security at t = 0. They insert limit orders to buy with price tags that are 0.5, 1, 2, and 3 per cent below P0 . These limit orders compete against a market order to buy inserted at t = 0. In contrast to the model that predicted an under-performance of the limit order strategy, the artificial ‘limit order strategies perform as well or better than a market order strategy’ (Handa and Schwartz, 1996, p. 1850).
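To put rough numbers on the three-term expression above, the following sketch evaluates it with numerical integration. The normal densities assumed for the closing price under an informed (I) and an uninformed (U) trader, and all parameter values, are illustrative assumptions only; they are not taken from Handa and Schwartz (1996).

```python
import numpy as np
from scipy import stats, integrate

P0, P_lim = 100.0, 99.0          # current price and limit price of a buy order
i = 0.3                          # assumed probability that an informed trader arrives
j = 1.0 - i                      # probability that a liquidity trader arrives
f_I = stats.norm(97.0, 2.0).pdf  # assumed price density given an informed trader
f_U = stats.norm(100.0, 2.0).pdf # assumed price density given a liquidity trader
F = stats.norm(100.0, 2.0).cdf(P_lim)   # P(P_last <= P_lim | U)

term1, _ = integrate.quad(lambda p: (p - P_lim) * f_I(p), -np.inf, P_lim)
term3, _ = integrate.quad(lambda p: (P0 - p) * f_U(p), P_lim, np.inf)

expected_gain = i * term1 + j * F * (P0 - P_lim) + j * term3
print(f"adverse-execution term : {i * term1:+.4f}")
print(f"liquidity-trader term  : {j * F * (P0 - P_lim):+.4f}")
print(f"closing-purchase term  : {j * term3:+.4f}")
print(f"expected gain          : {expected_gain:+.4f}")
```

Under these particular assumptions, the first and third terms are negative and the second is positive, and the total is negative – in line with the observation above that mandatory execution at the end of the trading window tends to produce a negative expected gain.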
In the previous simulation, either limit orders to buy or limit orders to sell were implemented. In a next step, Handa and Schwartz (1996, p. 1855) assess the profitability of a network of limit orders around P0 . Limit orders are placed in a distance of 1, 2, . . . , or 5 per cent on both sides of P0 and are constantly updated when a limit order executes. The profitability depends on the amount of the gains generated by the arrival of uninformed traders and the non-execution costs. The results indicate that such a limit order trading strategy is profitable. Handa and Schwartz (1996, pp. 1859, 1860) show that the profitability depends largely on the limit order spread, which is a function of the gain (loss) the trader envisions when trading against a liquidity (informed) trader and the non-execution cost. The most important drivers of it are—not surprisingly—the gain per round trip and the differential per share at closure. The other two components of the limit order spread under consideration, the number of round trips and the share imbalance at closure, are not statistically significant at the 5%-level (Handa and Schwartz, 1996, p. 1860). Harris and Hasbrouck (1996) empirically analyse the performance of limit orders on the NYSE SuperDOT. They also measure the effectiveness of limit orders vs. market orders. Their paper is closely related to Handa and Schwartz (1996). Both papers share the idea that although market orders offer price certainty and at-once execution, the trader must pay the cost for immediacy. The difference between the two papers is that while Handa and Schwartz (1996) insert small, hypothetical limit orders into the historical order book, Harris and Hasbrouck (1996) analyse actual limit orders placed via the SuperDOT system. They employ two performance measures to evaluate limit order trading vs. market order trading. With an ex-ante measure, they measure the difference between the limit price and the best price on the same side of the order book. In the case of a deletion of the limit order, Harris and Hasbrouck (1996) assume the trader transforms the limit order into a market order, which is then compared with the quote at the time of insertion of the limit order. The main result is that in a market with lower tick sizes,
limit orders perform better than market orders, for the price of a higher variability (Harris and Hasbrouck, 1996, p. 231). Second, they analyse the ex-post performance of limit orders, i.e., they compare the quotes five minutes after the execution of a limit order with its execution price. This proxy clearly shows the adverse execution risk of limit orders, as the performance of limit orders within this performance test is rather low. They conclude that an off-floor limit order trader cannot compete against a regular dealer. Both performance measures have most likely changed radically since the early 1990s. With decimalisation and ever-faster computer technology, things have changed radically. Harris and Hasbrouck (1996, p. 217) state: ‘(...) due to delays in monitoring and processing market information, a limit order cannot be instantaneously revised.’ While this was obviously true then, today floor traders probably rather have an informational disadvantage to traders in front of their computers with high-speed internet connections. With decreasing limit order lifetimes, information dissipation happens within fractions of seconds. It is debatable if the information generated by the ‘feeling’ of floor trading counts as an advantage. However, in a more recent paper, Hollifield et al. (2006) analyse the gains from trade on limit order markets with data from the Vancouver Stock Exchange (VSE). They find that limit order markets are indeed a good design to facilitate trade in stocks. The gains of trade on the VSE are 90 per cent of the maximum possible gains (i.e., a market with perfect liquidity) and far exceed the gains from trade on a market with a monopolist liquidity provider (Hollifield et al., 2006, p. 2756). Harris and Hasbrouck (1996) make the assumption of limit orders that are turned into market orders immediately after a trader decides to delete them. In today’s markets, this assumption does not mirror the market participant’s behaviour. Active traders rather revise their limit orders or cancel them in order to obtain and process new information. Fong and Liu (2010) empirically analyse limit order revisions and cancellations to help understand monitoring costs, non-execution risk, and free option risk. They use data from the Australian Securities Exchange (ASX), which is a pure
order-driven market. Their dataset spans two whole years (2000–2001) and contains about 23 million order events (as a comparison, one week of NASDAQ’s 2010 order book protocol exceeds 100 million entries). They identify two kinds of revisions: revisions that increase price priority, i.e., old limit orders are deleted in favour of limit orders with a higher (lower) price tag in the case of bids (offers); price priority decreasing revisions occur to lower the free option risk, in other words, the adverse execution risk. Their dataset shows that limit order traders are not patient providers of liquidity, but monitor their limit orders very closely. This is shown by the fact that almost half of limit order revisions occur at or very close to the best bid/offer. In addition, they find that large orders expose the limit order trader to a greater nominal risk of non-execution and adverse execution than smaller orders. Their data shows that limit order revisions and cancellations are correlated with their proxies of non-execution, adverse execution, and monitoring cost. They look at the number of revisions and cancellations in 15-minute slices of the dataset, i.e., the information about the time that passes between the insertion until the revision or cancellation is not considered. Ranaldo (2004) analyses limit order traders’ behaviour in response to limit order book changes. Thus, the research topic is directly connected with the order revisions analysed by Fong and Liu (2010): the more aggressive a limit order, the more it needs monitoring to prevent adverse execution. In his analysis, Ranaldo (2004) takes actual transaction data from the purely order-driven Swiss Stock Exchange (SWX) that is comparable to the well-known TAQ data. He finds four main results: the aggressiveness of limit order traders increases with the thickness of the same side of the order book and when the thickness of the limit order book on the opposite side of the order book decreases. It also increases with a wider spread and an increased temporary volatility. In addition to this, he finds that the bid and offer sides of the order book behave differently in terms of limit order activity; because his dataset stems from a time when the mar-
kets were increasing, this finding would have to be analysed in different market environments. Hasbrouck and Saar (2001) discover a new kind of limit order behaviour as a by-product of their analysis on the effect of volatility on limit order trading. They use a late 1999 dataset of the Island ECN. They find that 27.7 per cent of the limit orders that are placed via their systems are deleted within two seconds or less. Hasbrouck and Saar (2001, p. 22) call these limit orders ‘fleeting’ and disregard them in their further studies. The data without fleeting limit orders shows that a higher volatility decreases the time to execution, increases the probability of execution, and decreases the share of limit orders relative to the total order flow (including market orders). Because of the novelty of the observation of fleeting orders, they dedicate a whole paper to this phenomenon (Hasbrouck and Saar, 2009). The assumption of ‘patient’ limit order traders who supply liquidity in the form of free options to trade (as used in, e.g., Glosten, 1994; Handa and Schwartz, 1996; Foucault et al., 2005; Berkman and Comerton-Forde, 2011) can only be upheld with reservation. With a dataset from October 2004, Hasbrouck and Saar (2009) show that the amount of fleeting orders (again with a cutoff limit order lifetime of two seconds) has increased by 32 per cent from 27.7 per cent in 1999 to 36.7 per cent of all limit orders in 2004. They see the reason for the large share of fleeting orders that are obviously not aimed at supplying liquidity for other market participants in an improved IT infrastructure, market fragmentation (and therefore hunt for best execution), a generally more active trading culture, and the search for latent liquidity that is not displayed in the book (Hasbrouck and Saar, 2009, pp. 27, 28). In addition to this, in co-operation with an improved IT infrastructure, sophisticated mathematical models with all kinds of influencing variables find their way into increasingly sophisticated financial markets. With these models, traders can compute their optimal limit order properties at any time. Because in many markets, the placement and deletion of limit orders are free for market participants, traders revise their limit orders
more frequently due to their increased awareness of the costs and risks that are connected with their limit orders.

Especially interesting for the following empirical chapters of this thesis is Figure 1 of their paper. It shows the distributions of the times to cancellation on the NYSE in 1990 and on Inet in 1999 and 2004. Limit order lifetimes show a trend towards shorter expected lifetimes: the share of limit orders that are cancelled within a few seconds has increased over the years. In our empirical work, the data show that this process continued and even accelerated in the years from 2004 to 2010.

Brusco and Gava (2006) analyse data from the 35 stocks constituting the Spanish stock index IBEX over the period July–September 2000. Their main interest is limit order cancellations and their differences from and similarities to market orders in general. Inspired by Hasbrouck and Saar (2001), Brusco and Gava (2006) distinguish 'serious' limit orders from 'fleeting' limit orders. They call those limit orders 'serious' that stay in the book long enough to result not only in information generation but also in a trade, in contrast to 'fleeting' orders that are only placed in order to generate information about the market conditions and not in order to be executed (Brusco and Gava, 2006, pp. 15, 16). The results of the analysis of fleeting orders should be taken with a pinch of salt. In order to 'have a sufficient number of fleeting orders' (Brusco and Gava, 2006, p. 17), limit orders are considered to be fleeting when they have a lifetime of up to ten seconds, in contrast to Hasbrouck and Saar (2001), who define two seconds as their cut-off for the lifetime. The argument of Hasbrouck and Saar (2001) that a limit order that is inserted and removed within two seconds is unlikely to be of human origin but rather algorithmically produced does not hold for limit orders that are removed within ten seconds; within that time, a trader can easily remove a limit order. The term 'fleeting' is therefore a little misleading. At least from today's perspective, a limit order that stays in the order book for a few seconds cannot be considered as being 'removed almost immediately' (Brusco and Gava, 2006, p. 20).
However, the properties of fleeting orders that they extract from their data appear reasonable for either definition of fleeting orders. They find that the proportion of fleeting orders increases with a higher spread, volatility, and trading activity, and if the most recent order was a market order (Brusco and Gava, 2006, p. 29). Even though the definition of a fleeting order differs greatly from the definition of Hasbrouck and Saar (2001), which makes a direct comparison of fleeting orders across different markets difficult, the results can be an indication of the properties of fleeting orders on other markets. Danielsson and Payne (2002) find another influencing factor on the lifetime of limit orders. While his main research question is liquidity determination, he analyses limit order lifetimes and finds that the higher their priority position is, the longer limit orders live (Danielsson and Payne, 2002, pp. 11, 12). This is not surprising, because Hasbrouck and Saar (2009) find that most fleeting orders are used to fish for latent or invisible liquidity. The further the limit order is away from the actual best bid/offer, the less fleeting it becomes, as e.g. Brusco and Gava (2006) indicatively find for the Spanish stock exchange. Apparently, the more probable it is that a limit order executes, the shorter becomes its lifetime. Prix et al. (2007) analyse the lifetimes of limit orders on Deutsche B¨orse’s Xetra system for patterns. They find an increased number of limit orders that are deleted after distinctive, ‘special’ times after their insertion. These peaks in the densities appear at 1, 2, 30, and multiples of 60 seconds. In a later paper (Prix et al., 2008), they additionally find peaks at multiples at multiples of 250 milliseconds (or a quarter of a second). They show that at least a part of this systematic limit order trade strategy is part of a so-called constant initial cushion (CIC) strategy. Limit orders are placed a few ticks away from the current best bid and offer (BBO) on both sides of the market. In the case of a large incoming order, the limit order executes at a better price than the BBO. In the case that the cushion changes, or reaches its maximum time in effect, the limit order is deleted and replaced by a new one with adjusted properties (the constant cushion).
Basically, this is the empirical proof that the trading strategy described in Handa and Schwartz (1996) is indeed implemented in the market. It is very likely that this trading strategy is performed by algorithms. While the aforementioned papers analyse limit order lifetimes and thus the conscious actions of traders, Ni et al. (2010) analyse deletions of limit orders, especially the inter-cancellation durations. They conduct their research in order to help researchers model cancellation processes more accurately. They do not explain economically why they expect to find dependencies and regularities in the inter-cancellation duration process. However, while models generally assume that cancellations can be modelled with a Poisson process, Ni et al. (2010) find that the inter-cancellation durations should be modelled with the Weibull distribution (Ni et al., 2010, p. 5).
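The difference between the two modelling assumptions mentioned here can be illustrated with simulated durations: a Weibull distribution with a shape parameter below one produces far more very short inter-cancellation durations than the exponential durations implied by a Poisson process. The parameter values below are arbitrary assumptions for illustration, not estimates from Ni et al. (2010).

```python
import numpy as np

rng = np.random.default_rng(42)
scale = 1.0                                    # scale parameter, in seconds
weibull = scale * rng.weibull(0.5, 100_000)    # shape 0.5: heavy clustering near zero
exponential = rng.exponential(scale, 100_000)  # Poisson arrivals -> exponential gaps

for name, x in (("Weibull(k=0.5)", weibull), ("Exponential", exponential)):
    share_fast = np.mean(x < 0.1)
    print(f"{name:>15}: mean {x.mean():.2f}s, share below 0.1s {share_fast:.1%}")
```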
1.3 Algorithmic Trading

Although it is growing, the literature on algorithmic trading is still relatively scarce. This is because only in recent years, along with the increased speed of internet and communication technology, has the share of algorithmic trading reached significant levels. According to Pole (2007, p. 1), it all started with the statistical arbitrage strategy known as 'pairs trading' in 1985. Although pairs trading generated huge profits by applying a simple strategy, algorithmic trading stayed under the radar of researchers until well into the first decade of the new millennium. In recent years, algorithmic trading has triggered the interest of many researchers, and the number of scientific papers on this topic has exploded (see, for example, the introduction of Hendershott et al. (2011) for a general description of algorithmic trading).

Before reviewing the literature on algorithmic trading, however, it is necessary to distinguish between the different kinds of algorithmic trading. Algorithmic trading can be defined as 'the use of computer algorithms to
automatically make trading decisions, submit orders, and manage those orders after submission’ (Hendershott and Riordan, 2009, p. 2). Figure 1.2 illustrates the broad definition given in Hasbrouck and Saar (2011, pp. 13–15).

[Figure 1.2: Broad forms of algorithmic trading, after Hasbrouck and Saar (2011). The tree splits Algorithmic Trading into Agency Algorithms and Proprietary Algorithms; Proprietary Algorithms are further split into Electronic Market Makers and Statistical Arbitrage.]

Hasbrouck and Saar (2011) distinguish agency algorithms and proprietary algorithms. Each of the boxes in Figure 1.2 represents a wide range of algorithms and algorithmic strategies. I describe the boxes only very broadly in order to provide a general understanding of algorithmic trading in today’s markets. The task of agency algorithms is to execute a position change that has been decided either by a human or by another algorithm with an investment focus rather than a trading focus. The algorithm then executes the order under the constraint of meeting some benchmark defined by the trader (e.g., the volume-weighted average price (VWAP), the time-weighted average price (TWAP), etc.). Proprietary algorithms are split into electronic market makers and statistical arbitrage. Market makers provide liquidity by quoting bids and offers and try to maintain a low inventory risk. This process can be hugely simplified by implementing algorithms that react in real time to market
changes, news, etc. The second form of proprietary algorithms is the most clandestine one. By analysing historical data, programmers extract potentially profitable strategies and implement them in algorithms that—after intense back-testing—trade independently in the real market. There exists a broad range of algorithmic strategies, and this node of the figure could be split up into dozens of sub-nodes. Mainly, the strategies can be divided into arbitrage, the exploitation of structural vulnerabilities of the market or market participants, and directional trading, where trade algorithms go long (short) if they deem a security undervalued (overvalued) (Securities and Exchange Commission, 2010). See, for example, Brogaard (2010, pp. 67, 68) or Muthuswamy et al. (2011) for overviews on high-frequency trading, a subset of statistical arbitrage. Muthuswamy et al. (2011) describe the basic ideas of three actual strategies: flash orders, spoofing, and black box trading. It is not easy to measure the extent of algorithmic trading in financial markets with publicly or otherwise easily available data. In most markets, traders do not have to disclose whether an order is submitted by a human trader or an electronic substitute. For the US-American market, there are only estimates of the share of algorithmic trading relative to overall trade activity, which are therefore not easy to validate. Consequently, the opinions on the share of algorithmic trading in total trade activity vary. The CFTC and SEC measure a share of around 50 per cent according to their Financial Industry Regulatory Authority (FINRA) data (CFTC and SEC, 2010, ch. II.2.d.), while Brogaard (2010) calculates a share of 68.5 per cent from his NASDAQ dataset. For the German market, the data availability is better for some researchers; for example, Hendershott and Riordan (2009), Gsell (2009), Groth (2009), and Maurer and Schäfer (2011) use tick data from the Deutsche Börse with a so-called ATP flag. Until November 2009, its ‘Automated Trading Program’ (ATP) offered lower fees for traders who trade via algorithms (see Maurer and Schäfer (2011) or Hendershott and Riordan (2009) for more detail on ATP). Deutsche Börse provided some researchers with a dataset which contained ATP flags attached to orders that came from participants
who had signed up for ATP. It is assumed that most algorithmic trading firms signed up for ATP because of the price incentives. I was provided with a dataset containing the monthly share of algorithmic trading on Xetra spanning four years from January 2005 to December 2008. In this time, algorithmic trading increased its share of total order flow from 23 per cent to over 40 per cent (see Figure 1.1). ATP flags enable researchers to accurately measure the impact of external factors on algorithmic trading as well as the effect of algorithmic trading on other variables such as liquidity, volatility, depth of the order book, and so on. Gsell (2009) (in a similar paper, Gsell and Gomber (2009) find comparable results) analyses the algorithmic activity on Xetra. He finds that algorithmic traders are responsible for the majority of order book events as well as for the majority of trades. Specifically, in his October 2007 dataset, automated traders were responsible for 51.9% of overall order activity. Their pricing is, on average, more aggressive (i.e., more favourable for the counterparty) than that of human traders; however, the size of their orders is smaller than that of orders inserted by non-ATP traders. The reason for the small-trades policy could be the limitation of risk exposure. If the model underlying the algorithm fails to calculate the appropriate trading strategy for a market condition, the failure costs a proportionately lower amount of money. Analogously, traders reduce the lifetime of limit orders and, as a result, the risk that a possibly ill-priced order executes. Hendershott and Riordan (2009) analyse a similar dataset with ATP flags from Deutsche Börse. They analyse the aggregate behaviour of algorithms and specifically analyse their effect on price discovery. They use a January 2008 dataset. In line with Gsell (2009), they find that the share of algorithmic trading in overall trade activity declines with an increasing order-size quantile. Specifically, in their two smallest categories (1–499 shares, and 500–999 shares), the share of algorithmic trading is 68 per cent
and 57 per cent, respectively, whereas the overall share is 52 per cent of the Euro volume. In contrast, the share is only 23 per cent for their largest category, 10,000+ shares (Hendershott and Riordan, 2009, p. 10). However, this does not mean that algorithmic trading concentrates on small blocks. Frequently, large positions of shares are managed by algorithms that ‘slice and dice’ them to minimise market impact, adding to Hendershott and Riordan’s smaller categories. According to their data, algorithms provide liquidity when spreads are wide and consume liquidity when spreads are narrow. In addition, Hendershott and Riordan (2009) find that algorithmic trading does in fact contribute more to price discovery than human traders (or messages without an ATP flag). Interestingly, they do not find an effect of algorithmic trading on volatility, a factor that many fear will increase with the advent of algorithmic traders. Groth (2009) analyses the lifetime of limit orders with an ATP flag and compares them with non-ATP limit orders. In addition, he challenges traditional liquidity measures that rely on traditional limit order traders as patient liquidity suppliers. He finds that limit orders inserted by automated traders are indeed significantly different from limit orders by human traders: on average, they are smaller and deleted more quickly (Groth, 2009, pp. 215, 216). The liquidity measures under consideration reflect the number of limit orders in the book as well as the trading activity. He argues that they are ‘blurred’ by algorithmic trading as they assume patient limit order traders, which trading automatons usually are not. Without a flagged dataset, researchers have to find proxies for algorithmic trade activity. In an anonymous order book, one cannot distinguish traders of flesh and blood from their algorithmic cousins. One of the first papers to analyse the effect of algorithmic trading on the market is Hendershott et al. (2011). They use electronic message traffic to answer the question of whether algorithmic trading has improved liquidity and whether it should be encouraged. Specifically, their proxy for algorithmic trading is ‘the negative of dollar volume in hundreds per electronic message’ (Hendershott et al., 2011, p. 18).
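Taken literally, this proxy can be computed from aggregate figures alone; the sketch below shows one way to do so. The function name, the aggregation level, and the example figures are my own assumptions, not the exact construction of Hendershott et al. (2011).

```python
def at_proxy(dollar_volume: float, n_messages: int) -> float:
    """Negative dollar volume (in hundreds of dollars) per electronic message,
    in the spirit of Hendershott et al. (2011). The value rises towards zero
    as message traffic per dollar traded increases, i.e. with more presumed
    algorithmic activity."""
    return -(dollar_volume / 100.0) / n_messages

# Hypothetical monthly figures for one stock: $500m traded, 2.5m messages.
print(at_proxy(5e8, 2_500_000))   # -> -2.0
```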
They use the automation of quote dissemination on the NYSE in 2003 (autoquote) as the basis for their analysis. This structural break increased the amount of algorithmic trading. Because autoquote was introduced step by step for different groups of stocks, Hendershott et al. (2011) are able to test for a causal link between an increased amount of algorithmic trading and improved liquidity. To measure liquidity, they use quoted, effective, and realised spreads, and price impact. They find that for stocks of companies with large market capitalisation, algorithmic trading does in fact increase liquidity and price informativeness. For the smallest quantiles of stock capitalisation, no statistically significant effect was measurable. An interesting point mentioned by Hendershott et al. (2011) is that narrower spreads mean lower costs for liquidity demanders but at the same time lower income for liquidity suppliers. The more algorithmic trading there is in the market, the narrower the spreads become. In the end, the bid-offer spreads are too small to cover the expenses of traditional market makers, forcing them to implement algorithms themselves to cut costs. Consequently, the share of algorithmic trading could be self-reinforcing. In addition to this, Hendershott et al. (2011, p. 30) note that human traders do not always monitor the market, leaving their limit orders at least partially stale. Because algorithms always monitor their limit orders and replace them with better ones, the efficient price of a stock is reflected in the quotes rather than in the trades. To summarise, Hendershott et al. (2011) find that algorithmic trading makes trading stocks cheaper and increases the information content of quotes. Immediately after the introduction of autoquote, the realised spread increased, indicating that the competition between algorithmic liquidity suppliers was less stiff than the competition between human liquidity suppliers. Because algorithms take a lot of time to be developed and tested, this is a plausible explanation for the initially higher realised spread. Riordan and Storkenmaier (2011) find comparable results for the German market. They analyse the major update of Deutsche Börse’s Xetra
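For reference, these liquidity measures are conventionally defined as follows; this is the standard textbook formulation rather than a quotation from Hendershott et al. (2011). Here $b_t$ and $a_t$ are the best bid and ask, $m_t$ the quote midpoint, $p_t$ the transaction price, $q_t$ the trade direction ($+1$ for buyer-initiated, $-1$ for seller-initiated trades), and $\tau$ a fixed horizon (five minutes is a common choice):

\[
\begin{aligned}
\text{quoted spread}_t    &= a_t - b_t,\\
\text{effective spread}_t &= 2\,q_t\,(p_t - m_t),\\
\text{realised spread}_t  &= 2\,q_t\,(p_t - m_{t+\tau}),\\
\text{price impact}_t     &= 2\,q_t\,(m_{t+\tau} - m_t).
\end{aligned}
\]

By construction, the effective spread equals the realised spread plus the price impact, so a rise in realised spreads at a given effective spread indicates higher revenue for liquidity suppliers.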
system to version 8.0, which reduced the system latency from 50 milliseconds to 10 milliseconds. This decrease did not help human traders, but has had dramatic effects on the way algorithms can react to market changes. In line with Hendershott et al. (2011), they find a higher realised spread after the transition, indicating that not too many developers of algorithms could instantly implement systems that could fully benefit from the new environment. However, their central finding is the improvement of the efficiency of prices associated with the decrease of latency and an increase in algorithmic trading. Brogaard (2010) addresses a similar question. He analyses a dataset of actual trading data on 120 stocks and 26 major HFT firms that operate on NASDAQ. In his dataset, HFT generated 68.5 per cent of the overall dollar volume of trades and 73.8 per cent of the total number of trades. In line with conventional wisdom, his empirical analysis shows a higher fraction of HFT in large-cap stocks than in small- and medium-cap stocks (Brogaard, 2010, pp. 13, 47). An important finding is that HFT trading strategies—mostly price-reversal strategies—are more correlated than non-HFT trading activity. Apart from that, he concludes that although HFT quotes often lie inside the bid-offer spread, the depth they contribute to the market is rather small. However, due to the large number of limit orders inserted into the market, the accumulated contribution to the market lowers volatility and improves market quality. Brogaard (2010) concludes, therefore, that HFT should be encouraged, as do Hendershott et al. (2011) and Riordan and Storkenmaier (2011). In addition to the general findings on the benefit of HFT for overall market quality, Brogaard (2010, ch. 5.3) finds that the average profitability of the HFT firms contained in the dataset is 0.000072 dollars per dollar traded (roughly 72 dollars per million dollars traded). This figure is the average of all HFT firms and all strategies that they perform, so it is of course likely that there are strategies that are more profitable per trade than others. His Figure 2 shows the daily profits from January 2008 to January 2010 in absolute terms. These range from a loss of around a million dollars to a profit of over three million dollars, whereas
the typical day lies in the range of zero to half a million dollars. From visual evidence, profits are, broadly speaking, higher in times of increased volatility. This would be in line with the finding that in times of increased volatility, HFT firms tend to supply more liquidity than usual. With a larger market share, both the average profit and the number of trades increase, possibly explaining the peaks in the profit distribution. Only a little research has been performed on algorithmic trading beyond empirical, number-crunching methods (I am no exception to that). One of the few is Gsell (2008). His basic market setup follows Chiarella and Iori (2002), who include a noise trader, a momentum trader, and a fundamentalist trader. In addition to that, Gsell (2008) implements two algorithmic traders with strategies whose more sophisticated cousins are believed to be very common on actual markets. One has the task of working off a larger position, either buying or selling via market orders in small batches linearly over time to minimise market impact. The second stylised algorithm also handles a large position; in contrast to the other algorithmic strategy, its aggressiveness varies according to the popular VWAP. The results from this simulation approach indicate that a lower latency (and thus a better trading environment for algorithms) leads to a lower volatility of the market. Surprisingly, there are only a few concerned voices regarding algorithmic and high-frequency trading on financial markets. As already mentioned, the majority of research papers appear to find positive effects. Most critiques come from traders and commentators. See, for example, Arnuk and Saluzzi (2008, 2009), Saluzzi (2009), or Nanex (2010). The popular blog Zerohedge.com has regularly criticised algorithmic and high-frequency trading. Cvitanić and Kirilenko (2010) model a market with human traders and add an automated (i.e., algorithmic) trader. They assume that the algorithmic trader is strategic but uninformed and must rely on the order flow and the accompanying information that is provided by the insertion of a limit order. In a way, the algorithmic traders in the model of Cvitanić and
Kirilenko (2010) reverse engineer human order-placement activity. They know the distribution and the process by which humans place orders. The algorithms try to benefit from human orders by front-running them: as soon as the algorithm identifies a human order, it places its own order at a limit price one tick better than the human one. If the order does not execute, it deletes the limit order and resubmits a new one. Of course, this temporarily but repeatedly improves the best bid or offer. The model yields a market with more distributional mass of the transaction prices around the best bid/offer and thinner tails. This means that prices become better on average. However, the strategy of the algorithms in this model could be dangerous and self-destructive in the long term. If algorithms jump ahead of the human traders with effectively infinite speed, humans could lose interest in placing limit orders altogether. If human order-placement behaviour were endogenised by introducing a profit/loss function, it would not be surprising to see the informational content of prices dwindle. The result of Cvitanić and Kirilenko (2010) supports the finding of Gerig and Michayluk (2010): if transaction costs increase for the informed traders, they will obviously contribute fewer orders to the market, possibly decreasing the informational accuracy of the stock price. Potentially negative externalities appear in Biais et al. (2010), a working paper which its authors mark as preliminary and incomplete. However, their results are noteworthy, because they are among the first researchers to find possible negative externalities of algorithmic trading. They model a market in which algorithmic traders compete with human traders. Biais et al. (2010) argue that algorithms (fast traders) can process information much more quickly than human (slow) traders, which generates adverse selection costs for humans. In addition, they argue that humans can adapt their trading behaviour more quickly to changes in the market environment than algorithmic traders, e.g., to a suddenly illiquid market. Because the code and parameters of algorithms cannot be changed quickly, algorithms trying to apply strategies based on liquid markets trade too aggressively in
illiquid markets, possibly causing large price changes. Their model shows that there exist multiple equilibria for the extent of algorithmic trading, of which some are economically desirable and others are not. Often, it is undesirable to have a large share of algorithmic trading: while it increases the probability of finding a counterparty to trade with, it also increases the risk that slow traders (and some inferior algorithmic traders) leave the market, which decreases market quality. This is an important finding and resembles that of Cvitanić and Kirilenko (2010). Other papers empirically analyse a dataset provided by a stock exchange (using measures developed in the 1980s); in such analyses, second-round effects of algorithmic trading are usually ignored or simply invisible. Critics doubt whether the market still offers a level playing field for all market participants. It is a widespread opinion that trading on stock exchanges is becoming increasingly less transparent. As an example, Muthuswamy et al. (2011, p. 91) describe an HFT strategy called ‘spoofing’: a trader inserts limit orders into the market in order to give others the impression of order book imbalances that do not actually exist. Of course, such a limit order is not intended to be executed. Another widely discussed strategy is ‘quote stuffing’: HFT traders flood the market with thousands of non-marketable limit orders in a very short timeframe. This forces either the opponents’ computers or the exchange’s computers to analyse the large number of new orders, slowing them down a little. HFT firms use their advanced computer and network technology in order to increase the exchange’s or their competing HFT firms’ system latency and perform latency arbitrage. I name these two rather roguish strategies to give a broad idea of what algorithmic trading means to some market observers. Literature on these fairly hands-on topics is still scarce. Some papers (e.g., Brogaard, 2010) mention these topics but do not analyse them. Future research will have to revise the findings of those authors who conclude now that algorithmic trading improves market quality.
For example, it is possible that the improved liquidity that researchers measure is a liquidity that is usable for algorithms but not for human traders. Suppose an algorithm inserts a limit order to buy at a price that improves the current best bid and deletes it after, say, 30 milliseconds. A human trader has practically no chance to hit the order, because his or her visual reaction time alone is around 250 milliseconds (Davis, 1957). The human trader could execute a fleeting order only by accident. Other algorithms should not have this problem, as their reaction times are much shorter. Thus, fleeting orders that disappear a few milliseconds after their insertion may improve the spread nominally, but they certainly do not improve the spread for human traders. The question is: should algorithms have to wait for humans? If you ask traders, the answer is a definite yes; if you ask researchers, the few papers regarding the problem tend to say no (or nothing at all). Researchers struggle to get a grip on algorithmic trading. It is rather difficult to obtain reliable data, because most stock exchanges do not have records on the origin of limit orders. If the stock exchange has an algorithmic flag (like the Deutsche Börse), only little data are available. In some rare cases (Brogaard, 2010, for example), it is possible to directly analyse HFT data from the US. The reason for the secrecy of HFT firms is easy to explain: their algorithms, however complex they may be, remain specific, but more or less static trading rules. This makes them vulnerable to other algorithms that can front-run them once they get hold of the core strategy. As Gödel (1931) proves, in every sufficiently powerful, consistent axiomatic system an expression can be found that can neither be proved nor disproved within that system. In more general terms, every formal system can be beaten as long as someone knows its inner workings exactly (for a generalised analogy with an explanation, see, e.g., Chapter III of Hofstadter (1989)). This reasoning can broadly be adapted to conclude that there is reason not to publicise the inner workings of an algorithm and even to implement strategies to prevent reverse engineering.
1.4 Conclusion

In this literature review, I provide short summaries and discussions of papers that cover the most important topics of the two empirical papers that follow in the course of this dissertation. The main topics cover market microstructure and the state of research on algorithmic trading. Market microstructure’s two main pillars are order-driven and quote-driven markets. I focus on order-driven markets and limit orders, because in this microstructure algorithms can trade at the speed they want and not at the speed a market maker wants; also, the major stock exchanges are pure or de facto order-driven markets. The second part of this review presents major papers on algorithmic and high-frequency trading. Because algorithms have only had a significant market share for about a decade, scientific research on this topic is still not very abundant, but the number of papers is increasing rapidly as more data become available. Algorithmic trading continues to attract a lot of attention from market participants, regulators, and researchers. While the impact of algorithmic trading is generally seen positively by scientific papers, many human traders blame algorithms for newly emerging phenomena such as, for example, flash crashes. Consequently, FINRA recently subpoenaed high-frequency trading firms to reveal their trading strategies and, in some cases, even their parameterisations (Financial Times, ‘FINRA to ask for strategies of HFT groups’, 2 September 2011). These contradictory signals from researchers, practitioners, and regulators show that there is still much to do in order to truly understand the impact of algorithmic trading on market quality. In order to analyse the effect of algorithmic trading, the best approach would be to identify the origin of orders and to tell human and algorithmic orders apart. However, most stock exchanges only log order flow, with the result that algorithmic and human orders cannot be told apart in the order book. In addition to that, the programmers of algorithmic trade engines are very keen to keep the functionality
of their algorithms secret. A few researchers have data with special flags for algorithmic orders. However, because the amount of data is vast (the datasets often contain tens of millions of orders), the analytical tools come from the 1980s, and the stock exchanges are rather restrictive about providing the data, a comprehensive analysis of algorithmic trading in different market conditions has not been performed yet. The process of finding the correct means for measuring and analysing algorithmic trade activity is anything but finished. Some researchers who have data that contain indicators for algorithmic orders find positive effects of algorithmic trading on price formation, liquidity, and other market factors. However, these datasets usually cover only a few weeks at best, and it is usually not the case that the data cover calm and distressed markets alike. For most researchers and market participants, however, algorithmic trading remains a black box; the only thing they know is that this box is rather big. Without a special dataset, it is currently not possible to estimate algorithmic trading and, consequently, it is currently not possible to analyse the effect it has on the market environment that everyone is trading in. Because nobody but the programmers knows how the algorithms trade and react to shocks, algorithms seem to many like an invisible sword of Damocles hanging above their heads. Therefore, it is desirable to develop tools to obtain a rough estimate of the activity of algorithmic trade engines with data that are available to a broader audience. Order book data seem most likely to fit the bill. Most professional traders have access to the order book or at least to some number of levels of the order book. It would be interesting to know what can be learnt about algorithmic trading from high-frequency but anonymous order book data. With an order book protocol from NASDAQ which carries high-precision timestamps, we try to trace the activity of algorithmic trading. We use proxies that capture active trading strategies and that can take forms that are unlikely or outright impossible to be of human origin.
Chapter 2
The Microstructure of Order Dynamics

Abstract

Stock exchanges around the world have either changed or are in the process of changing into order-driven markets. This opens the door for trade algorithms, which already account for a large part of trading on the major stock exchanges. We find evidence of algorithmic trading in precise, but anonymous order book data, which many market participants can access. We analyse the order book data in three ways: we look for patterns in the order lifetimes, the inter-order placement times, and the order-revision times. We find that the order lifetime and the inter-order placement time can be modelled with a Weibull distribution, and that all three proxies show peaks at ‘special’ times. These peaks are caused by algorithmic trade activity. Also, the limit order lifetime decreases with more limit order insertion activity, which suggests that high-frequency trade algorithms try to hide in busy stocks.
2.1 Introduction

Algorithmic trading occupies an important position in financial markets. The increasing sophistication and accuracy of mathematical market models allow the creation of computer programs that can read and analyse various kinds of market data to calculate trading opportunities and strategies. The behaviour of electronic traders differs from that of human ones. With datasets from late 1999 and 2004, respectively, Hasbrouck and Saar (2001, 2009) find that a significant share of limit orders is deleted within two seconds. They conclude that it is unlikely that human traders generate these orders. Rather, it is algorithms that are not bound by significant reaction-time barriers. In this work, we analyse the order dynamics of these short-lived limit orders. If the lifetime of these limit orders is so short that it is unlikely that they come from human traders, it is possible to find some algorithmic trading patterns, for example the peaks in the density that Prix et al. (2007, 2008) find for Deutsche Börse’s Xetra market. The purpose of this chapter is to analyse order dynamics at levels that are for a good part perceivable by humans. Therefore, to balance relevancy and accuracy, the precision level of timestamps in our data is one millisecond ($10^{-3}$ s), whereas NASDAQ’s clock runs in nanoseconds ($10^{-9}$ s). Information technology has made great progress over the last years, so we expect much larger shares of short-lived limit orders than Hasbrouck and Saar (2001, 2009) found. For some stocks, hundreds of thousands of limit orders are inserted per day, and many of them only live fractions of a second. Therefore, we require a high degree of accuracy to see what happens in the sub-second range.
choose a timeframe of zero to two seconds for the analysis in order to find out how algorithms trade in an environment where there is still the possibility that humans are part of the equation. In Chapter 3, we ‘zoom in’ and analyse the proxies’ structures at the much shorter timeframe of zero to two milliseconds. Algorithmic trading is under discussion among ‘traditional’ human traders, regulators, and researchers because its impact on the market microstructure remains unclear. It is difficult to discuss algorithmic trading accurately: since the limit order book is anonymous to the public, it is not possible to see whether a limit order was placed by a human trader or by an algorithm. This makes measuring algorithmic trading, and consequently detailed research on the topic, difficult. Using the limit order lifetime, limit order-revision time, and inter-order placement time, we analyse the order book activity of all traders that place limit orders on the NASDAQ market. We hope to help find a way to calculate the approximate share of algorithmic trading in total trade volume in real time. At the moment, not much more than estimates exist. For example, the Tabb Group estimates the share of algorithmic trading on American markets at around 60 per cent, so it should be possible to find evidence of algorithmic trading in the data. Since the dataset does not contain any identifier of the acting party, we rely on examining the densities of the proxies and try to find patterns in the histograms of the limit order lifetime, limit order-revision time, and inter-order placement time. With these three proxies, we show that it is at least to some extent possible to detect algorithmic activity on stock markets. We hope to help find a means to roughly estimate the share of algorithmic trading on stock markets. This can be interesting for researchers, regulators, and especially traders who do not want to be outpaced by algorithms designed to front-run them. Without a doubt, it is possible that algorithms can help enhance the price-formation process. However, it is debatable whether it is then necessary to implement trading strategies that rely on limit orders that are deleted after a few milliseconds’ time or even much
less. With such short lifetimes, all other traders see an order book that is already history when it is displayed on their trading screens. This chapter is structured as follows: Section 2.2 provides a summary of previous work. Section 2.3 gives a descriptive analysis of the order book structure on the basis of three modes: first, we measure the time that passes between the placement of a limit order and its deletion (the order lifetime); second, we measure the time that passes between the placement of one limit order and the next one (the inter-order placement time); the last mode measures the time from the deletion of a limit order until the insertion of the next limit order (the order-revision time). Section 2.4 concludes.
2.2 Previous Work

In their fundamental study on limit order trading, Handa and Schwartz (1996, p. 1835) suggest that limit orders carry two types of risk: first, it is unlikely that the price will decrease (increase) exactly to the price specified in the limit order to buy (sell) and then increase (decrease) again. More probably, the price of the stock will decrease (increase) further, leaving the limit order executed and the investor with a marked-to-market loss. The second risk is that a limit order will not be executed and causes monitoring costs. In their model, Handa and Schwartz (1996, p. 1837) assume two kinds of traders, namely ‘patient traders [who] supply liquidity to the market, and other traders, who wish to trade immediately’. Markets have changed radically since then. Limit order lifetimes are diminishing rapidly, as a comparison between the empirical results of this study and older ones indicates (see Section 2.3). With limit orders being deleted without execution only a few milliseconds (or fractions of a millisecond; see Chapter 3) after their insertion, one cannot speak of limit order traders who act as ‘patient’ traders who supply liquidity in its classical sense – where limit orders are a free option for others to execute. Before we turn to the empirical analysis
and discuss its findings, we provide a short overview of prior research. There exists a great deal of research on limit orders and limit order markets. In this review, only a small fraction of the existing limit order research papers is described (for example, Foucault et al. (2005) give an excellent overview of limit orders). Since Hasbrouck and Saar (2001) discovered that a large part of submitted limit orders on the Island ECN are cancelled rather rapidly, these so-called ‘fleeting orders’ have come to the attention of the financial research community. What they termed fleeting orders and a ‘large number’ of orders (Hasbrouck and Saar, 2001, p. 22) is today not at all uncommon. Using data from the last quarter of 1999, they find that 27.7 per cent of all visible limit orders are fleeting, which signifies that the time between the insertion and the deletion of the corresponding orders is equal to or shorter than two seconds. Since then, limit orders have become much more fleeting, with many orders being deleted after only a split second. Hasbrouck and Saar (2001) state that the general assumption of patient liquidity providers placing limit orders and then waiting for the market to execute them or move away does not hold. In fact, a growing proportion of market participants who place limit orders seem to use plain vanilla limit orders as ‘fill-or-kill’ orders, cancelling them as soon as they discover that the market does not either hit or lift them. Our data show that many limit orders have a lifetime of under five milliseconds; we explain and present their empirical structure in Section 2.3. However, Hasbrouck and Saar (2001) focus instead on limit orders and volatility; their discovery of fleeting orders is just a by-product of their research topic in that paper. In a second paper, Hasbrouck and Saar (2009) concentrate on these newly discovered fleeting orders. They explicitly examine the phenomenon with data from 2002. Because of faster information and communication technology and tick-size reduction (decimalisation), the limit order
structures have changed drastically. In their dataset of 100 stocks, the estimate of the cumulative probability of deletion within two seconds is 0.369. They provide three hypotheses for the existence of fleeting orders: (i) traders described by the ‘chasing hypothesis’ try to keep up with the market by adjusting the price tags of their limit orders as the market moves; (ii) the second hypothesis states that traders replace their limit order by a marketable limit order (the equivalent of a market order on the NASDAQ market); (iii) the third hypothesis, which could be the most probable explanation for the existence of fleeting orders, is that some traders search for hidden liquidity inside the bid-offer spread. Challet and Stinchcombe (2003) provide a brief analysis of limit order lifetimes from the Island order book, which is a part of NASDAQ. They find that the distribution of limit order lifetimes can be described by a power-law distribution with the simple fit $P(L_t) \sim L_t^{-\phi}$, where $L_t$ is the order lifetime and the exponent $\phi$ fits the distribution to the data. For deleted orders, the best fit is $\phi \approx 2.1$ (Challet and Stinchcombe, 2003, p. 142). Unfortunately, they do not provide any descriptive statistics, so the properties of the dataset, and especially its timeframe, remain unclear. Prix et al. (2007) research two weeks of late-2004 limit order data from the Xetra system, which is operated by Deutsche Börse AG. Within their dataset, the kernel densities of the lifetimes of no-fill-deletion limit orders show peaks at ‘special’ times. They find that there are local maxima at one, two, 30, 60, 120, and 180 seconds (Prix et al., 2007, pp. 727–729). They presume that the results are (at least partly) the effect of so-called ‘constant initial cushion’ (CIC) order strategies, which are algorithmically implemented and executed. The trader (or rather the automated trading system) places orders on the bid and on the offer side of the market but leaves a ‘cushion’ between the order and the best bid or offer. This trading strategy, which uses a network of bids and asks around the best bid and best ask, has been theoretically modelled by Handa and Schwartz (1996). Basically, traders who employ this strategy expect large market orders that consume more than the depth at the best bid or best offer in the order book
provides. Their CIC orders would then achieve a better price than they would have achieved had they placed limit orders at the best bid or offer. In retrospect, the results of Prix et al. (2007) are not very surprising. It is rather easy to implement a strategy such as CIC; easier, for example, than to implement a strategy that takes into account large amounts of real-time data and to create a market model on which the algorithm trades. In addition to that, Deutsche Börse AG explicitly tries to attract algorithmic trading by offering special prices for large amounts of algorithmically induced orders. With Deutsche Börse’s ‘Automated Trading Program’ (ATP), market participants can save up to 50 per cent of the exchange’s fees if they sign a contract which says that their orders are algorithmically generated and the number of orders exceeds certain thresholds. As a result, algorithmic trading (identified by the ‘ATP flag’ that is attached to the orders from the ATP participants) accounted for more than 40 per cent of the total trading activity in late 2008 (see Figure 1.1). Large (2004) creates a model with implications for the theoretical framework of the empirical analysis of this chapter and the next chapter. He models a limit order book and derives the behaviour of market participants in different market states and the individual reservation values of the traders. In addition, he develops a utility function of limit bids. The utility of a limit order depends on the probability that new limit orders arrive. It is a function of the anticipated intensities $\mu_s$ or $\mu_b$ of Poisson processes, which describe aggregate sell or buy orders, respectively. The utility function of a limit order has an S-shaped form when plotted against increasing intensities (Large, 2004, p. 18). One section of his paper considers fleeting orders (Large, 2004, pp. 20–26), where the author drops the assumption of constant optimisation problems in the basic model and the resulting never-occurring cancellations of limit orders. He interprets fleeting orders as trial balloons. When entering the market, the trader does not know anything about the market and places a limit order. With changing market conditions, he or she adapts the expectations, cancels his or her limit order, and places a new one. The
important point is that the trader only cancels the order with the arrival of new information which changes the optimisation problem. It is noticeable that the author argues that an old limit order is never replaced with a new one (Large, 2004, p. 22). The model assumes that limit orders are only cancelled in order to place a market order afterwards. He reasons that the new limit order would have a lower time priority than the one that was cancelled and that only a very small range around the best bid and offer is attractive to traders (Large, 2004, p. 22). Thus, at least in the model’s framework, a limit order is at most transformed into a market order, but never into a new limit order. Chain structures of systematically placed and cancelled limit orders, such as found empirically by Prix et al. (2007) or theoretically explained by Handa and Schwartz (1996), are ruled out by this model. Chakrabarty and Tyurin (2008) examine the relationship between order aggressiveness and the prevalence of fleeting orders for four randomly selected stocks, with Intel Corp. as one of the most liquid stocks. The result is a ‘non-linear relationship between order aggressiveness and times to cancellation’ (Chakrabarty and Tyurin, 2008, p. 5). The probability of an order being fleeting, i.e., rapidly cancelled after its insertion, (1) increases with the bid-ask spread, (2) decreases with the depth of the order book, and (3) increases with the recent transaction volume. Interestingly, the probability of the appearance of fleeting orders is more sensitive to executions against visible orders on the same side of the order book and to the volume executed against hidden orders on the other side of the order book (Chakrabarty and Tyurin, 2008, p. 6). Examining data from July–September 2007 from the Spanish stock exchange, the Sociedad de Bolsas, Brusco and Gava (2006) find that rapid limit order cancellation activity is positively correlated with the spread, volatility, trading activity, and the previous submission of a market order. However, their dataset does not contain many limit orders that have lifetimes of under two seconds, so they redefine fleeting orders as orders with a lifetime
of under ten seconds. It cannot be ruled out that their data include many orders managed by human traders. With ever-faster computer and network technology, it was only a matter of time before the lifetime of orders diminished. The creators of algorithmic trading systems develop models with more or less specific rules on how to behave in many possible market conditions. With improving technology, the systems can recalculate the market’s situation hundreds or thousands of times per second. With changing market conditions, the input for the model changes, and the system cancels the old limit order and places a new one according to the strategy formalised in the algorithm. In the following section, we will analyse the structure of the limit order lifetime, the inter-order placement time, and the order-revision time. We assume that we capture much of the active trading with these proxies.
2.3 Empirical Evidence from NASDAQ

2.3.1 Methodology and Data

We use NASDAQ’s TotalView-ITCH(SM) 4.0 database, which stores the complete order book activity for the securities traded on NASDAQ. The dataset consists of several tables, two of which are of supreme interest to our research: add order messages and delete order messages. We do not consider cancelled orders, i.e., orders that are only partially revoked. With the day-unique order reference number that every newly inserted order receives, we can monitor the life of each added order, be it visible or invisible. For this analysis, we choose a timestamp precision of one millisecond ($10^{-3}$ s). This is an advantage compared to older papers that employ timestamp precisions of one hundredth of a second, because computers and network technology have dramatically improved the speed of data mining, analysis, order submission, and transmission. We do not need better accuracy here, because humans struggle even to realise changes that occur
in a ten-millisecond timeframe. The next chapter covers the nanostructure of limit orders and employs a dataset with a timestamp precision of down to a nanosecond ($10^{-9}$ s). NASDAQ, like other exchanges, continues its efforts to decrease round-trip times. The round-trip time – or latency – is the time a limit order needs to be processed by the exchange, i.e., the time from the incoming signal carrying the new order until the stock exchange sends the order confirmation back to the client. In 2009, NASDAQ announced that it managed round-trip times of approximately 140 microseconds (NASDAQ OMX, 2010). Obviously, this is aimed at algorithmic traders, because human traders will presumably not be the main beneficiaries of such levels of round-trip times. Algorithms, however, can actually benefit from the difference between, for example, 500 and 200 microseconds. Other stock exchanges, for example Deutsche Börse AG, also try to meet the requirements of their electronic clients. At the moment, round-trip times of one millisecond (or 1,000 microseconds) are being realised (Deutsche Börse, 2009). The enhancements are explicitly implemented to suit the needs of the algorithmic traders (Deutsche Börse, 2008), which makes it highly interesting for us to have access to data with a higher precision than previous papers. We analyse data from one week in November 2009 (the 9th to the 13th). In total, the data carry 121,809,254 add order messages and 116,678,065 delete order messages (a ratio of roughly 95.79 per cent). To handle the data, we have chosen the 43 stocks with the most order activity. A small number of limit orders that were added before 9 November appear among the deleted orders, and a small number of orders added but not deleted by Friday 13 November add to the number of added orders, so it is not entirely valid to say that 95.79 per cent of the limit orders were deleted without execution; it can, however, be safely assumed that the trade activity does not change too much within a week’s time. The stocks are given in Table 2.1.
Table 2.1: Ticker symbols (alphabetically), company names and number of limit orders of the constituents of the dataset of the analysis

No.  Ticker Symbol  Company Name                              Limit Orders
1    ADBE           Adobe Systems Inc.                             148,143
2    AMAT           Applied Materials Inc.                         160,343
3    AMGN           Amgen Inc.                                     113,011
4    AMTD           TD Ameritrade Holding Corp.                    137,864
5    ASML           ASML Holding NV                                137,301
6    ATVI           Activision Blizzard Inc.                       130,032
7    BRCD           Brocade Communications Systems Inc.            159,177
8    BRCM           Broadcom Corp.                                 205,133
9    CENX           Century Aluminum Co                            422,920
10   CMCSA          Comcast Corp.                                  144,670
11   CSCO           Cisco Systems Inc.                             212,269
12   CTSH           Cognizant Technology Solutions Corp.           214,232
13   CTXS           Citrix Systems Inc.                            104,273
14   DISH           DISH Network Corp.                             110,471
15   DRYS           DryShips Inc.                                  132,216
16   DTV            DIRECTV                                        222,821
17   FITB           Fifth Third Bancorp.                           172,515
18   GOOG           Google Inc.                                  1,544,338
19   HCBK           Hudson City Bancorp. Inc.                      120,710
20   HOLX           Hologic Inc.                                   190,340
21   INTC           Intel Corp.                                    450,683
22   KLAC           KLA-Tencor Corp.                               134,656
23   LBTYA          Liberty Global Inc.                            160,640
24   LLTC           Linear Technology Corp.                        104,502
25   MRVL           Marvell Technology Group Ltd                   249,605
26   MSFT           Microsoft Corp.                                261,958
27   MU             Micron Technology Inc.                         115,426
28   NIHD           NII Holdings Inc.                              117,335
29   NVDA           NVIDIA Corp.                                   144,996
30   ONNN           ON Semiconductor Corp.                         284,288
31   ORCL           Oracle Corp.                                   185,270
32   PALM           Palm Inc.                                      139,297
33   PCAR           PACCAR Inc.                                    111,927
34   PTEN           Patterson-UTI Energy Inc.                      159,546
35   QCOM           QUALCOMM Inc.                                  279,726
36   QLGC           QLogic Corp.                                   104,037
37   SPLS           Staples Inc.                                   117,294
38   STLD           Steel Dynamics Inc.                            254,719
39   STX            Seagate Technology PLC                         196,463
40   SWKS           Skyworks Solutions Inc.                        103,299
41   UAUA           United Continental Holdings Inc.               173,188
42   XLNX           Xilinx Inc.                                    103,127
43   ZION           Zions Bancorporation                           182,422
We use three different proxies to look for evidence of algorithmic trade activity. First, we analyse the order lifetimes, i.e., the time between the placement and the deletion of a limit order. Due to the unique order reference number, this can be measured accurately and without any noise from interfering market participants. The hypothesis is that the more fleeting the order flow, i.e., the larger the proportion of very short-lived orders, the more algorithmic activity there is in the market. This reasoning comes from the secrecy of algorithmic traders. In highly liquid stocks with many limit orders, order strategies can be camouflaged much more easily than in stocks with only a few different market participants and small numbers of orders. This proxy has already been used by Prix et al. (2007, 2008), though in a slightly different context. Second, we calculate the time between the placement of incoming orders. It is possible that an algorithm either places limit orders at a very regular interval, say, every 100 milliseconds, or places limit orders after a period of placement inactivity in order to test the market’s reaction to that limit order. This proxy is subject to noise, because there are many market participants who insert limit orders. For example, it is possible that one market participant, A, places a limit order regularly every 500
milliseconds. With an increasing number of market participants, it becomes less likely that A’s individual placement activity is measurable, because on average, limit orders are added more frequently than every 500 milliseconds. If A places an order and trader B coincidentally places a limit order, say, 324 milliseconds after him or her, this coincidental value finds its way into the dataset. Our third proxy measures the time between the deletion of a limit order and the insertion of the next limit order. If an algorithm places a limit order and deems it unsuitable for the changed market after some time, the probability is high that it will delete it and insert a new limit order with adjusted properties. Of course, this proxy is subject to noise as well, but if the algorithms replace these orders rapidly enough, there remains information in the data on the algorithmic reinsertion time, because the likelihood of interference from other traders diminishes. The explanation is analogous to the explanation for inter-order placement times. Table 2.2 summarises the different modes that are employed in this paper.

Table 2.2: Modes employed to analyse the order structure and detect evidence of algorithmic trade activity
     Mode              Start                End
(1)  [Add → Delete]    Add Order Message    Delete Message
(2)  [Add → Add]       Add Order Message    Add Order Message
(3)  [Delete → Add]    Delete Message       Add Order Message
The data are organised in trade messages. In the first mode, we search for an add order message for a specific stock mnemonic, take its timestamp, and look for the timestamp of its deletion. By subtraction of the timestamps, we calculate the time that the limit order has ‘lived’. The other modes are calculated analogously: for the second proxy, we look for an add order message and take the time until the next add order message for the
same stock arrives. Finally, for the limit order-revision time, we take the timestamp of a limit order deletion and measure the time that passes until the next added limit order for the same stock. Often, the last two proxies will capture algorithmic trading, but sometimes they will capture only purely coincidental values. However, the data suggest that the proxies mostly capture trading strategies and are not merely recording haphazard activity.
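As a minimal sketch of this procedure, the following function computes all three proxies from a stream of simplified ITCH-style messages; the tuple format (timestamp in milliseconds, message type, stock symbol, order reference number) is an assumption for illustration and not the actual TotalView-ITCH record layout.

```python
from collections import defaultdict

def order_dynamics(messages):
    """Compute, per stock, the order lifetimes [Add -> Delete], inter-order
    placement times [Add -> Add], and order-revision times [Delete -> Add]
    (all in milliseconds) from (timestamp_ms, msg_type, stock, order_ref)
    tuples, where msg_type is 'A' (add order) or 'D' (delete order)."""
    add_time = {}                    # order_ref -> insertion timestamp
    last_add, last_del = {}, {}      # stock -> timestamp of last add / delete
    lifetimes = defaultdict(list)
    placements = defaultdict(list)
    revisions = defaultdict(list)
    for ts, msg_type, stock, ref in messages:
        if msg_type == 'A':
            add_time[ref] = ts
            if stock in last_add:                       # [Add -> Add]
                placements[stock].append(ts - last_add[stock])
            if stock in last_del:                       # [Delete -> Add]
                revisions[stock].append(ts - last_del.pop(stock))
            last_add[stock] = ts
        elif msg_type == 'D' and ref in add_time:       # [Add -> Delete]
            lifetimes[stock].append(ts - add_time.pop(ref))
            last_del[stock] = ts    # only the most recent deletion is tracked here
    return lifetimes, placements, revisions
```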
2.3.2 Add Order → Delete Order

This section provides the results of the first order structure mode, i.e., the analysis of the limit order lifetime. Due to the above-mentioned millisecond timestamps, it is possible to analyse these lifetimes accurately. The order reference number enables us to perform this analysis without any noise. Every deletion of a limit order in this dataset is based on a decision of the market participant who added the limit order to the market. A deletion forced by the stock exchange because of a violation of the rules of the market is captured by a different trade message than the deletion message that is used here. That means we examine active deletions of limit orders. Following Smith (2010), we look at each day of the week independently; we do not aggregate the data of the individual stocks across the five trading days in the dataset. Limit orders that live longer than one trading day are omitted from the analysis, so only intraday orders find their way into the dataset. By taking each day separately, we receive five vectors of order lifetimes per stock. Figure 2.1 contains histograms of nine randomly selected stocks of the sample of 43 with different levels of limit order activity. It is obvious that most limit orders are deleted shortly after their placement, sometimes within a matter of only a few milliseconds. It is noticeable that there are peaks in most of the histograms at 100 milliseconds. This implies that there is at least one algorithm that places a limit order and waits exactly 100 milliseconds for a market reaction to the new order. If there is no
reaction, it deletes the limit order. Some algorithms are obviously not that patient and send a deletion notice right after the placement of the limit order without waiting for the market to react: if the limit order is deleted within one millisecond or less – the data prove that this happens very often – the stock exchange’s system latency is too high for most market participants to allow such high-frequency trades. We analyse these orders more intensively in Chapter 3. As can easily be seen in the histograms, traders delete their orders shortly after their insertion. While Hasbrouck and Saar (2001, 2009) speak of fleeting orders when the limit order lifetime is less than or equal to two seconds, the histograms show that two seconds is already a long time for a limit order to live if the trading strategy is an ultra-high-frequency one – which seems to be prevalent in day-trading. Nonetheless, it is highly improbable that these rapid order-deleting speeds are performed by human traders. They are rather the product of the above-mentioned algorithms. Note that with such short-lived orders, it is possible that some market participants see an outdated order book. This arises from the fact that the speed of light is roughly 300,000 km/s, and the signals carrying the order information in wires or fibre are even slower. To illustrate the problem: if there is a trader in, say, Chicago, he or she is around 1,160 km away from New York, where NASDAQ’s servers are located. Thus, the signal carrying the information about the new order needs around four milliseconds to reach him or her – not even considering the slower speed of light in fibre and time-consuming network technology such as routers. This means that the trader looks at an order book which is at least four milliseconds old. In many cases, limit orders have already been deleted three milliseconds before the Chicago trader sees them. In this time, the order book could have changed the best bid and offer, and a trader would then possibly buy or sell the security for an undesired price. As for London, UK, the signal would need at least 18.6 milliseconds. The question remains why such a large proportion of limit orders is being deleted shortly after insertion.

[Figure 2.1: Histograms of order lifetimes of stocks listed on NASDAQ with different numbers of observations. X-axis: limit order lifetime in seconds; left ordinate: number of observations; right ordinate, dashed line: cumulated share of observed number of lifetimes relative to total inserted limit orders. The number of limit orders placed for the individual stock decreases from left to right and from top to bottom: INTC (Intel Corp.) 167,014 obs., CENX (Century Aluminum Co.) 86,681, ONNN (ON Semiconductor Corp.) 60,051, CTSH (Cognizant Technology Solutions Corp.) 57,131, MSFT (Microsoft Corp.) 43,406, CSCO (Cisco Systems Inc.) 30,446, ADBE (Adobe Systems Inc.) 26,274, DISH (DISH Network Corp.) 19,249, and LLTC (Linear Technology Group) 14,936.]

Hasbrouck and Saar (2009, p. 154)
provide three hypotheses that were described earlier. Various other hypotheses are conceivable as well. For example, the so-called quote-stuffing is a strategy that aims at wasting the calculation time of the exchange or of other traders. If a trader places many limit orders for one stock, the other traders have to read the incoming orders, analyse their effect on their
model, and react accordingly. This time advantage can be traded upon, so in some situations there may be an incentive to place many short-lived orders to distract others.

Although the distribution of limit order lifetimes looks like the power-law distribution that Challet and Stinchcombe (2003) describe, the Weibull distribution fits the data better. Ni et al. (2010) show that the Weibull distribution is the best fit for the inter-cancellation duration on the Shenzhen Stock Exchange. The inter-cancellation duration measures the time between two successive deletions of limit orders for a stock; in the notation of this chapter, this order structure would be [Delete → Delete]. We do not consider the inter-cancellation duration, however, because we do not see how algorithmic trade activity could be detected with it.

To go beyond eye-balling histograms, we analyse the data more formally. We transform the original data to make it comparable with the estimation results, which yields a survival probability function of the form (see Cleves et al., 2002, p. 212):
P(T > t) = S(t) = exp(−exp(β0) · t^p).  (2.1)
First, we count the number of occurrences Lt of the different intervals, i.e., the number of limit orders that have a lifetime L of t milliseconds,

0.001 · (t − 1) < L ≤ 0.001 · t,  with t = 0, . . . , tmax,  (2.2)

where tmax signifies the longest limit order lifetime of the particular stock on the respective day. This can be used to create a histogram of order lifetimes. We then calculate the frequency relative to the total number of deleted limit orders on that day,

Lt / ∑_{t=0}^{tmax} Lt,  t = 0, . . . , tmax.  (2.3)
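A minimal sketch of steps (2.2) and (2.3) – binning the lifetimes into one-millisecond intervals and normalising by the total number of deleted limit orders for that stock and day; the NumPy-based input array is an assumption for illustration.

import numpy as np

def relative_lifetime_histogram(lifetimes_ms: np.ndarray) -> np.ndarray:
    """Counts L_t per one-millisecond bin, normalised by the number of
    deleted limit orders, for t = 0, ..., t_max (equations 2.2 and 2.3)."""
    t_max = int(np.ceil(lifetimes_ms.max()))
    # Rounding each lifetime up to the next full millisecond gives the bin
    # index t; np.bincount then yields the counts L_t.
    counts = np.bincount(
        np.ceil(lifetimes_ms).astype(int), minlength=t_max + 1
    )
    return counts / counts.sum()

# Toy example: four deleted orders, two of them living exactly 100 ms.
rel = relative_lifetime_histogram(np.array([0.4, 2.0, 100.0, 100.0]))
print(rel[100])   # 0.5 -> half of the deletions had a lifetime of 100 ms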
To make the histogram compatible with the Weibull distribution, we take the first difference of (2.1), because S(t) only gives P(T > t), i.e., the
probability that the limit order lives longer than t milliseconds. The probability that a limit order has a lifetime of t is the probability that it lives longer than t but not longer than t + 1, that is
P(t < T ≤ t + 1) = S(t) − S(t + 1).  (2.4)
To measure the goodness of fit at t, g(t), we calculate the difference between the actual data and the fitted Weibull distribution,

g(t) = Lt − P(t < T ≤ t + 1) = Lt − S(t) + S(t + 1).  (2.5)
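The following sketch illustrates how S(t) from (2.1) could be fitted and how g(t) from (2.5) could then be evaluated. The use of SciPy's weibull_min with the location parameter fixed at zero, and the mapping of its shape and scale parameters to p and β0, are our own assumptions about one possible implementation, not the estimation routine used here.

import numpy as np
from scipy.stats import weibull_min

def weibull_goodness_of_fit(lifetimes_ms: np.ndarray) -> np.ndarray:
    """Return g(t) = L_t - (S(t) - S(t+1)) for t = 0, ..., t_max."""
    # Relative frequencies L_t as in (2.2)/(2.3).
    t_max = int(np.ceil(lifetimes_ms.max()))
    counts = np.bincount(np.ceil(lifetimes_ms).astype(int),
                         minlength=t_max + 1)
    rel_freq = counts / counts.sum()

    # Fit S(t) = exp(-exp(beta0) * t^p): weibull_min with loc = 0 has
    # S(t) = exp(-(t/scale)^c), so p = c and exp(beta0) = scale**(-c).
    c, _, scale = weibull_min.fit(lifetimes_ms, floc=0)
    t = np.arange(t_max + 2).astype(float)
    survival = np.exp(-np.exp(-c * np.log(scale)) * t ** c)

    # g(t) = actual relative frequency minus fitted P(t < T <= t + 1),
    # which equals S(t) - S(t+1) by (2.4).
    return rel_freq - (survival[:-1] - survival[1:])

# Toy example with simulated Weibull lifetimes.
sample = weibull_min.rvs(0.5, scale=200, size=10_000, random_state=0)
g = weibull_goodness_of_fit(sample)
print(g[:5])   # deviations should hover around zero for simulated data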
A t-test shows whether the goodness of fit g(t) is significantly different from zero or from a reasonably small value. At the same time, the difference between the fit and the actual data can reveal the peaks at certain values of t described earlier: a one-sample t-test can show significantly greater values at a ‘special’ time than in its vicinity – for example, a positive value for g(100), i.e., at an order lifetime of 100 milliseconds, together with values close to zero for g(100 ± x), x > 0.

Figure 2.2 shows a set of boxplots for the proxy [Add → Delete]. The timestamps are 100, 250, 500, 750, 1,000, and 2,000 milliseconds. We choose these values for t to allow a comparison with Prix et al. (2007), who find peaks in the distribution of limit order lifetimes of German blue chips at 1,000 and 2,000 milliseconds (i.e., one and two seconds), and at multiples of 250 milliseconds (Prix et al., 2008). For each timestamp, we create five boxplots as follows: in the middle is the ‘special’ time under consideration, as indicated in the respective sub-figure. The two boxplots to its left (right) depict the values of g(t) at the special time minus (plus) two milliseconds and minus (plus) 40 milliseconds. If there is a peak, the boxplot of the special time will differ markedly from the other ones.

Only the peak at 100 milliseconds seems to be statistically relevant. At the other points in time, no accentuated difference between the values of g(t) around the special time is identifiable. This is true for both the order lifetime and the inter-order placement time, as will be shown in the following subsections.
Figure 2.2: Boxplots of g(t), i.e., the difference between the actual values of the limit order lifetimes and the Weibull distribution at lifetime t. In each sub-figure, the ‘special’ time is in the middle of the set of five box plots. To its left, the figure shows the box plot for a limit order lifetime two milliseconds shorter, and on the very left for a lifetime 40 milliseconds shorter than the special time in question. Analogously, the two box plots on the right are the ones for a limit order lifetime two and 40 milliseconds longer than the special time in question. For example, in the case of a special time of 100 milliseconds, on the very left is the box plot for a limit order lifetime of 60 milliseconds, next to it the one for 98 milliseconds, in the middle is the box plot for a 100 millisecond lifetime, and to the right follow 102 and 140 milliseconds.

The t-test with the null hypothesis that the variable is statistically equal to zero is rejected in almost all cases, indicating that the Weibull fit is not
perfect. But because the deviations between the actual values and the fitted function are very small (usually between 0 and 0.001), the fit is still good, even though it tends to slightly overestimate the actual values. At lifetimes very close to zero, the fit trades accuracy against a better fit for greater values of t. At 100 milliseconds, the parametric fit fails to catch the peak that many stocks show. The result is a significantly greater difference between the actual values and the fit, and thus a greater value for g(100) than for the values around 100.

To look for local maxima in the histograms of this and the other proxies, a t-test is employed. The null hypothesis is

H0: g(T) = 1/(2n) ∑_{t=T−n, t≠T}^{T+n} g(t),  (2.6)

that is to say, the value at T is equal to the average of the 2n values around it. The alternative hypotheses are H1,1: g(T) ≠ 1/(2n) ∑ g(t), H1,2: g(T) < 1/(2n) ∑ g(t), and H1,3: g(T) > 1/(2n) ∑ g(t). The following table (Table 2.3) shows the results of the t-tests for the ‘special’ times with n = 5.

The t-tests show that, for the timestamps under consideration, only the peak at 100 milliseconds that is also visible in the exemplary histograms is statistically significant. It seems possible that some stocks show peaks in their limit order lifetime density at other timestamps as well – the t-test for t = 1,000 misses the 10 per cent level of significance only by a small amount. Multiples of 1,000 milliseconds seem to be the most promising places to look for peaks other than at 100 milliseconds. Multiples of 250 milliseconds do not show a significant trace of algorithmic trading that relies on the placement and subsequent or immediate deletion of limit orders.

This could imply that the more advanced the order systems of the traders and the stock exchange are, and the lower the exchange's latency is, the closer the peaks in the limit order lifetime move towards zero. This hypothesis is based on the two earlier papers by Prix et al. (2007, 2008): in their 2007 paper, they find peaks at longer limit order lifetimes than in the newer one – the more recent NASDAQ data could show the future of the lifetime dynamics on Deutsche Börse's Xetra system.
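A hedged sketch of the peak test in (2.6): for every stock/day observation, g at the ‘special’ time T is compared with the mean of its 2n neighbours, and a one-sample t-test across the sample evaluates H1,3. The function and variable names are assumptions for illustration; SciPy's ttest_1samp with the alternative argument (available from SciPy 1.6 onwards) is used here.

import numpy as np
from scipy.stats import ttest_1samp

def peak_test(g_by_obs, T: int, n: int = 5):
    """Test H0 of (2.6): g(T) equals the mean of the 2n values around T.

    `g_by_obs` holds one g(t) vector per stock/day observation (assumed to
    be long enough to cover T + n).  Returns the t statistic and the
    one-sided p-value for H1,3: g(T) greater than the surrounding mean.
    """
    diffs = []
    for g in g_by_obs:
        window = np.r_[g[T - n:T], g[T + 1:T + n + 1]]  # 2n neighbours, t != T
        diffs.append(g[T] - window.mean())
    result = ttest_1samp(diffs, popmean=0.0, alternative="greater")
    return result.statistic, result.pvalue

# Toy example: 20 observations with an artificial bump at t = 100.
rng = np.random.default_rng(1)
obs = []
for _ in range(20):
    g = rng.normal(0.0, 1e-5, size=200)
    g[100] += 5e-4                      # the 'special' time sticks out
    obs.append(g)
print(peak_test(obs, T=100))            # small p-value -> significant peak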
Table 2.3: Results of the t-tests for peaks at special times t of the limit order lifetime. The first column shows the limit order lifetime which is to be tested for peaks. The second to fourth columns show the mean, the standard error, and the 95 per cent confidence interval of the actual values for that specific limit order lifetime. The fifth column shows the average of the ten values around the limit order lifetime under consideration. The sixth column shows the p-value for H1,3: g(T) > 1/10 ∑ g(t). ***, **, * indicate significance at the 1, 5, and 10 per cent levels, respectively.

t       Mean     Std. Err.   95% Conf. Interval    1/10 ∑ g(t)   Pr(T > t)
100     .00107   .00009      (.00089, .00124)      .00023        .0000***
250     .00005   .00001      (.00003, .00007)      .00005        .5000
500     .00005   .00000      (.00004, .00006)      .00004        .3058
750     .00003   .00000      (.00002, .00004)      .00003        .5436
1,000   .00003   .00000      (.00003, .00004)      .00003        .1246
2,000   .00002   .00000      (.00002, .00003)      .00002        .2221
An important property of the order lifetime structure is that the more order activity there is for a specific stock, the more fleeting the limit orders become. This is in line with the hypothesis that the creators of trade algorithms try to keep the inner workings of their algorithms secret. This secrecy is necessary because if another market participant could reverse-engineer the algorithm, he or she could profit from it by front-running it or triggering its trade signals. Therefore, it is probable that high-frequency trading strategies are more active in very liquid stocks with a lot of limit order activity, so that the algorithm's limit orders are not easily distinguishable from the other limit order activity. Table 2.4 shows the results of four univariate regressions of the share of limit orders with a lifetime of t milliseconds or less, t ∈ {100, 500, 1,000, 2,000}, relative to the total number of no-fill
deletion orders, on the log of the number of no-fill deletion orders. The underlying dataset still contains the 43 stocks with the highest limit order placement activity, and each trading day of the week is examined separately. The data were slightly winsorised: we removed only observations that would have distorted the slope of the regression line in favour of the hypothesis. For example, for Google Inc. (GOOG) – by far the most active stock in the sample – over 95 per cent of limit orders are deleted within half a second. These five values (five trading days) would have distorted the slope of the regression, so we disregard them in all the individual regressions. This explains why the values for N are not equal to 43 · 5 = 215.

Table 2.4: Results of the regressions of the relative amount of limit orders deleted within 100, 500, 1,000, and 2,000 milliseconds, y = α + β ln x + ε. Robust standard errors are in brackets below the estimated values. ***, **, * indicate significance at the 1, 5, and 10 per cent levels, respectively.
Milliseconds            100          500          1,000        2,000
Independent variable    .0856***     .1106***     .1231***     .1063***
                        (.0152)      (.0198)      (.0193)      (.0188)
Constant                -.5365***    -.5974***    -.6392***    -.3936**
                        (.1549)      (.2030)      (.1976)      (.1933)
N                       201          201          209          209
R2                      .1838        .1688        .1747        .1414
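For illustration, one of these univariate regressions, y = α + β ln x + ε with robust standard errors, could be set up as in the following sketch. The use of statsmodels with an HC1 covariance estimator is an assumption for illustration, not necessarily the specification behind Table 2.4.

import numpy as np
import statsmodels.api as sm

def share_on_log_activity(share_fleeting: np.ndarray,
                          n_no_fill_deletions: np.ndarray):
    """Regress the share of limit orders deleted within t ms on the log of
    the number of no-fill deletion orders (one observation per stock/day)."""
    X = sm.add_constant(np.log(n_no_fill_deletions))
    model = sm.OLS(share_fleeting, X)
    return model.fit(cov_type="HC1")   # heteroscedasticity-robust standard errors

# Toy data: the share of fleeting orders rises with limit order activity.
rng = np.random.default_rng(2)
activity = rng.integers(10_000, 200_000, size=100)
share = 0.1 * np.log(activity) - 0.5 + rng.normal(0, 0.05, size=100)
res = share_on_log_activity(share, activity)
print(res.params, res.bse)   # estimated slope should be close to 0.1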
Among all lifetimes under consideration, the relative amount of short-lived limit orders increases with more intense limit order placement activity. The sample of 43 stocks and five trading days is rather small, but the results are in line with the findings of Hasbrouck and Saar (2009). It goes without saying that the negative value for the constant term is not economically meaningful, because the relative amount of fleeting orders cannot be
lower than zero. We assume that the true relationship cannot be linear but follows a roughly S-shaped form, starting close to zero and converging towards a value that approaches one the longer the lifetime considered. These results can be used as a first indication that with more limit order activity, the proportion of high-frequency trading and therefore the proportion of algorithmic trading also increases. However, it is important to keep in mind that these limit orders are deleted without execution; to be more precise, there is an increase in algorithmic activity, not in algorithmic trading, because trading implies executed trades. It would be interesting to compare this and the other proxies with data that contain flags for algorithmically placed limit orders. With such a control dataset at hand, one could estimate a functional form for the limit order lifetime in order to obtain a rough measure of algorithmic activity – which would be very useful for regulators, traders, and researchers.
2.3.3 Add Order → Add Order

A great deal of research has been performed on the inter-trade duration, i.e., the time between two order executions. This section, however, measures and analyses the duration between the placement of two subsequent limit orders, without considering their trade direction, that is, whether they are orders to buy or to sell stocks. Given the order lifetime patterns found by Prix et al. (2007, 2008), we would not be surprised to find patterns in this mode as well. One reason for such patterns could be that some algorithms start ‘test balloons’ after a pre-defined (or market-dependent) time without newly submitted orders: by placing a limit order, the algorithm can test the market's reaction to generate private information.

We perform this analysis using the same 43 stocks that were used in Section 2.3.2. These are the most active stocks listed on NASDAQ. Aside from Google Inc. (GOOG), Intel Corp. (INTC) has the largest number
of observations (179,921), and LLTC (Linear Technology Corp.) has the smallest number of observations, with 16,323 measured intervals per day. The securities show peaks in the histograms, and the peaks generally seem to become more distinct for less-traded securities. As shown in Figure 2.3, the three histograms at the bottom of the figure show clearer peaks at 100 milliseconds than the rest, with the exception of CTSH (Cognizant Technology Solutions Corp.). These three stocks show the smallest number of added limit orders in this subsample.

For the most liquid securities with more than 60,000 added orders per day, the median value of the times between two added limit orders is equal to or close to one millisecond. For the less liquid securities with 40,000 to 60,000 limit orders per day, the median value ranges between one and 100 milliseconds. The median values for the least active stocks in this sample (15,000 to 40,000 limit orders) vary between 100 and 400 milliseconds. The median values behave as one would expect: they broadly increase with decreasing limit order activity, which indicates that the order activity is spread more or less evenly over the days. The inter-quartile ranges behave analogously – they become larger with decreasing limit order activity.

While some securities do not show any surprising peaks in the densities of their inter-order placement times, others reveal stark irregularities. Two notably special inter-order placement times are 100 and 1,000 milliseconds. To the best of our knowledge, these patterns have not been described so far. Figure 2.3 shows typical histograms of inter-order placement times. Note the differences between the stocks: the most liquid securities do not reveal striking peaks. This is presumably the case because the more active a stock is, the more noise from other algorithms hinders an ex-post examination of inter-order placement times. INTC, the security with the highest degree of limit order placement activity in this sample of nine, only shows small peaks that can easily be missed – or they are just invisible because the timestamp precision is not high enough.
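For completeness, the measurement itself is straightforward: the inter-order placement times of one stock on one day are simply the first differences of the sorted insertion timestamps, as in the following minimal sketch (array name and millisecond units are assumptions for illustration).

import numpy as np

def inter_order_placement_times(add_ts_ms) -> np.ndarray:
    """Durations (in ms) between two subsequently placed limit orders of one
    stock on one trading day, regardless of their trade direction."""
    ts = np.sort(np.asarray(add_ts_ms, dtype=float))
    return np.diff(ts)

# Toy example: four insertions with gaps of 1 ms, 100 ms, and 1 ms.
gaps = inter_order_placement_times([10.0, 11.0, 111.0, 112.0])
print(np.median(gaps), np.percentile(gaps, 75) - np.percentile(gaps, 25))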
Figure 2.3: Histograms of inter-order placement times of stocks listed on NASDAQ with different numbers of observations. X-axis: inter-order placement time in seconds; ordinate: number of observations. The dashed line shows the cumulated share of inter-order placement time observations t relative to the total number of observations over the day (ordinate on the right side). The number of observations decreases from left to right and from top to bottom: INTC 179,921 obs., CENX 87,704, ONNN 62,246, CTSH 59,471, MSFT 49,757, CSCO 40,881, ADBE 30,011, DISH 21,650, and LLTC 16,323.

It is possible that there are peaks that we miss because values of under one millisecond are shown as one millisecond; again, Chapter 3 analyses these regions. Nonetheless, there are more added orders for INTC at 100 milliseconds (plus a small range of about ±5 ms) than at 110 or 90 milliseconds. This peak, however, is not as accentuated as it is for other stocks. At 1,000 milliseconds, there is another peak, which is much more visible. This
indicates that some algorithms are programmed to place limit orders very regularly. Because of network delays, we would expect those peaks a little later than 100 milliseconds, perhaps at 101 or 102 milliseconds: the information that no further limit orders have been placed needs some time to arrive, as does the placement of the limit order itself. So it is possible that the peaks are not market-induced but only reflect a static algorithmic order placement strategy – or both.

We find the distribution of the inter-order placement times rather surprising; we expected some kind of normal distribution close to zero and perhaps one or more peaks. However, the actual distributions are usually best fit by a Weibull distribution, as is the case for the limit order lifetime. Its scale parameter β0 roughly decreases with higher limit order activity, meaning there is relatively more density at values close to zero and less at other values.

Figure 2.4 shows boxplots comparable to those for the limit order lifetime in Section 2.3.2. The boxplots for the ‘special’ times (100, 250, 500, 750, 1,000, and 2,000 milliseconds) depict the differences between the actual values and the Weibull fit. The fits are not as accurate as they are for the limit order lifetimes, but the Weibull distribution is still a reasonable model for inter-order placement times. Additionally, there is a peak at the 100 millisecond inter-order placement time. To check for peaks in the distribution that may have been missed in the boxplot visualisation, we perform the same series of t-tests with these data as for the limit order lifetimes. Table 2.5 shows the results of the t-tests on the difference between the actual values of the inter-order placement times and the Weibull fit, compared with the average of the surrounding values, 1/(2n) ∑_{t=T−n, t≠T}^{T+n} g(t).
The t-tests and their hypotheses are structured in the same way as in Section 2.3.2. Again, we choose a value of n = 5.
Figure 2.4: Boxplots of g(t), i.e., the difference between the actual values of the inter-order placement times and the Weibull distribution at duration t. In each sub-figure, the ‘special’ time is located in the middle of the set of five boxplots. To its left is the boxplot for an inter-order placement time two milliseconds shorter, and on the very left for an inter-order placement time 40 milliseconds shorter than the special time of interest. For example, in the case of a special time of 100 milliseconds, on the very left is the boxplot for an inter-order placement time of 60 milliseconds, next to it the one for 98 milliseconds, in the middle is the boxplot for a duration of 100 milliseconds, and to the right follow 102 and 140 milliseconds.
As for the limit order lifetimes, the only significant peak is at 100 milliseconds. Again, 1,000 and 2,000 milliseconds are promising, but fail to be statistically significant. However, some stocks show distinct peaks at 1,000 milliseconds, but they are not as prevalent as the 100 millisecond peaks.
Table 2.5: Results of the t-tests for peaks at special times t of the inter-order placement times. As in Table 2.3, the first column shows the inter-order placement time that is to be tested for a local maximum. The second to fourth columns show the mean, the standard error, and the 95 per cent confidence interval of the actual values for that specific inter-order placement time. The fifth column shows the average of the ten values around the inter-order placement time under consideration. The sixth column shows the p-value for H1,3: g(T) > 1/10 ∑ g(t). ***, **, * indicate significance at the 1, 5, and 10 per cent levels, respectively.

t       Mean      Std. Err.   95% Conf. Interval      1/10 ∑ g(t)   Pr(T > t)
100     -.00265   .00005      (-.00274, -.00256)      -.00281       .0005***
250     -.00126   .00002      (-.00130, -.00122)      -.00130       .4914
500     -.00037   .00001      (-.00039, -.00034)      -.00037       .4844
750     -.00012   .00000      (-.00013, -.00011)      -.00012       .3311
1,000   -.00004   .00000      (-.00004, -.00003)      -.00004       .2008
2,000   -.00000   .00000      (-.00000, -.00000)      -.00000       .2376
Chapter 3 provides research on the values close to zero with a dataset whose timestamps are more precise than one millisecond. We expect to find some more peaks around zero, because with today's computer technology it would not be surprising to see algorithmic traders adopt microsecond trading strategies.
2.3.4 Delete Message → Add Message

In this section, we analyse order-revision times, i.e., the time that passes between a deletion and the next inserted limit order for the same stock. The reasoning behind this mode is that algorithms that place limit orders and delete them shortly afterwards are quite likely to add another order with adjusted limit order properties. They will either do this right away or wait some time to see how the market reacts to the deletion of the limit order. Again, after having seen that there are patterns in the
[Add → Delete] and [Add → Add] modes, which were discussed in Sections 2.3.2 and 2.3.3, it seems likely that there are patterns in this mode as well.

As already mentioned, in this mode as well as in the [Add → Add] mode, there is the possibility that noise from other traders prevents us from operating with an exact dataset. For example, if a trader decides to delete his or her limit order, it is possible that other traders place a limit order before he or she places a new one. This sometimes prevents us from measuring the original intention of the traders. One could think of an extreme case in which an algorithm is programmed to add a limit order exactly 100 milliseconds after the deletion of a preceding one. This algorithm would leave no trace in our database if there were one or more traders who place limit orders within less than 100 milliseconds after the deletion of the original limit order.

Nonetheless, the data indeed reveals patterns. Again, 100 milliseconds seems to be an appealing time to act for algorithms (or their programmers). Figure 2.5 shows histograms of the nine stocks that were also used for the other modes. It is obvious that some stocks show a clearly visible peak in their distribution. Especially striking are the peaks for the stocks with less limit order activity, possibly due to the reduced amount of noise interfering with the analysis. What becomes apparent, however, is that in most cases new orders come in after only a few milliseconds. Of course, this could be a coincidence, but again we would have expected something like a normal distribution, e.g., with a mean of roughly the trading time divided by the number of limit orders on that day. Perhaps this distribution should be skewed to the left because trades and liquidity attract trades. However, the data shows that in most cases the distribution is not normal at all. It is, therefore, likely that algorithms delete the original order and place another order more or less immediately. This consideration is supported by the analysis of the limit order lifetime structure, which shows that the median lifetime is often only a few hundredths of a second. As for the other modes, Chapter 3 describes the sub-millisecond region more accurately.
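The following minimal sketch illustrates the [Delete → Add] measurement and, implicitly, the noise problem discussed above: for every deletion, the time until the next insertion for the same stock is recorded, regardless of which market participant that insertion comes from. The array layout and millisecond units are assumptions for illustration.

import numpy as np

def order_revision_times(delete_ts_ms, add_ts_ms) -> np.ndarray:
    """Time from each deletion to the next limit order insertion for the
    same stock (in ms).  Deletions with no later insertion are dropped."""
    adds = np.sort(np.asarray(add_ts_ms, dtype=float))
    out = []
    for td in np.sort(np.asarray(delete_ts_ms, dtype=float)):
        i = np.searchsorted(adds, td, side="right")  # first add strictly after td
        if i < adds.size:
            out.append(adds[i] - td)
    return np.array(out)

# Toy example: a deletion at t = 500 ms is followed by an insertion at
# t = 600 ms, so the measured revision time is 100 ms -- even though that
# insertion might belong to a different trader (the noise discussed above).
print(order_revision_times([500.0], [100.0, 600.0]))   # [100.]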
Figure 2.5: Histograms of order-revision times ([Delete → Add]) of nine different stocks with different levels of limit order activity. X-axis: order-revision time in seconds; ordinate: number of observations. The dashed line shows the cumulated share of revision times t relative to all observations over the day (ordinate on the right side). Highest limit order activity: INTC (Intel Corp., 167,013 observations); lowest limit order activity: LLTC (Linear Technology Corp., 14,931 observations). The histograms show limit order-revision times of under one second because the majority of observations falls into this timeframe.

The visual evidence is again put to a test with the same approach already used for the limit order lifetime and the inter-order placement time. Table 2.6 shows the results of the t-tests for the special times under consideration. In this case, however, the Weibull fit is not as good as for the other two modes, so the t-test does not take the average difference between the fit and the actual data around T as a reference point, but the average of the actual values around T. This makes the test a little weaker, but in combination with Figure 2.6, the results of Table 2.6 can be interpreted accordingly.
The null hypothesis is here

H0: RT = 1/(2n) ∑_{t=T−n, t≠T}^{T+n} Rt,
where Rt is the number of occurrences of order-revision times of t milliseconds relative to the total number of added orders for the specific stock on that trading day.
Figure 2.6: Line chart of the average relative amount of the limit order-revision time of 43 stocks and 5 trading days. On the x-axis is the revision time in milliseconds. The y-axis shows the average of the relative number of limit orders revised after t milliseconds. For example, in about 0.02 per cent of the cases it took exactly 100 milliseconds until a new order was placed, which causes the striking peak.
Table 2.6: Results of the t-tests for peaks at special times t of the limit order-revision times. As in Tables 2.3 and 2.5, the first column shows the order-revision time which is to be tested for a local maximum. The second to fourth columns show the mean, the standard error, and the 95 per cent confidence interval of the actual values for that specific order-revision time. The fifth column shows the average of the ten values around the order-revision time under consideration. The sixth column shows the p-value for H1,3: RT > 1/10 ∑ Rt. ***, **, * indicate significance at the 1, 5, and 10 per cent levels, respectively.

t       Mean        Std. Err.   95% Conf. Interval          1/10 ∑ Rt   Pr(T > t)
50      8.88×10⁻⁵   3.18×10⁻⁶   (8.25×10⁻⁵, 9.50×10⁻⁵)      8.1×10⁻⁵    .0072***
100     2.12×10⁻⁴   1.33×10⁻⁵   (1.86×10⁻⁴, 2.39×10⁻⁴)      1.0×10⁻⁴    .0000***
200     2.52×10⁻⁵   9.15×10⁻⁷   (2.34×10⁻⁵, 2.70×10⁻⁵)      2.3×10⁻⁵    .0052***
250     1.88×10⁻⁵   7.25×10⁻⁷   (1.74×10⁻⁵, 2.02×10⁻⁵)      1.8×10⁻⁵    .2425
300     1.65×10⁻⁵   6.17×10⁻⁷   (1.53×10⁻⁵, 1.77×10⁻⁵)      1.6×10⁻⁵    .0002***
400     1.32×10⁻⁵   5.56×10⁻⁷   (1.21×10⁻⁵, 1.43×10⁻⁵)      1.2×10⁻⁵    .0480**
500     1.11×10⁻⁵   4.91×10⁻⁷   (1.02×10⁻⁵, 1.21×10⁻⁵)      1.0×10⁻⁵    .0439**
600     8.40×10⁻⁶   3.98×10⁻⁷   (7.62×10⁻⁶, 9.19×10⁻⁶)      7.6×10⁻⁶    .0226**
700     6.98×10⁻⁶   3.38×10⁻⁷   (6.31×10⁻⁶, 7.64×10⁻⁶)      6.3×10⁻⁶    .0186**
750     5.98×10⁻⁶   2.93×10⁻⁷   (5.40×10⁻⁶, 6.55×10⁻⁶)      5.9×10⁻⁶    .4449
800     5.93×10⁻⁶   3.37×10⁻⁷   (5.26×10⁻⁶, 6.59×10⁻⁶)      5.5×10⁻⁶    .1171
900     6.66×10⁻⁶   3.79×10⁻⁷   (5.91×10⁻⁶, 7.41×10⁻⁶)      5.6×10⁻⁶    .0034***
1,000   7.77×10⁻⁶   4.63×10⁻⁷   (6.83×10⁻⁶, 8.68×10⁻⁶)      5.9×10⁻⁶    .0000***
1,100   4.36×10⁻⁶   2.36×10⁻⁷   (3.89×10⁻⁶, 4.82×10⁻⁶)      3.9×10⁻⁶    .0329**
1,500   3.39×10⁻⁶   2.35×10⁻⁷   (2.92×10⁻⁶, 3.85×10⁻⁶)      2.9×10⁻⁶    .0272**
1,900   2.93×10⁻⁶   2.37×10⁻⁷   (2.46×10⁻⁶, 3.40×10⁻⁶)      2.5×10⁻⁶    .0433**
2,000   3.34×10⁻⁶   2.47×10⁻⁷   (2.86×10⁻⁶, 3.83×10⁻⁶)      2.5×10⁻⁶    .0004***
Table 2.6 shows more values for t than the other tables because there are more peaks. For the sake of completeness, the timestamps that were provided in the other t-test tables are included in Table 2.6 as well, although R250 and R750 do not show any peaks with this proxy. It is apparent that for nearly all multiples of 100 milliseconds there is a measurable peak, even though it is not always clearly visible in Figure 2.6. The only clearly visible peaks are at 50, 71, and 100 milliseconds, but the peak at 71 milliseconds stems from a single stock (ASML – ASML Holding NV), which distorts the mean value. There are also visible peaks at 200, 900, and 1,000 milliseconds, but to a much smaller extent. Still, given the noise that results from the lack of a market participant identifier and the consequences described above, some peaks are distinct. For some values of t, the difference between the mean at the special time and the average around it is statistically significant even though the corresponding peak is not clearly visible.

Today's computers and network technology make it easy to place thousands of limit orders per second. The peaks at multiples of 100 milliseconds imply that some trade algorithms wait for a reaction from the market to the deletion of their limit orders before they place a new limit order. This reaction could be any change in the order book, the speed of incoming and deleted limit orders, a change in the limit order book of a correlated stock, or a change in some other benchmark, to name a few.
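As a rough illustration of how such multiples-of-100-milliseconds peaks could be screened for, the following sketch flags every multiple of 100 milliseconds whose relative frequency exceeds the mean of its neighbours by a chosen factor; the window and threshold are arbitrary illustrative choices and do not replace the t-tests reported above.

import numpy as np

def flag_100ms_peaks(rel_freq: np.ndarray, n: int = 5, factor: float = 2.0):
    """Return the multiples of 100 ms where the relative frequency is at
    least `factor` times the mean of the 2n surrounding values."""
    peaks = []
    for T in range(100, rel_freq.size - n, 100):
        window = np.r_[rel_freq[T - n:T], rel_freq[T + 1:T + n + 1]]
        if rel_freq[T] >= factor * window.mean():
            peaks.append(T)
    return peaks

# Toy example: a flat density with bumps at 100 ms and 1,000 ms.
rel = np.full(2_001, 1e-6)
rel[100], rel[1000] = 5e-5, 1e-5
print(flag_100ms_peaks(rel))   # [100, 1000]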
2.4 Conclusion

We looked for traces of algorithmic trading in raw order-book data. Like the data available to most other market participants, the dataset carries all historical order book information and high-precision timestamps, but it contains no identifier of the party that places the orders. Thus, algorithmic trade activity usually has to be measured indirectly with the use of proxies. We try to detect this computerised trade activity by searching for irregularities in the shapes of the densities of three different proxies. In this chapter, we
use the limit order lifetime, the inter-order placement time, and the order-revision time. We employ NASDAQ's ITCH data, a limit order book protocol which stores order book activity in different messages. For this analysis, we use timestamps with a precision of one millisecond, enabling us to analyse the data in a very detailed way; previous studies mostly analysed data with a precision of only 1/100 of a second. This level of precision suffices for order structures that appear in regions that humans can still perceive.

Unlike Prix et al. (2007, 2008), we do not find any significant peaks at two seconds, at multiples of 30 seconds, or at multiples of 250 milliseconds in the limit order lifetime distributions. On the contrary, we only find one significant peak, at 100 milliseconds. The difference may stem from the fact that our dataset is from late 2009, whereas Prix et al. (2007) analyse data from late 2004 and Prix et al. (2008) data from 2006. A gap of up to five years means great advances in information technology: with continuously decreasing latency, ever faster reacting algorithms can be implemented. Since NASDAQ has latencies of some 200 microseconds and Deutsche Börse latencies of a few milliseconds, it is not surprising to find traces of faster trading on the former stock exchange. It would be interesting to conduct limit order lifetime analyses over time to see their development in the light of different market phases, for example.

The second mode that we discuss is the inter-order placement time, which we use to measure the time that passes between the insertion of two limit orders for the same stock, not considering the trade direction. This mode also shows a distinct peak at 100 milliseconds. This indicates that some algorithm either places a limit order every 100 milliseconds or waits 100 milliseconds after the placement of the last limit order (of whichever market participant) until it places its own limit order to test the market's reaction to it – and the reaction of other algorithms.
The third mode describes the limit order-revision time, that is, the time that passes between the deletion of a limit order and the next placement of a limit order for the same stock. Interestingly, the order-revision time reveals many peaks, although some of them are not very pronounced. The most distinct peaks are (besides the usual one at one millisecond) the ones at 50 and 100 milliseconds. With the exceptions of 300 and 800 milliseconds, every multiple of 100 milliseconds up to 1,100 milliseconds is a local maximum which is statistically significant at least at the five per cent level. Two more peaks can be found at 1,500 and 2,000 milliseconds.

The limit order lifetime is noise-robust, because we can observe the day-unique order reference number of each order and see what happens to it. The inter-order placement time and the order-revision time, however, are subject to noise, because one market participant can add (delete) a limit order and another market participant may place a limit order independently of the first one. In such cases, random values enter the dataset, although it is unlikely that the dataset consists of random values only. Without any identifier of the market participant, and thus without the origin of the limit orders, it is impossible to remove this noise from the data in these two modes. It is likely that the peaks are the result of some sort of chain order structure, as discussed by Prix et al. (2007, 2008). Because the aim of this chapter was to show limit order structures of algorithmic traders in a modern market, this hypothesis is only an educated guess that receives some support from the peaks in the different order structure research modes.

However, when combining this information – the percentage of fleeting orders and the appearance of peaks – it is apparent that algorithmic trading in today's markets is prevalent and sometimes easily exceeds the 60 per cent that is the common estimate for algorithmic trading on modern limit order markets and very active stocks. This is true at least for very liquid stocks. Because of information asymmetry, wide spreads, and a higher risk of detection, it is likely that the share of algorithmic trading decreases with
decreasing limit order activity and liquidity, as the regressions in Section 2.3.2 indicate.

In the next chapter, we analyse the limit order dynamics in the ‘atomic’ regions of limit order trading. With a greatly increased timestamp precision, we leave the timeframes in which humans may be able to react to new or deleted limit orders and enter the milli- and microsecond environment, where algorithms exclusively encounter other algorithms.
Chapter 3
The Nanostructure of Order Dynamics

Abstract

We analyse order book message data to detect algorithmic traders. Previous papers have usually analysed order book data with a timestamp precision of one hundredth of a second. In times of co-location, those levels of accuracy are not sufficient to see the effects of ultra-high-frequency algorithms. The NASDAQ-supplied dataset is equipped with a timestamp precision of a billionth of a second. Thus, we ‘zoom in’ and research the sub-millisecond effects of algorithmic trading on the order book. We find evidence of algorithmic trading with the limit order lifetime, the order-revision time, and the inter-order placement time. We also apply the proxies separately to ETFs and stocks to see if traders and algorithms treat structured products differently than common stocks.
3.1 Introduction

Over the last decade, algorithmic trading has become one of the most important driving factors in securities markets. Both market observers and researchers agree that a large part of the traded volume on major stock exchanges such as NASDAQ, Deutsche Börse, or the NYSE is traded by algorithms and not by humans. However, the definition of algorithmic traders is still somewhat diffuse. Hendershott et al. (2011, p. 1) define them in a very general but still accurate way as ‘computer algorithms [that] automatically make certain trading decisions, submit orders, and manage those orders after submission.’

The rapid increase in algorithmic trading (see, for example, Figure 1.1 in the case of Deutsche Börse) in the last one-and-a-half or so decades has been fuelled by the Regulation National Market System (better known as RegNMS) in the United States and the Markets in Financial Instruments Directive (MiFID) in the European Union. Although the two laws differ in details, they share the main goal to ‘foster fair, competitive, efficient, and integrated equities markets and to encourage financial innovation’ (Storkenmaier and Wagener, 2010, p. 3). Both RegNMS and MiFID have led to an increased market share of ECNs and multilateral trading facilities (MTFs), which have entered into competition with the established stock exchanges and have been able to gain significant parts of overall trade activity.

Many ECNs provide an open electronic limit order book. In order to provide a liquid market, they had to find a way to fill their order books. Many algorithmic strategies generate large amounts of limit orders; alongside this liquidity generation, they generate income, which makes algorithmic trading economically very attractive as well. Consequently, ECNs often encourage algorithmic trading. They invest in fast communication and input/output technology, which enables them to provide very short times to accept, process, and respond to incoming orders – usually called (system) latency. For many algorithmic trading strategies, fast reaction times are
a crucial factor; hence, a low latency of the stock exchanges' matching systems is a key asset. Because they find an appealing environment there, many orders from algorithms are routed to ECNs. Established exchanges quickly reacted, shifted their market structure away from quote-driven towards order-driven microstructures, and also invested in IT in order to lower their latency. As a result of this technological arms race, latencies of well below one millisecond (10⁻³ s) are not uncommon. As we will show in the course of this chapter, algorithmic strategies work on the level of a few microseconds (10⁻⁶ s). In order to analyse their behaviour and the effects they could have on overall trading, datasets with a precision of a hundredth of a second or one millisecond are insufficient to show the effects of algorithmic trading (and especially its subset, HFT) on order books exactly: low timestamp precision causes too much information loss. In this chapter, we use a dataset from the NASDAQ stock exchange with a timestamp precision of one nanosecond (10⁻⁹ s). This enables us to accurately analyse high-frequency trading effects on the order book and thus the market environment that everybody is trading in.

It is still being discussed, however, how large the share of algorithmic trading really is. Brokerage firms, proprietary traders, and other financial institutions that implement algorithms do not have to publish their implementation of algorithmic trading strategies and are very secretive about them. Thus, because usually no reliable data exists, market observers and researchers have to rely on estimates of the share of algorithmic trading. The numbers are rather diverse. For example, Mary Schapiro, SEC chairwoman, sees the share of algorithmic trading relative to market volume at ‘50 per cent or more’ in 2010; Senator Kaufman (Democrat, Delaware) states that the market volume is 70 per cent algorithmic (Muthuswamy et al., 2011, p. 87). For Deutsche Börse's Xetra system, more reliable data exists because of its ATP, which was discontinued in 2009: market participants who implemented algorithms could sign up for it and save order fees. For detailed descriptions and analyses see, for example, Hendershott and Riordan (2009), Gsell (2009), Groth (2009), or
Maurer and Schäfer (2011). There, the share of algorithmic trading floats around 50 per cent.

The analysis of algorithmic trade activity is important because it is structurally different from human trade behaviour. With the rise of algorithmic trading, the whole market structure changes. For example, Hasbrouck and Saar (2009) state that the traditional interpretation of limit order traders as ‘patient providers of liquidity’ (Handa and Schwartz, 1996) has to be re-evaluated if limit order lifetimes decrease to fractions of seconds. For market participants, there are neither timely nor accurate statistics on algorithmic trading, although traders may have good reasons to want to know about the extent of algorithmic trading on ‘their’ market. The lack of accurate data on algorithmic trading prevents us from developing an easy-to-calculate method to measure it accurately. Nonetheless, we try to extract information on the approximate extent of algorithmic trading from raw order book message data that does not contain any information on the source of the messages. Currently, algorithmic trade activity is not directly measurable. With this chapter, we show the order submission and deletion activity of algorithms and help find a way to create a measure of the extent of algorithmic trade activity.

Trade activity can be analysed by more than just one measure. Market participants who work intraday to trade on small price movements have to constantly revise their open positions and orders. Therefore, we analyse the structure of the order strategies of algorithms with three proxies: the limit order lifetime, the order-revision time, and the inter-order placement time, chosen to catch actively and rapidly trading algorithms. In the course of this chapter, we often observe very short values for the proxies. We conclude that algorithms possibly have a built-in risk assessment for their limit orders, which we call the limit order risk function. This strictly concave function reflects the probability that a limit order does not optimally fit the current market. Over time, market factors change, which results in changed optimal limit order properties. When the limit order risk function reaches a pre-defined value, the algorithm deletes the old limit order and inserts a new one with
now-optimal order properties. With this concept, we can partly explain the structure of the proxies and the differences between ETFs and common stocks.

The chapter is structured as follows: Section 3.2 provides a short literature review. Section 3.3 describes the data for the empirical analysis. Section 3.4 provides the results of the empirical analysis. In Section 3.4.1, we analyse the structure of limit order lifetimes of ETFs and NASDAQ-listed common stocks, i.e., the time that passes between the insertion of a limit order and its deletion. In Section 3.4.2, we examine limit order-revision times, i.e., the time that passes between the deletion of a limit order and the next placement for the same security. In Section 3.4.3, we analyse inter-order placement times, i.e., the time that passes between two placements of limit orders. For each of these proxies, we perform various analyses and use data with a timestamp precision of less than one microsecond, which enables us to research sub-millisecond observations very accurately. Section 3.5 concludes the chapter.
3.2 Literature Review

Does it make a difference if market volume is generated by algorithms instead of humans? How does this change the market environment? These questions are causing an on-going debate in the literature. Because the field is quite new, the literature is still relatively scarce, but it is growing rapidly. This chapter is mainly influenced by Hendershott et al. (2011), Prix et al. (2007, 2008), Hasbrouck and Saar (2001, 2009), Brusco and Gava (2006), and – indirectly, for the idea of a limit order risk function – by Large (2004). Other papers are discussed in Chapter 1.

As a by-product of their paper, Hasbrouck and Saar (2001) discover that 27.7 per cent of the limit orders in their sample are deleted within two seconds. They give those orders the attribute ‘fleeting.’ Fleeting orders have disrupted the traditional view of limit orders mainly as a tool
for patient liquidity providers, one of the core assumptions of order-driven markets. Traders who place limit orders today are not at all patient. They obviously try to limit the two types of limit order risk that Handa and Schwartz (1996) describe: adverse execution and non-execution. In their 2002 paper, Hasbrouck and Saar disregard fleeting orders: these would have biased the original results, so they delete them from their dataset. Instead, they dedicate a complete paper to the analysis of fleeting orders. Hasbrouck and Saar (2009, p. 154) develop three hypotheses for the existence of fleeting orders: (i) traders try to place limit orders close to or at the best bid/offer. As soon as the price moves away from the limit order, the traders adjust their limit orders, possibly forcing them to adjust limit orders very often, which leads to fleeting orders. Hasbrouck and Saar (2009) call this the ‘chasing hypothesis.’ (ii) Traders delete the limit order to switch to a market order instead. (iii) Traders use short-lived limit orders to search for hidden orders within the bid/offer spread, which is a little like playing battleships. If the limit order does not hit an invisible order, the trader deletes the order and places a limit order with another price tag. Generally speaking, the possibility to place fleeting orders transforms the placement of a limit order from a tool providing liquidity into one demanding liquidity.

One of the earliest papers to analyse the effects of algorithmic trading is Hendershott et al. (2011). They use the electronic message traffic of the NYSE as a proxy for algorithmic trade activity. They analyse the effect of the start of autoquote in 2003 – the NYSE's automatic quote dissemination – on the amount of algorithmic trading. For large caps, they find that quoted and effective spreads become smaller after the introduction of autoquote. This indicates improving liquidity, thanks to the easier market access for algorithmic traders.

Prix et al. (2007, 2008) find patterns in the kernel density estimates of limit order lifetimes. They measure the lifetime of limit orders, i.e., the time that passes between the placement of a limit order and its deletion. Prix et al. (2007) examine two datasets generated by Deutsche Börse's
Xetra system. Their data comprise one week of limit order book data from December 2004 and one week from January 2005; each week generated around five million entries. The dataset has a precision of 1/100 of a second, whereas Xetra's matching algorithms run with a precision of 1/1,000 of a second. Protocol entries can be any of order insertion, order modification, cancellation, complete fill, partial fill, automatic deletion, and technical insertion or deletion; however, their interest concentrates on order insertions and cancellations. They use the dataset from December 2004 for the empirical analysis and test it out-of-sample with the January 2005 data to see whether the findings hold.

They find surprising peaks in the kernel density estimates of the no-fill-deletion orders (Prix et al., 2007, pp. 728 et seq.): significantly more orders are cancelled one, two, five, 30, 60, 120, and 180 seconds after their insertion than at other times. This is most likely an effect of algorithmic trade engines that are programmed to delete an order after those intervals have passed without execution. Specifically, they identify one trade strategy that produces almost exclusively limit orders with lifetimes of multiples of 60 seconds. They call these orders ‘constant initial cushion’ orders (CIC orders). In this order strategy, traders place a bid and/or an offer at time t0 with a few ticks distance (the ‘cushion’) between the limit price and the current best bid or best offer. If the order has not been executed at a predetermined time t0 + x, x ∈ {60, 120, 180, . . . } seconds, the trader deletes the order and places a new one following the same strategy. A possible reasoning behind this strategy is to trade at better prices than the BBO when larger orders arrive that consume more liquidity than is offered by the first few ticks around the BBO. Of course, the deeper the order book, the less this strategy leads to executions.

Prix et al. (2008) explore the order cancellation-reinsertion structure. Again, the authors employ two weeks of data, this time one week from December 2006 and one week from January 2007, which also come from Deutsche Börse's Xetra system. The properties of the dataset are broadly the same as those of the 2004/05 dataset of Prix et al. (2007), with around six
million protocol entries per week and a timestamp precision of ten milliseconds. They first analyse the structure of order reinsertions and conclude that the order structure [Delete → Add] is widely used as an order modification tool. This is a little surprising, because the Xetra system has a built-in feature for order modification, without the need to manually delete and reinsert the order. In addition, they find that roughly 50 per cent of all orders are part of chain structures that cause the non-random order revisions. Beyond order revisions, Prix et al. (2008) analyse how the limit order lifetime structure changed within the two years between the end of 2004 and the end of 2006. They find additional peaks at shorter lifetimes, specifically at multiples of 250 milliseconds. The lifetimes of orders that could be identified as part of chain-structure strategies differ markedly from those that could not; it is noticeable that the peaks at short lifetimes of under one second are much more striking for CIC orders than for the orders that could not be identified as part of a CIC strategy. Therefore, chain structures are clearly part of an algorithmic trade strategy, and computers can trade very quickly – as will also be shown in Section 3.4.

Brusco and Gava (2006) analyse data from the Spanish Stock Exchange. The dataset contains three months of data (July to September 2000) on 34 stocks – those belonging to the Spanish stock index IBEX, excluding Zeltia. The data reveal that fleeting limit order activity is positively correlated with volatility, the spread, trading activity, and the prior submission of market orders. Surprisingly, the depth of the limit order book does not prove to be important, which would make the CIC orders found by Prix et al. (2007) seem rather odd. However, in order to achieve a sufficient number of observations, Brusco and Gava (2006) redefine the maximum lifetime of fleeting orders: in their paper, an order counts as fleeting if its lifetime is smaller than ten seconds. Therefore, the term ‘fleeting’ as well as the results of their paper have to be taken with a pinch of salt. Nonetheless, the determinants that they find are reasonable.

The initial thoughts on a limit order risk function are partly inspired by Large (2004), who develops a limit order utility function that can help to
explain the very short-lived limit orders discovered by Hasbrouck and Saar (2001). The utility of a limit order increases as the intensity of the Poisson process that describes limit order flow increases. In his basic model, traders do not cancel their limit orders because their optimisation problem is invariant in time. In a relaxation of the assumptions, traders may place a limit order in order to see how quickly it moves forward in the price-time queue. If there are not many trading partners in the market (i.e., the Poisson intensity is low), they cancel their limit order and place a market order on the opposite side of the order book, possibly resulting in fleeting orders.
3.3 Dataset

The analysis of limit order book data usually employs data with a precision level of a hundredth of a second or one millisecond, i.e., 1/100 or 1/1,000 of a second, respectively. For example, to search for patterns in Xetra limit order book data, Prix et al. (2007, 2008) use limit order book data with a timestamp precision of 1/100 of a second, whereas the clock of Xetra runs at 1/1,000 of a second.

We use data generously provided by the NASDAQ stock exchange. It contains all entries into the order book, so it is comparable to the data Prix et al. (2007, 2008) use. It is a protocol of the order book in which every activity is stored – order insertions, deletions, partial cancellations, executions, etc. In early 2010, the timestamp precision of NASDAQ's ITCH data was upgraded from milliseconds to nanoseconds, i.e., 1/1,000,000,000 of a second. This enables us to search for order structure patterns at levels that, to our knowledge, have not been examined before.

This increase in precision is more than welcome because one of the top priorities of algorithmic and especially high-frequency trading is speed. For example, because light moves at a speed of 299,792.458 km/s, a signal carrying order information from a brokerage firm in, say, Chicago, to the NASDAQ stock
exchange in New York would need at least 3.87 milliseconds or 0.00387 seconds. Because light in fibre and electric signals in wires travel significantly more slowly, these numbers are minimal values, and the actual transmission time will be much longer. In addition, the signal from the broker needs additional time to pass routers and other network technology, increasing the brokerage firm's latency to levels well beyond four milliseconds. If the trading strategy of the Chicago-based brokerage firm were based on ultra-high-frequency models, this latency could result in significant disadvantages compared to brokerage firms located next to the exchange. Indeed, many algorithmic trading firms use the co-location service, where their engines operate from servers only a few metres away from the exchange's servers.

In this chapter, we analyse the limit order protocol from 22 February to 26 February 2010, i.e., five trading days. Within this time, 131,701,300 limit orders were placed and 125,489,818 limit orders were deleted. That means approximately 95 per cent of the added limit orders were deleted. This figure is only approximate because some orders from the week before were deleted in this week, and some limit orders that were added during the week had not been deleted by Friday's market close. We expect the error to be small, though: the number of deletions of the previous week's limit orders and the number of limit orders that stay alive over the weekend should approximately cancel each other out.

The dataset contains ETFs, stocks listed on NASDAQ, and stocks listed on the NYSE. We choose the 36 NASDAQ-listed stocks with the highest limit order activity and match them with 36 ETFs that have a similar number of added orders, in order to keep the results for the different securities comparable. The list of stocks and ETFs used for this analysis is given in Table 3.1.
Table 3.1: Ticker symbol, type, name, and number of limit orders of the stocks and ETFs used in the empirical analysis.

No.  Ticker Symbol    Type    Company Name                                                          Limit Orders
1    IYR              ETF     iShares Dow-Jones US Real Estate Index Fund                           313,998
2    SPXU             ETF     ProShares UltraPro Short S&P 500                                      302,766
3    IAU              ETF     iShares Gold Trust                                                    278,115
4    VWO              ETF     Vanguard MSCI Emerging Markets ETF                                    243,793
5    SMDD             ETF     ProShares UltraPro Short MidCap400                                    229,215
6    DUG              ETF     ProShares UltraShort Oil & Gas                                        224,297
7    EDC              ETF     Direxion Daily Emerging Markets Bull 3X Shares                        210,257
8    SH               ETF     ProShares Short S&P500                                                189,223
9    TLT              ETF     iShares Barclays 20+ Year Treasury Bond Fund                          184,564
10   IXG              ETF     iShares S&P Global Financials Sector Index Fund                       183,516
11   UPRO             ETF     ProShares UltraPro S&P 500                                            170,674
12   USD              ETF     ProShares Ultra Semiconductors                                        149,717
13   XHB              ETF     SPDR S&P Homebuilders ETF                                             147,519
14   TBT              ETF     ProShares UltraShort 20+ Year Treasury                                146,791
15   IEO              ETF     iShares Dow-Jones US Oil & Gas Exploration & Production Index Fund    146,481
16   SMH              ETF     Semiconductor HOLDRs Trust                                            142,775
17   SLV              ETF     iShares Silver Trust                                                  142,620
18   XOP              ETF     SPDR S&P Oil & Gas Exploration & Production ETF                       141,408
19   FPL              ETF     Futura Polyesters Ltd                                                 138,815
20   KRE              ETF     SPDR KBW Regional Banking ETF                                         132,532
21   XLK              ETF     Technology Select Sector SPDR Fund                                    132,232
22   TWM              ETF     ProShares UltraShort Russell2000                                      131,086
23   DAI (now DUST)   ETF     Direxion Gold Miners Bull 2X Shares                                   129,203
24   EWC              ETF     iShares MSCI Canada Index Fund                                        125,131
25   ERY              ETF     Direxion Daily Energy Bear 3X Shares                                  121,007
26   XLI              ETF     Industrial Select Sector SPDR Fund                                    120,967
27   OEF              ETF     iShares S&P 100 Index Fund                                            119,704
28   EPP              ETF     iShares MSCI Pacific ex-Japan Index Fund                              117,567
29   ICF              ETF     iShares Cohen & Steers Realty Majors Index Fund                       116,879
30   XLV              ETF     Health Care Select Sector SPDR Fund                                   114,838
31   TRA              ETF     Terra Industries Inc                                                  112,987
32   IWF              ETF     iShares Russell 1000 Growth Index Fd.                                 111,749
33   DOG              ETF     ProShares Short Dow30                                                 110,545
34   EEV              ETF     ProShares UltraShort MSCI Emerging Markets                            107,322
35   LHB              ETF     Direxion Daily Latin America Bear 3X Shares                           104,625
36   AGQ              ETF     ProShares Ultra Silver                                                102,882

1    QCOM             NASDAQ  QUALCOMM Inc                                                          314,698
2    INTC             NASDAQ  Intel Corp                                                            307,902
3    CENX             NASDAQ  Century Aluminum Co                                                   283,047
4    BRCM             NASDAQ  Broadcom Corp                                                         244,583
5    STLD             NASDAQ  Steel Dynamics Inc                                                    229,428
6    MSFT             NASDAQ  Microsoft Corp                                                        225,705
7    STX              NASDAQ  Seagate Technology PLC                                                211,443
8    BRCD             NASDAQ  Brocade Communications Systems Inc                                    189,830
9    CSCO             NASDAQ  Cisco Systems Inc                                                     184,703
10   WFMI (now WFM)   NASDAQ  Whole Foods Market Inc                                                181,518
11   ORCL             NASDAQ  Oracle Corp                                                           170,411
12   INTU             NASDAQ  Intuit Inc                                                            149,600
13   NTAP             NASDAQ  NetApp Inc                                                            148,801
14   CTXS             NASDAQ  Citrix Systems Inc                                                    147,367
15   CTSH             NASDAQ  Cognizant Technology Solutions Corp                                   146,316
16   SNDK             NASDAQ  SanDisk Corp                                                          143,250
17   UAUA (now UAL)   NASDAQ  United Continental Holdings Inc                                       142,098
18   MRVL             NASDAQ  Marvell Technology Group Ltd                                          141,903
19   NIHD             NASDAQ  NII Holdings Inc                                                      138,778
20   ASML             NASDAQ  ASML Holding NV                                                       132,701
21   DTV              NASDAQ  DIRECTV                                                               132,340
22   MU               NASDAQ  Micron Technology Inc                                                 131,756
23   XLNX             NASDAQ  Xilinx Inc                                                            130,376
24   GILD             NASDAQ  Gilead Sciences Inc                                                   126,173
25   LLTC             NASDAQ  Linear Technology Corp                                                122,328
26   APOL             NASDAQ  Apollo Group Inc                                                      120,823
27   TEVA             NASDAQ  Teva Pharmaceutical Industries Ltd                                    119,679
28   MCHP             NASDAQ  Microchip Technology Inc                                              118,128
29   PTEN             NASDAQ  Patterson-UTI Energy Inc                                              116,396
30   NVDA             NASDAQ  NVIDIA Corp                                                           113,321
31   KLAC             NASDAQ  KLA-Tencor Corp                                                       112,373
32   AMLN             NASDAQ  Amylin Pharmaceuticals Inc                                            111,876
33   CMCSA            NASDAQ  Comcast Corp                                                          111,031
34   HCBK             NASDAQ  Hudson City Bancorp Inc                                               104,100
35   LBTYA            NASDAQ  Liberty Global Inc                                                    104,095
36   SBUX             NASDAQ  Starbucks Corp                                                        102,938
The structure of the data enables us to measure limit order lifetimes without any noise. Each new limit order that arrives at NASDAQ receives a day-unique order reference number, and every subsequent change or deletion of that order is marked with this number. That means that once a limit order is placed, its lifetime can be determined by looking up its deletion time. The lifetime is then L = t_d − t_p, where L is the lifetime, t_d the deletion time and t_p the placement time. The other two modes are subject to noise. ITCH data is anonymous and does not carry any identifier of the market participant who places the limit order. Thus, we cannot detect every structured approach in these two proxies, because any other market participant can add a limit order before the one we are looking at does. However, because of the high speed of algorithmic trading, high-frequency limit order strategies nevertheless leave visible footprints.
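To make this computation concrete, the following minimal sketch pairs add and delete messages via the order reference number. The message layout and field names are hypothetical simplifications, not the actual ITCH format; timestamps are integer nanoseconds since midnight.

```python
from collections import namedtuple

# Hypothetical, simplified order book messages; real ITCH messages are
# binary and carry many more fields.
Message = namedtuple("Message", ["timestamp_ns", "msg_type", "order_ref"])

def limit_order_lifetimes(messages):
    """Return lifetimes L = t_d - t_p for all orders added and deleted in the stream."""
    placement_time = {}                    # order_ref -> t_p
    lifetimes = []
    for msg in messages:
        if msg.msg_type == "ADD":
            placement_time[msg.order_ref] = msg.timestamp_ns
        elif msg.msg_type == "DELETE" and msg.order_ref in placement_time:
            lifetimes.append(msg.timestamp_ns - placement_time.pop(msg.order_ref))
    return lifetimes

# Tiny illustrative stream: order 18 lives for 150 microseconds,
# order 17 for almost exactly 100 milliseconds.
stream = [
    Message(34_200_000_000_000, "ADD", 17),
    Message(34_200_000_050_000, "ADD", 18),
    Message(34_200_000_200_000, "DELETE", 18),
    Message(34_200_100_000_123, "DELETE", 17),
]
print(limit_order_lifetimes(stream))       # [150000, 100000123] nanoseconds
```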
3.4 Empirical Results
As we have shown in Chapter 2, a large share of limit orders for highly liquid stocks is deleted within a matter of a few milliseconds. The aim of that chapter was to show the basic structure of the order dynamics within the time frame of fleeting orders as defined by Hasbrouck and Saar (2001, 2009). In Chapter 2, we use a dataset with a timestamp precision of a millisecond, which is sufficient for the microstructure of order dynamics. As a consequence, all limit orders with lifetimes, inter-order placement times, and order-revision times of under one millisecond were shown as one millisecond. In 2010, however, NASDAQ's timestamp precision was increased to nanoseconds. This improvement enables us to 'zoom in' and see what really happens in the atomic regions of limit order data. As in Chapter 2, we perform a limit order book analysis based on three different modes: first, we look at the limit order lifetime, which measures the time between the placement of a limit order and its deletion.
Second, we analyse the limit order revision time: we measure the time that passes between the deletion of a limit order and the next placement of a limit order. The third mode is the inter-order placement time. This is the time during which the order book for a specific stock does not receive new limit orders. Because we were interested in the 'algorithmic macro level' using millisecond timestamps in Chapter 2, there was neither need nor intention to 'zoom in' on the few-millisecond lifetimes. After a short recapitulation of the evidence on the macro level of order lifetimes, we examine the regions that are obviously the most interesting for many algorithms: the sub-millisecond lifetimes.
3.4.1 Limit Order Lifetimes

Limit order lifetimes are fundamentally non-normally distributed. Rather, they can be described with the Weibull distribution, which defines a survival probability S of

P(T > t) = S(t) = exp(−exp(β0) t^p),     (3.1)

where β0 is the scale parameter and p is the shape parameter (Cleves et al., 2002, p. 212). The estimated β0 in our dataset is consistently greater than zero and smaller than one, and p averages around 0.31, which yields a hyperbolic distribution. Many limit orders are deleted before their execution within a handful of milliseconds. The more limit order placement activity there is for an equity, the shorter the limit order lifetime becomes. For the most active stocks or ETFs, it is not uncommon that more than 80 per cent of limit orders are deleted without execution within less than one second. Following Prix et al. (2007, 2008), we look for irregularities in the densities of limit order lifetimes. Whereas Prix et al. (2007, 2008) find multiple peaks in the kernel density estimations of their Xetra datasets at 250 milliseconds, two seconds and multiples of 30 seconds, we only find one peak at 100 milliseconds in our NASDAQ dataset (see Chapter 2).
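As a minimal numerical illustration of this functional form (not the estimation procedure used for the dataset), the following sketch evaluates the survival function of equation (3.1) for parameter values in the reported range; the concrete numbers are illustrative assumptions.

```python
import math

def weibull_survival(t, beta0, p):
    """Survival probability P(T > t) = exp(-exp(beta0) * t**p), cf. equation (3.1)."""
    return math.exp(-math.exp(beta0) * t ** p)

# Illustrative parameter values: beta0 between zero and one, p around 0.31.
beta0, p = 0.5, 0.31
for t in (0.001, 0.01, 0.1, 1.0):          # lifetimes in seconds
    print(f"P(T > {t}) = {weibull_survival(t, beta0, p):.3f}")
```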
While some stocks or ETFs show an additional peak at one second, only the peak at 100 milliseconds is statistically significant (see Figure 3.2 and Table 2.3). Many limit orders, as already mentioned, are deleted after one or two milliseconds. With a dataset of unmatched timestamp precision, we can examine the 'nanostructure' of today's stock markets and capture ultra-high-frequency algorithmic trade activity. The result is surprising. At the micro level, with millisecond timestamps, we only find one significant peak. With the more exact timestamps, it becomes clear that the entire density of limit order lifetimes consists of peaks that are invisible at a lower 'resolution'. The ultra-short lifetimes show peaks at multiples of 50 microseconds. Figure 3.1 illustrates this with four exemplary histograms, which depict two NASDAQ-listed stocks and two ETFs. The individual assets were chosen randomly, and the day presented is 22 February. The histograms show the frequency of limit order lifetimes in the interval of [0, 2] milliseconds. The solid line shows the cumulated share of limit orders deleted until time t relative to the total amount of limit orders for the security on the day. For example, observe that more than 15 per cent of the limit orders placed during the day on CENX were deleted within two milliseconds. Figure 3.2 shows the cumulative average share of limit orders that have been deleted within n seconds. The share is calculated relative to the total number of limit orders that have been inserted on the same day. The figure shows two curves: one shows the average limit order lifetime of 36 ETFs, the other one that of 36 NASDAQ stocks, covering five trading days, i.e., one trading week. The main underlying function – disregarding the jump at 100 milliseconds – is concave and approaches unity. On average, limit orders are deleted within a very short time. In the sample of five trading days, more than 50 per cent of inserted limit orders have been deleted within approximately 0.6 seconds. It is noticeable that the graphs of the stocks and ETFs differ a little.
Figure 3.1: Exemplary histograms of order lifetimes. Each bar represents a timeframe of 100 nanoseconds from 0 to two milliseconds. Depicted are the two ETFs QQQQ (Powershares QQQ, 125,680 observations) and VWO (Vanguard Emerging Markets ETF, 39,121) and two stocks listed on NASDAQ: CENX (Century Aluminum Co, 57,783) and INTC (Intel Corp, 48,977). Solid line: cumulated share of limit order lifetimes relative to the total amount of limit orders on the day in per cent.
The basic structure of limit order deletion times is in both cases a concave function with a jump at 100 milliseconds. Stocks, however, tend to have a greater proportion of limit orders with a lifetime of below 100 milliseconds, but the slope of the function of limit order lifetimes decreases more rapidly for stocks than for ETFs. The functions in Figure 3.2 can possibly be regarded as limit order risk functions f(·). The probability that a limit order does not optimally fit market conditions increases over time, ∂f(·)/∂t ≥ 0.
Figure 3.2: Cumulative average limit order lifetimes. Solid line: NASDAQ-listed stocks, dashed line: ETFs. X-axis: limit order lifetime in seconds. Y-axis: share of limit orders that have been added and deleted on the same day.
To fit their limit orders to the market, traders constantly have to adjust price and/or quantity tags in order not to be exposed to the two main risks of limit orders: non-execution risk and undesired execution risk (Handa and Schwartz, 1996). Every order starts with a risk of non-optimality greater than zero, because no model can perfectly reflect reality, nor can it predict the future accurately. No model, however complex, can say with absolute certainty that the properties of any limit order are objectively optimal, because models are always less complex than reality. At some point in time, external factors change and the trading system deletes the limit order in favour of a new one. This possibly leads to the clustering of observations at values close to zero in all the proxies. This 'baseline risk' is reflected by the rapid
increase of the function, especially in the regions close to zero. The less certain the perceived optimality of the calculated limit order is, the more likely it is that the limit order will be deleted after a very short time. Because algorithms do not mind acting within microseconds or at any other speed that hardware and software allow, this risk level can be adjusted with almost arbitrary precision. A wide range of factors can influence the suitability of limit orders: bid and offer price, order book depth, order book change rate, ad hoc news, position of the order in the order book, (implied) volatility, each of these factors for some benchmark or a correlated asset, and many more. The value of the function, and perhaps even the function itself, changes over time, and it can change rather rapidly. The more limit orders there are, the quicker market conditions change and the more rapidly limit orders have to be adjusted. Obviously, with a growing share of high-frequency trading engines in the market, this results in a self-reinforcing process, because algorithms change the very factors they observe. We now turn to the scale parameter β0 of the Weibull function and regress it against the log of the number of limit orders. This univariate regression shows that the more limit order activity there is for a security, the shorter-lived its limit orders become. As mentioned earlier, the Weibull distribution fits the survival probabilities of limit orders remarkably well. However, the peak cannot be replicated by the smooth parameterised curve and distorts the parameter estimation. Hence, we left lifetimes of around 100 milliseconds out of consideration; specifically, we ruled out lifetimes with 0.095 s ≤ L ≤ 0.105 s. We chose this range because the peak is not only at exactly 100 milliseconds but also at some lifetimes around it, probably due to effects of the IT infrastructure. We ran separate regressions for ETFs and common stocks. The estimation results are shown in Table 3.2.
         Coeff.   Est. Result   Std. Err.   95% Conf. Int.     N     Adj. R²   Avg. p
ETF      α        -3.182        0.530       -4.229, -2.135     175   .138      .321
         β         0.280        0.052        0.177,  0.382
NASDAQ   α        -2.776        0.560       -3.880, -1.671     175   .099      .317
         β         0.246        0.056        0.138,  0.354

Table 3.2: Regression results of y = α + βx + ε, where the scale parameter β0 is the dependent variable and the natural logarithm of the number of limit orders that have been added and deleted on the same day is the independent variable.
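The form of this regression can be sketched as follows. The sketch below is purely illustrative: the estimated scale parameters and order counts are invented numbers, and plain least squares stands in for the estimation actually used for Table 3.2.

```python
import numpy as np

# Hypothetical inputs: one estimated Weibull scale parameter beta0 per
# security-day and the matching number of limit orders added and deleted
# on that day (all values invented for illustration).
beta0_hat  = np.array([0.28, 0.35, 0.22, 0.45, 0.31])
num_orders = np.array([120_000, 180_000, 95_000, 310_000, 150_000])

# Univariate OLS of y = alpha + beta * ln(number of limit orders) + eps.
x = np.log(num_orders)
X = np.column_stack([np.ones_like(x), x])
(alpha, beta), *_ = np.linalg.lstsq(X, beta0_hat, rcond=None)
# Positive beta: busier securities have larger scale parameters, i.e. shorter-lived orders.
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```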
The two types of assets are, therefore, broadly comparable in terms of how limit order lifetimes change in relation to limit order activity. The constant is a little smaller for stocks than for ETFs, but the slope of the regression curve is less steep. In both cases, however, the slope is positive. This means that the more limit orders arrive at the stock exchange, the higher the probability of very short-lived limit orders with a lifetime of only a handful of milliseconds becomes. With the addition of a dummy variable δ, which is 1 when t ≥ 0.1 s and 0 otherwise, the Weibull distribution fits the actual data best. The fitted Weibull curve is then
P(T > t) = S(t) = exp(−exp(β0) t^p) + δd, where d is the value of the jump in the deletion probability. The value of d is on average 0.045 for ETFs and 0.058 for NASDAQ stocks. This indicates that, on average, around five per cent of all inserted limit orders are deleted without execution after precisely 100 milliseconds. Limit order lifetimes over a trading day usually show a distinct break between market hours and non-market hours. Trading algorithms that generate many short-lived limit orders are most active during market hours, when the limit order activity is highest. Figure 3.3 shows the results of an aggregation of limit order lifetimes and numbers of limit orders over the five trading days in February 2010. The figures were created by splitting the trading day into five-second intervals. While traders place limit orders for both ETFs and NASDAQ-listed stocks in pre-market hours, the data only contain limit orders for ETFs in after-market hours. In addition to the different average limit order lifetimes during market and non-market hours, the lifetimes during market hours show a rough reverse smile: the lifetimes shortly after market opening and before market close are shorter than at noon. The lower two figures show the total amount of limit orders arriving in each five-second interval. They show the well-known smile effect with more limit order activity at
the beginning and at the end of the trading day than around noon (see, for example, Jain and Joh (1988), Foster and Viswanathan (1993), or Biais et al. (1995)). This supports the hypothesis that algorithmic trade strategies are most active when many limit orders are in the market, perhaps in order to hide themselves from sniffer algorithms that seek to reverse-engineer other algorithms' strategies to front-run them. Figure A.1 shows the average lifetimes during market hours, i.e., without the timeframes of 8:00 AM to 9:30 AM and later than 4:00 PM. Even though the lifetimes are very volatile from interval to interval, the reverse smile is clearly visible.
Figure 3.3: Top: average limit order lifetimes of limit orders for NASDAQ-listed stocks (left) and ETFs (right). Bottom: number of limit orders for the same period and assets. The x-axis represents the number of the five-second intervals, starting at 0 and representing the interval 08:00.00–08:00.04 AM, when the market opens. The dashed vertical lines show the beginning and end of market hours (09:30.00 AM, or the 1,080th interval, and 04:00.00 PM, or the 5,760th interval).
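A minimal sketch of this kind of aggregation is given below. It assumes per-order records with a placement time (in seconds after 8:00 AM) and a lifetime; the column names and values are hypothetical, not taken from the dataset.

```python
import pandas as pd

# Hypothetical per-order records; column names and values are illustrative only.
orders = pd.DataFrame({
    "placement_s": [5_401.2, 5_402.9, 5_406.1, 5_430.0, 5_431.7],   # seconds after 8:00 AM
    "lifetime_s":  [0.0001, 0.2, 1.4, 0.05, 3.0],
})

# Five-second intervals counted from 8:00 AM; interval 1,080 starts at 9:30 AM.
orders["interval"] = (orders["placement_s"] // 5).astype(int)

per_interval = orders.groupby("interval").agg(
    avg_lifetime=("lifetime_s", "mean"),
    num_orders=("lifetime_s", "size"),
)
print(per_interval)
```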
This concave pattern of average limit order lifetimes over the trading day fits smoothly into the framework of a limit order risk function. With a rapidly changing order book structure, it becomes more likely that a limit order placed at time t_0 becomes non-optimal for the market at some time t_0 + x. At the beginning and at the end of a trading day, trading activity is commonly at its highest level of the day. After market opening, market participants trade to find a consensus on the fair price of an asset and to process the information of the night and from other markets. Before market close, traders often close their positions in order to avoid overnight risk, or take the position their portfolio manager has ordered. Of course, this leads to a lot of trading action, with many order insertions and deletions. These activities increase the slope of the limit order risk function for lifetimes close to zero. Fast markets with a lot of volatility should prove to be a good environment to test the concept of a limit order risk function as well as to analyse order dynamics in turbulent conditions. Especially interesting is the so-called 'flash crash' of 6 May 2010, in which HFT was involved; see, for example, CFTC and SEC (2010, pp. 45–57). By investigating that trading day using the limit order lifetime proxy, we obtain a further indication that the proxy likely correlates with the intensity of algorithmic trading. As Figure 3.4 shows, the average lifetime of limit orders on 6 May behaves like the average limit order lifetimes in February, only with a higher variance over time than in Figure 3.3 or Figure A.1. At the time of the flash crash, the average lifetime of limit orders plummets from values ranging from around two to thirteen seconds to average lifetimes of around one second and lower. This indicates that during the flash crash the share of algorithmic trading increased, because humans are unlikely to systematically insert and delete orders within much less than a second or so. In distressed markets, there are two possibilities for algorithmic traders: the first possibility is to 'pull the plug', because the algorithm's model depends on 'normal' markets.
Figure 3.4: Upper graphs: average lifetimes of limit buy (left) and sell (right) orders on ETFs. Lower graphs: average lifetimes of buy (left) and sell (right) orders of NASDAQ-listed stocks. Five-second intervals, starting at 9:30 AM (interval number 1,080). All graphs show the time 09:30 AM to 04:00 PM, i.e., the regular trading hours, of 6 May 2010. The dashed line highlights 2:45:30 PM (or interval number 4,866), when the Dow-Jones index reached its minimum on that day with a little more than nine per cent losses for a short period of time.

Because the market changes very rapidly, the model does not yield limit orders with risk levels below the threshold. This causes the algorithm to temporarily halt trading. The second possibility is the opposite: to trade with a higher frequency because the market changes more rapidly. The faster the market changes, the faster the risk increases that the limit order no longer fits the market and needs to be replaced. This is the case if the algorithm's risk level for new orders is still below the risk threshold that would prevent it from trading.
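These two reactions can be caricatured as a simple decision rule. The following sketch is purely illustrative: the risk values and thresholds are stand-ins and do not describe any actual trading engine.

```python
def react_to_market_change(resting_order_risk, fresh_order_risk,
                           replace_threshold, halt_threshold):
    """Caricature of the two possible reactions described above.

    resting_order_risk: perceived non-optimality risk of the order already in the book
    fresh_order_risk:   risk a newly calculated order would start with
    replace_threshold:  risk level at which the resting order is deleted and replaced
    halt_threshold:     risk level above which no new order is sent at all
    """
    if fresh_order_risk > halt_threshold:
        return "halt trading"              # 'pull the plug': the model no longer fits the market
    if resting_order_risk > replace_threshold:
        return "delete and re-quote"       # trade faster: replace the stale order
    return "keep resting order"

# In a fast market both risks rise; which threshold is crossed first decides
# whether the engine halts or simply re-quotes more often.
print(react_to_market_change(resting_order_risk=0.7, fresh_order_risk=0.2,
                             replace_threshold=0.5, halt_threshold=0.9))
```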
As is visible in all lifetime graphs in Figure 3.4, the average lifetime decreases and reaches its minimum at around 2:45 PM, when the Dow-Jones hit the minimum value of the day with a loss of approximately nine per cent. This signifies that at the time of the market turmoil, algorithms were very active, placing and deleting limit orders rapidly. It would be too much, however, to draw any definitive conclusion from this indicator. The different structures of limit order lifetimes for ETFs and common stocks allow us to form the concept of a limit order risk function as perceived by market participants. If a trader deletes a limit order quickly after its placement, he or she gives up time priority in exchange for not being exposed to the two main risks of limit orders, non-execution and adverse execution. In the next section, we analyse the proxy order-revision time, which shows how long the trader waits after a deletion before he or she places a new order for the same stock.
3.4.2 Order Revision Times

Order revisions are a fundamental feature of the price formation process on order-driven stock markets. By adjusting the properties of their limit orders, traders can express changes in their perception of the supply and demand curves for securities and contribute to the price-formation process. Even though most stock exchanges provide a built-in routine for replacing limit orders, it seems common to delete a limit order and insert a new one separately. Because limit order revisions are a fundamental part of active trading, the revision time serves as the second proxy. Human traders and computers alike constantly compute their optimal limit orders – according to their market perception. That means that the time between the deletion of a non-optimal limit order and the next insertion of a limit order with adjusted properties can be very small or even zero. The revision time depends on the trader's decision whether or not to wait for a market reaction to the deletion of the limit order. However, with
a timestamp precision of a nanosecond, it is theoretically possible but very unlikely to observe revision times of zero: the new order would have to be placed with less than a nanosecond of delay, and the network technology would have to be perfectly constant, both of which are rather unlikely. The slope of the limit order risk function for small values of t is usually very steep. To keep non-optimality risk low, traders adjust their limit orders very quickly in order to adapt to changing market conditions. In Chapter 2, we have shown that many securities exhibit a peak at 100 milliseconds in the distribution of order-revision times comparable to the one of limit order lifetimes shown in Figure 3.2. Therefore, we expect a similar shape for average cumulated revision times as we find for order lifetimes. We assume that a trader is more likely to keep trading in the same stock than to forgo it, so revision times should resemble lifetimes. It is possible, but unlikely, that revision times of only a few milliseconds arise frequently from the coincidental placements of different market participants. The information about an order deletion has to arrive at the market participant, which takes some 100 to 150 microseconds even on very advanced platforms and even for users with their trading algorithms co-located next to the exchange's servers. Adding the processing time for the now-changed order book and the time it takes to send a limit order back to the stock exchange, it is likely that at least around 0.3 to 0.5 milliseconds pass. The market participant who deletes the limit order does not have to wait for the information about the deletion (as he or she generates it). As trivial as it sounds, he or she knows about the change of the order book caused by his or her deletion before all other market participants do. Thus, not considering the unlikely coincidental placement or deletion of a limit order by another market participant in such small timeframes, the market participant who is about to delete his or her limit order knows the structure of the limit order book before anyone else does. This enables him or her to calculate the optimal limit order for the then-changed order book. For the deleting market participant, the order-revision time can be infinitesimally small or
even negative, if the new order is routed more quickly than the deletion message. Of course, only algorithms can perform actions in such short intervals. Even though human traders might know exactly the properties of the next limit order they plan to place after they delete their old one, they will not be able to place it within a matter of milliseconds, probably not even within a tenth of a second. Figure 3.5 shows histograms of order-revision times for the same two stocks and two ETFs used in Figure 3.1. The data reveal that very short order-revision times of only a few milliseconds are very common. In this set of two ETFs (VWO and QQQQ) and two stocks (CENX and INTC), order-revision times are lower than or equal to two milliseconds in some 40 per cent of the cases. It is noticeable that, similar to limit order lifetimes, limit order revision times tend to cluster at multiples of 50 microseconds. This explains the step-wise shape of the cumulated average limit order revision share. To verify that the large shares of revision times of under two milliseconds in Figure 3.5 are not just special cases, we take the same securities as in Section 3.4.1 and calculate the average cumulated share of order-revision times. Again, the function describes a concave curve and resembles the one we discovered with the limit order lifetime proxy; it is much steeper, however, as the exemplary figures in Figure 3.5 show. The cluster of order-revision times of a few milliseconds may reflect the often-made argument that traders seek liquidity by placing ultra-short-lived limit orders in quick succession. In the context of a limit order risk function, this repeated insertion of limit orders also makes sense for the traders. Whereas the deleted limit order has climbed the limit order risk function over time, the trader can delete that limit order and add a new one with adjusted properties, which restarts at a non-optimality risk of zero – at least within the trader's model framework. Because computers can calculate optimal limit orders continuously, there is no necessity to wait after a deletion before placing the new order. In some cases, e.g., if there is uncertainty, it may be advisable to wait a few fractions of a second for the market's reaction to the deletion.
Figure 3.5: Histograms and cumulated proportions of limit order revision times on 22 February 2010 of VWO and QQQQ (ETFs) and CENX and INTC (NASDAQ-listed stocks). In each subfigure, the left y-axis shows the frequency of each limit order revision time (with increments of 0.2 microseconds), and the right y-axis shows the cumulated share of the order-revision time t with respect to all limit order revisions for the security on the same day.
the market’s reaction to the deletion. However, algorithms do not seem to wait very often, as can be deduced from the distribution of the cumulative average revision times. Figure 3.6 shows average cumulative limit order revision times for NASDAQ stocks and ETFs. It shows that orders for ETFs are revised differently than stocks. ETFs tend to have shorter revision times than common stocks, indicating that algorithms are more active on those structured products than on plain vanilla equity.
Figure 3.6: Average limit order revision times for ETFs (dashed line) and stocks listed on NASDAQ (solid line). The lines show the average cumulated share of order-revision times t relative to all observations of the message sequence [Delete–Add] per day per security.
The cause is probably the structured nature of ETFs. Market participants – traders and, especially in the case of ETFs, market makers – know the constituents of the ETF and can easily calculate its net asset value (NAV). This makes the risk that a limit order is not optimally suited to the current market lower than for common stocks, where traders always face the problem that they do not know the fundamental value. Because this knowledge is relatively easily available in the case of ETFs, the only edge traders have is speed, which makes the use of algorithms pivotal. Notice that the curve for stocks shows a small jump at around 0.1 seconds, which does not exist for ETFs. We examined the existence of the peak more extensively in Chapter 2.
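The NAV calculation itself is straightforward once the basket is known, as the following toy example illustrates; all holdings, prices, and share counts are invented for illustration.

```python
# Illustrative NAV calculation for an ETF whose constituents are known.
holdings = {"AAA": 1_200, "BBB": 850, "CCC": 2_000}     # shares of each constituent per creation unit
prices   = {"AAA": 35.10, "BBB": 74.25, "CCC": 12.80}   # current market prices in USD
etf_shares_per_unit = 50_000                             # ETF shares issued per creation unit

basket_value = sum(holdings[sym] * prices[sym] for sym in holdings)
nav_per_share = basket_value / etf_shares_per_unit
print(f"NAV per ETF share: {nav_per_share:.4f} USD")
```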
Figure 3.7: Average order-revision times over the trading day. The x-axis is the interval number; each interval represents five seconds of trading. The figures show the time 9:30 AM to 4:00 PM. Each dataset consists of 36 stocks or ETFs, respectively. The trading week is 22–26 February, i.e., five trading days.
Figure 3.7 shows that order-revision times over the day behave as one would assume. As the order flow decreases around noon, as shown in the lower half of Figure 3.3, the average time that passes between the deletion of a limit order and the arrival of a new limit order for the same stock increases. Before market opening, and especially after market close, order revisions become rather scarce, which leads to much longer order-revision times than during market hours. The informational content of non-market hours is limited, so we do not include them in this figure. To see how active trading changes in distressed markets, we employ the data of 6 May 2010, as we did for the limit order lifetime. We create several order-revision time figures with the technique employed in Figure 3.6.
According to CFTC and SEC (2010, p. 57), the share of algorithmic trading in total market volume hovered a little over 40 per cent over the day and peaked at 50 per cent at 2:45 PM, when the Dow-Jones index hit its minimum. Their figures refer to 17 HFT firms, i.e., algorithmic trading firms. They split the day into 15-minute intervals, and for each interval the share of the 17 HFT firms in the overall market is given. These are thus minimum values, as other algorithmic market participants were not included in the dataset. The cumulative share of limit order revisions on 6 May for four 15-minute intervals is given in Figure 3.8. It becomes apparent that the eagerness to place new orders after the deletion of a previous limit order is greater for ETFs than for NASDAQ stocks. This hints at a greater share of algorithmic price formation for ETFs than for common stocks. At 10:00 AM as well as at 12:00 PM on 6 May, the figures are comparable to a very quiet market, as given in Figure 3.6. At the time of the flash crash, however, which happened between around 2:30 and 3:00 PM, the speed of limit order revisions increases rapidly both for limit orders to sell and for limit orders to buy. This proxy alone probably does not yield a relative share of algorithmic trading in the market. However, a decrease of limit order revision times indicates a higher proportion of algorithmic trading if the value is adjusted for a faster market, which automatically brings down limit order revision times with the noise it generates. Ultra-fast revision times of only a few milliseconds can possibly serve as an approximate indicator for algorithmic limit order activity. However, without a reference dataset with an indicator for algorithmic orders, it is impossible to construct a measure.
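A minimal sketch of the per-interval calculation behind Figure 3.8 is given below; the revision times and thresholds are invented, and the function simply reports the share of observations in one window that fall at or below a given duration.

```python
def share_within(revision_times_s, threshold_s):
    """Share of revision times in one window that are <= threshold_s."""
    if not revision_times_s:
        return float("nan")
    return sum(t <= threshold_s for t in revision_times_s) / len(revision_times_s)

# Hypothetical revision times (in seconds) observed within one 15-minute window.
window = [0.0004, 0.002, 0.050, 0.3, 0.9, 1.8]
for threshold in (0.1, 0.4, 1.0):
    print(f"share <= {threshold} s: {share_within(window, threshold):.2f}")
```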
3.4.3 Inter-Order Placement Times

Although some research papers analyse inter-trade or inter-transaction durations (e.g., Engle and Russell (1998), Ivanov et al. (2004)), to the best of our knowledge there are no scientific papers on the inter-order placement duration, i.e., the time that passes from the placement of one limit order until the next limit order arrives.
Figure 3.8: Average share of order-revision times t within four 15-minute intervals on 6 May 2010. – – interval from 2:30 PM to 2:45 PM; – · · – interval from 2:45 PM to 3:00 PM; – – interval from 10:00 AM to 10:15 AM; - - - interval from 12:00 PM to 12:15 PM. For example, of all the observed occurrences of the message flow of the form [Delete–Add] that were sell orders on ETFs (upper right figure), in around 80 per cent of the cases a new order was placed within 0.4 seconds from 12:00 PM to 12:15 PM.

Limit order insertions in quick succession can have various origins. For example, the much criticised so-called 'quote-stuffing' works this way. If one market participant places many limit orders at once for one stock, the algorithms of other market participants have to read and process them, which consumes calculation time. They face a (possibly small) time disadvantage.
The market participant who placed the limit orders does not have to react to the new limit orders; he or she knew their structure beforehand simply because he or she placed them. Because the dataset is anonymous, we cannot say whether the limit orders we analyse are part of a quote-stuffing attempt or not. We can, however, say with high probability that ultra-short inter-order placement times originate from algorithms. Inter-order placement times tend to be very short by nature. If there are many limit order insertions, the time between the arrivals of limit orders decreases even without order clustering. In addition to that, it is well known that liquidity attracts liquidity, causing inter-order placement times to diminish further at times of much limit-order activity. But for the ultra-short inter-order placement times that we observe, this explanation alone does not suffice. Figure 3.9 shows the inter-order placement times of four securities for up to two milliseconds and the cumulated share relative to all placed limit orders on the day (22 February). In the case of QQQQ and INTC, with a probability of 50 per cent a trader would only have to wait two milliseconds after the insertion of a limit order to see the next one being placed. If the orders were distributed evenly over the trading day of 23,400 seconds, we would expect inter-order placement times around average values of roughly 0.2 seconds for QQQQ (125,680 orders) or 0.5 seconds for INTC (48,977 orders). For the securities VWO and CENX, the probability of a new limit order insertion within two milliseconds after the last one lies between 25 and 30 per cent. The quickest inter-order placement time is two microseconds, i.e., 0.000002 seconds – which is clearly not coincidental. The bulk of inter-order placement times lies below one millisecond. Note the different shapes of the histograms given in Figure 3.9. Every security shows the clustering around multiples of 1/20,000 of a second. While 3 per cent of all limit orders of CENX, for example, arrive earlier than 0.0002 seconds after the last limit order, CENX shows a massive clustering of limit orders at 0.00025 and 0.0003 seconds – almost 12 per cent of limit orders are placed at exactly those speeds.
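A minimal sketch of this proxy is given below: it simply takes differences of successive placement timestamps for a single security; the timestamps are invented and use integer nanoseconds.

```python
def inter_order_placement_times(add_timestamps_ns):
    """Time between successive limit order insertions for one security.

    `add_timestamps_ns` are placement timestamps (integer nanoseconds since
    midnight) of all limit orders for a single security, in arrival order.
    """
    return [t2 - t1 for t1, t2 in zip(add_timestamps_ns, add_timestamps_ns[1:])]

# Illustrative timestamps: insertions 300 microseconds and 2 milliseconds apart.
adds = [34_200_000_000_000, 34_200_000_300_000, 34_200_002_300_000]
print(inter_order_placement_times(adds))     # [300000, 2000000] nanoseconds
```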
Figure 3.9: Inter-order placement times of QQQQ and VWO (two ETFs) and CENX and INTC (two stocks listed on NASDAQ). The bars show the frequency of inter-order placement times t; the solid line shows the cumulated share of limit orders at t relative to all limit orders placed on that day for the individual security.
The limit orders of QQQQ, in comparison, come in at a higher rate: around 20 per cent of the limit orders are inserted within 0.0004 seconds. As for the limit order revision time, it is not possible to say with certainty that the two successive orders generating such short inter-order placement durations come from the same trader, because the dataset is anonymous and the realised latency of the stock market and of the market participants processing the orders is unknown. To compare ETFs and common stocks with each other, we calculate the average shares of inter-order placement times t relative to all limit orders of the individual securities. The results are given in Figure 3.10.
The figures show five curves, representing the inter-order placement durations for the intervals (0, 1/10,000), (0, 1/1,000), (0, 1/100), (0, 1/10), and (0, 1) seconds. For example, for the sample of 36 ETFs and five trading days, the duration between two successive limit order insertions is at most 0.6 × 1/100 s = 0.006 seconds with a probability of 40 per cent. This indicates that limit orders are clustered. NASDAQ-listed stocks are quite comparable to ETFs regarding this proxy, except for the jump at 100 ms. Observe that at 3/10,000 of a second, or 300 microseconds, there is a peak in the distribution of inter-order placement times, as can be seen from the curve labelled 1/1,000 of a second. In addition to that, the solid line of NASDAQ stocks shows a jump at 0.1 seconds, which is in line with the findings in Chapter 2. The jumps indicate that there are relatively static algorithms in the market that wait 300 microseconds or 100 milliseconds after a limit order has been added and then place a new one. The often much shorter inter-order placement durations for more active stocks and ETFs indicate that algorithms speed up with an increased order flow. This behaviour can also be explained with the concept of a limit order risk function: the more limit orders arrive, the faster the risk increases that a limit order placed in the market some time earlier turns non-optimal. This leads the algorithm to delete the old order in favour of a new one, leading to decreased order-revision and decreased inter-order placement times.
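A sketch of how such multi-scale cumulative shares could be computed is shown below; the durations are randomly generated stand-ins, not data from the sample.

```python
import numpy as np

def cumulative_share(durations_s, factor, scale_s):
    """Share of durations that fall within (0, factor * scale_s]."""
    durations = np.asarray(durations_s)
    return float(np.mean(durations <= factor * scale_s))

# Hypothetical inter-order placement times in seconds (randomly generated).
durations = np.random.default_rng(0).exponential(scale=0.05, size=10_000)

for scale in (1.0, 0.1, 0.01, 0.001):        # the 1/1, 1/10, 1/100 and 1/1,000 second curves
    share = cumulative_share(durations, factor=1.0, scale_s=scale)
    print(f"share of durations within {scale} s: {share:.2f}")
```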
Figure 3.10: The lines show the cumulated probabilities of inter-order placement times of at most t. Each curve represents a different interval, which can be calculated by multiplying the factor given on the x-axis by the corresponding time scale given in the legend. For example, the solid line (1/1 sec) shows the interval (0, 1) seconds, the dashed line directly below it the interval (0, 1/10) seconds, and so forth. For example, with a probability of 80 per cent, a new order arrives within one second after the last insertion for both ETFs and stocks.

3.5 Conclusion

We analyse raw order book message data for traces of algorithmic trading. From our analysis of the microstructure of order dynamics in time frames that are still perceivable by humans (Chapter 2), we know that a great deal of limit order activity occurs within a few milliseconds. Until recently, the timestamp precision of most datasets 'only' reached milliseconds, prohibiting a thorough and detailed analysis of ultra-high-frequency algorithms.
This chapter aims at helping to close this gap. We operate with an order book protocol from the US stock exchange NASDAQ. It logs all order book events, such as limit order insertions, deletions, executions, etc. In order to give traders and other market participants an idea of the way algorithmic trading works on a modern order-driven market, we analyse the structure of limit order lifetimes, limit order revision times, and inter-order placement times. All three proxies show a
clustering of observations at a few milliseconds, which makes an analysis of pure algorithmic trading behaviour impossible with timestamp precisions of a millisecond or worse. The dataset has timestamps which are exact to the nanosecond. This enables us to perform analyses at a greater accuracy than ever before, which is necessary to observe algorithms that currently operate at microsecond levels. The limit order lifetime, i.e., the time from insertion to deletion of a limit order, is the first proxy we analyse. It is also the most exact one; the data does not produce any noise, because every order is equipped with a day-unique order reference number. Many limit orders for common stocks and ETFs are only active for a few milliseconds, often only a few microseconds, before they are deleted. This proxy shows clusterings of observations at multiples of 50 microseconds. Order dynamics differ for ETFs and common stocks. A greater proportion of limit orders for common stocks than for ETFs is deleted within less than 100 milliseconds. During the flash crash on 6 May 2010, the average limit order lifetimes plummeted to very small values both for buy orders and for sell orders. Order revision times, i.e., the time that passes between the deletion of a limit order and the next insertion of a limit order, are very small. Their density resembles that of limit order lifetimes; they also show peaks at multiples of 50 microseconds. As in the case of limit order lifetimes, revision times for ETFs and for common stocks are different. This becomes apparent in the average cumulated share of revision times; the slope is much steeper for ETFs for values close to zero than for common stocks. During the flash crash on 6 May 2010, revision times for both ETFs and common stocks decreased, indicating that the share of algorithmic trading increased as compared to 'normal' markets. The inter-order placement time measures the time that passes between two successive limit order insertions for a stock. A great share of inter-order insertion durations is very short. On average, in 50 per cent of the cases a new order is placed within less than 100 milliseconds. This proxy shows the peaks at multiples of 50 microseconds that can be observed for
lifetimes and revision times. For very active stocks or ETFs, this time decreases to two or three milliseconds. The differences between ETFs and stocks are rather negligible in comparison with the two other proxies, revision times and lifetimes. As one would expect, inter-order placement times decreased for both asset classes during the flash crash. The structures of the limit order proxies can possibly be explained by a limit order risk function. The function describes the increasing risk that a limit order is no longer optimal for the market. It is an increasing function that can depend on various factors that influence both the optimal order strategy and tactics, such as, for example, depth of the order book, volatility, implied volatility, depth of correlated securities, and any other factor the trader deems important for an optimal order strategy. The short order-revision times and lifetimes could be a direct result of this. To illustrate this for limit order revision times: if a trader deletes an existing limit order, a newly inserted order restarts at a risk of non-optimality of zero. In the case of order lifetimes: if a previously inserted limit order reaches a threshold level of inappropriateness, the algorithm or trader deletes the order and inserts a new one that is optimal according to the employed model. Algorithms can do both of these things very quickly. They rapidly read and process large amounts of market information. They feed this information to their order calculation models, which act accordingly. The more powerful information and communication technology becomes, the faster the value of the limit order risk function changes, which results in decreasing values for the proxies. At the time of the flash crash, the order-revision times decreased significantly, because the market itself generated a lot of new information through large price movements. ETFs are treated differently from common stocks. Their limit order lifetimes are on average longer, their revision times shorter, and their inter-order placement times are also shorter. This can be explained by the different structure of ETFs compared to common stocks. Because ETFs are usually diversified portfolios with a known inner structure, their fundamental value can relatively easily be calculated via their net asset value. This
lowers the risk that an inserted order is not optimal for the current market conditions. This has a positive effect on limit order lifetimes, because the suitability of the order for the market decreases more slowly, especially in the regions close to zero. For the order-revision time, the effect is negative, because the trader can calculate the optimal order with less uncertainty for ETFs than for common stocks, so a new order can be placed with no delay. It goes without saying that this leads to clusters of observations close to zero, making a very accurate timestamp precision necessary. Future research will include modelling and testing a limit order risk function and the connection of the proxies to the structure of the order book. It will be interesting to see whether the ultra-short lifetimes, order-revision times, or inter-order placement times happen within, at, or slightly away from the BBO.
Acknowledgements

We are grateful to NASDAQ OMX for generously providing this order message dataset. I would also like to thank Julian Stitz for his outstanding research assistance.
3.A Appendix
Figure A.1: Average limit order lifetimes of limit orders for Nasdaq-listed stocks (left) and ETFs (right). It shows the same data as the upper part of Figure 3.3, but without the pre- and post-market hours.
Chapter 4
Conclusion

My thesis analyses order dynamics caused by algorithmic trading engines at the US stock exchange NASDAQ. The proxies for the analyses are limit order lifetime, order-revision time, and inter-order placement time. The aim of these proxies is to capture the effect of actively trading algorithmic traders. We focus on high-frequency traders who place or delete limit orders within milli- or microseconds, which is a common phenomenon in modern order-driven markets. Chapter 1 prepares the empirical analyses by introducing the relevant literature on order-driven markets and the state of research on algorithmic trading. The most common finding of the mostly empirical papers is that algorithms tend to have positive effects on market quality; according to the current majority of the research on this topic, algorithms indeed improve market factors such as liquidity, price informativeness, etc. It is difficult, however, to measure the amount of algorithmic trading, because algorithmic orders are intermingled with all other orders and do not usually carry flags. Hendershott et al. (2011), for example, use electronic message traffic as a proxy for algorithmic trading. Some researchers who publish articles about algorithmic trading have special datasets at their disposal.
For example, Hendershott and Riordan (2009), Gsell (2009), and Maurer and Schäfer (2011) use datasets from Deutsche Börse which carry flags for orders that are submitted by algorithms. Their results mainly show the beneficial effects of algorithmic trading. Without a dataset supplied by an exchange with a special indicator for algorithmic limit orders, or a dataset supplied directly by firms employing algorithmic trading engines, algorithmic trading is hard to identify. However, there are ways to analyse the order flow and see the effect of algorithms. For example, special predatory algorithms analyse real-time order flow in order to extract other algorithmic strategies. They try to reverse-engineer other algorithms' strategies to exploit their trading models. Of course, this could cost the reverse-engineered algorithm huge amounts of money in a short amount of time, because algorithms trade rapidly. Therefore, programmers of trading algorithms take special care to keep their trading strategies secret and camouflaged. However, the existence of reverse-engineering algorithms shows that it is possible to extract information on algorithmic trading from anonymous order book information. In order to provide a tool to at least broadly estimate the extent of algorithmic trading, it is necessary to rely only on widely available data. The first empirical analysis with high-frequency order book data from NASDAQ follows in Chapter 2. In this chapter, we analyse the time frame of Hasbrouck and Saar (2001, 2009), i.e., proxy values of zero to two seconds. We employ a dataset with a timestamp precision of a millisecond, which captures the algorithmic macro level. We apply the three proxies (lifetime, inter-order placement time, and revision time) to the stocks with the most inserted limit orders in the week of 9–13 October 2009. For the limit order lifetime, these results are robust, because we can track any limit order with its day-unique order reference number. Thus, we calculate the limit order lifetime of the orders by subtracting the placement time from the deletion time. In contrast to that, the other two proxies – order-revision time and inter-order placement time – are subject to noise
from other traders. However, it is safe to assume that the extent of noise in the small time frames that we analyse is negligible. Because of the anonymity of the order book, however, this cannot be proven. We find that lifetimes and inter-order placement times can be modelled with the Weibull distribution with the shape and scale parameters being lower than one, which yields a convex density converging to zero. Also, we look for irregularities in the densities of the proxies. For limit order lifetimes, we have the comparable studies of Prix et al. (2007, 2008). In their first paper, with data from 2004, they find peaks in the kernel densities at two, five, ten, and multiples of thirty seconds. In their second paper, they find more peaks at multiples of 250 milliseconds in their 2006 dataset. This suggests that algorithms become faster as time passes: the technology of both exchanges and algorithmic traders improves and lets them set up ever-faster strategies. We do not find peaks all over the density of limit order lifetimes as found by Prix et al. (2007, 2008). However, the dataset we employ also shows visible peaks in the density. For stocks listed on NASDAQ, there are peaks at 100 milliseconds and sometimes at 1,000 milliseconds (i.e., one tenth of a second and one second). However, the only significant peak is the one at 100 milliseconds. This suggests that the NASDAQ market, with latency times well below those of Deutsche Börse's Xetra, provides an even faster environment for algorithms. This shows in the other two proxies as well, even though we do not have a direct comparison from other studies. The inter-order placement time has properties comparable to the limit order lifetime. It has a hyperbolic density and shows one significant peak at 100 milliseconds, which means that orders usually arrive at a high speed and are clustered in time. The order-revision time, too, has a hyperbolic density, but many significant peaks at multiples of 100 milliseconds. This shows that algorithms wait a few tenths of a second before they place a new limit order after they have deleted an old one, possibly to wait for a reaction from other algorithms.
134 The proxies could possibly serve as an indicator for algorithmic trading. At the moment, it seems unlikely that a shape factor of the fitted Weibull curve for the lifetime density can be recalculated into a percentage share of algorithmic trading. However, it is likely that the proxies can serve as an approximate ordinal scale for the extent of algorithmic trading on financial markets. In Chapter 3 we analyse the proxies more extensively by increasing the timestamp accuracy to a nanosecond. It is conventional wisdom that algorithms often trade in milliseconds; my research in Chapter 2 showed that a great deal of activity happens within the interval [0, 5] milliseconds. It was unnecessary to employ a better timestamp precision on the algorithmic macro level we analysed in Chapter 2. Now, the more accurate timestamp precision levels make it possible to analyse the effect of algorithmic trading on the nano level. It becomes apparent that all three proxies show clusters of observations at multiples of 50 microseconds. A significant share of lifetimes, and even more in the case of order-revision times and inter-order placement times, occurs within two milliseconds. For example, it is not uncommon that 50 per cent of the inter-order placement times and 40 per cent of order-revision times are lower than two milliseconds. In addition to this, we compare the proxies for ETFs and stocks. ETFs are usually baskets of assets, and their constituents and weightings are known. Therefore, their idiosyncratic risk is lower than for individual stocks. This enables algorithms to calculate optimal limit orders much more accurately and with less uncertainty, which leads to longer lifetimes, shorter inter-order placement times, and shorter order-revision times. I explain shorter revision times and longer lifetimes with the framework of a limit order risk function. The risk that a limit order does not fit optimally to the market increases with every change of the market that traders or their models perceive as potentially price-moving. Especially for stocks, idiosyncratic risk makes calculations of optimal limit orders difficult and uncertain. After a change of an influencing factor, limit orders are deleted to place a limit order that
starts with a lower risk of sub-optimality. This behaviour yields decreasing lifetimes as IT becomes faster and market models become more accurate. The limit order risk function explains lower revision times as well. Revision times become shorter when the computer can be certain that the model correctly mirrors the optimal limit order strategy. As market models become better, algorithms can constantly calculate the best limit order for the security within its market environment and do not need to wait for other algorithms or humans to make a statement by placing an order. This continuous calculation leads to more revision times of near zero. For ETFs, this effect is more pronounced than for stocks, because their fundamental value, and therefore the optimal order strategy, is easier to calculate. The limit order risk function has less explanatory power only for the difference between the steeper inter-order placement time function of ETFs and that of stocks. Even though the difference is not as clearly visible for inter-order placement times as for the other two proxies, there is a small difference, which could be explained by limit order clustering, i.e., the insertion of many non-marketable limit orders to gain a time advantage over competitors. In Chapter 3, we have shown that a fairly large fraction of the active trading activity that we measure with the three proxies takes place in the time region of a few milliseconds. In Chapter 2, we have shown that almost all limit order activity that the proxies capture takes place within two seconds or less. It is unlikely that these order dynamics are the effect of the behaviour of human traders. The results show that it should be possible to create a rough measure of algorithmic activity on limit order markets by using order book protocols. Human traders could then choose whether they want to compete with their algorithmic counterparts or not. But even if a stock shows only little activity from algorithmic traders, badly priced limit orders can always be picked off by algorithms.
The two empirical chapters introduce three proxies and analyse their structure in order to study trading algorithms at the millisecond and nanosecond level. In addition, we compare ETFs and stocks. Beyond the scope of this thesis, future research could analyse the interdependency between the shape and scale parameters of the Weibull fit and various market factors, e.g. volatility, upcoming news, proximate order activity, or time of day; an illustrative sketch of such an analysis follows at the end of this chapter. A direct comparison of the actual share of algorithmic trading activity and the behaviour of the proxies could lead to a rough real-time measure of the share of algorithmic trading. It is not unlikely that this share (and also the order-to-trade ratio) can be modelled, which would increase the accuracy of market models. Current multi-agent market models usually include some form of noise trader, informed trader, and chartist; implementing an algorithmic trader with the order behaviours we discovered could potentially overturn many findings of older papers.

Algorithmic trading will continue to be an increasingly important factor in financial markets. Because the creators of algorithmic trading engines take measures to keep their functionality secret, definitive conclusions about the effects on the trading environment are difficult to draw. The majority of researchers find positive overall effects of algorithmic traders on market quality. Nonetheless, calls from practitioners for its regulation do not fade away. My work aims to help find a way to analyse algorithmic trading activity in real time. The analysis of order dynamics in regions that only algorithms can access could help traders decide whether they want to compete with algorithms.
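As a pointer to the future research suggested above, the following snippet relates per-bucket Weibull parameters of order lifetimes to a market factor such as realised volatility. The bucketing, the factor series, and the use of a simple OLS regression are assumptions chosen for illustration, not a proposed methodology.

```python
# Illustrative only: relate per-bucket Weibull parameters of order lifetimes
# to a market factor (here a hypothetical realised-volatility series).
import pandas as pd
import statsmodels.api as sm
from scipy import stats


def weibull_per_bucket(lifetimes: pd.Series, bucket: pd.Series) -> pd.DataFrame:
    """Fit a Weibull distribution to the lifetimes within each intraday
    bucket (e.g. 30-minute bins) and collect shape and scale per bucket."""
    rows = []
    for label, values in lifetimes.groupby(bucket):
        shape, _, scale = stats.weibull_min.fit(values, floc=0)
        rows.append({"bucket": label, "shape": shape, "scale": scale})
    return pd.DataFrame(rows).set_index("bucket")


def relate_to_factor(params: pd.DataFrame, factor: pd.Series):
    # Descriptive OLS of the shape parameter on the market factor per bucket;
    # a first look at the suggested interdependency, nothing more.
    exog = sm.add_constant(factor.reindex(params.index))
    return sm.OLS(params["shape"], exog, missing="drop").fit()
```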
Bibliography

Amihud, Y. and Mendelson, H. (1980). Dealership market: Market-making with inventory. Journal of Financial Economics, 8(1):31–53.

Arnuk, S. L. and Saluzzi, J. (2008). Toxic equity trading order flow on Wall Street. Mini White Paper, Themis Trading LLC.

Arnuk, S. L. and Saluzzi, J. (2009). Why institutional investors should be concerned about high frequency traders. Mini White Paper, Themis Trading LLC.

Bagehot, W. (1971). The only game in town. Financial Analysts Journal, 27.

Berkman, H. and Comerton-Forde, C. (2011). Market microstructure: A review from Down Under. Accounting & Finance, 51(1):50–78.

Biais, B., Foucault, T., and Moinas, S. (2010). Equilibrium algorithmic trading. Working paper, Toulouse School of Economics.

Biais, B., Hillion, P., and Spatt, C. (1995). An empirical analysis of the limit order book and the order flow in the Paris Bourse. The Journal of Finance, 50(5):1655–1689.

Brogaard, J. A. (2010). High frequency trading and its impact on market quality. Working paper, Northwestern University, Kellogg School of Management.
Brusco, S. and Gava, L. (2006). An analysis of cancellations in the Spanish stock exchange. Working paper, Universidad Carlos III de Madrid.

CFTC and SEC (2010). Findings regarding the market events of May 6, 2010. Technical report, U.S. Commodity Futures Trading Commission, U.S. Securities & Exchange Commission.

Chakrabarty, B. and Tyurin, K. (2008). Market liquidity, stock characteristics and order cancellations: The case of fleeting orders. Available at SSRN, St Louis University and Indiana University.

Challet, D. and Stinchcombe, R. (2003). Limit order market analysis and modeling: on a universal cause for over-diffusive prices. Physica A, 324(1-2):141–145.

Chiarella, C. and Iori, G. (2002). A simulation analysis of the microstructure of double auction markets. Quantitative Finance, 2(5):346–353.

Cleves, M. A., Gould, W. W., and Gutierrez, R. G. (2002). An Introduction to Survival Analysis Using Stata. Stata Press, College Station, Texas.

Copeland, T. E. and Galai, D. (1983). Information effects on the bid-ask spread. The Journal of Finance, 38(5):1457–1469.

Cvitanić, J. and Kirilenko, A. (2010). High frequency traders and asset prices. Unpublished working paper, CFTC.

Danielsson, J. and Payne, R. (2002). Liquidity determination in an order driven market. Workshop on Exchange Rate Modelling.

Davis, R. (1957). The human operator as a single channel information system. The Quarterly Journal of Experimental Psychology, 9(3):119–129.

Demsetz, H. (1968). The cost of transacting. The Quarterly Journal of Economics, 82(1):33–53.
Deutsche Börse (2008). Eurex halves latency of its trading system with new interface. http://deutsche-boerse.com/dbag/dispatch/en/notescontent/gdb_navigation/press/10_Latest_Press_Releases/10_All/INTEGRATE/mr_pressreleases?notesDoc=8EE9931F4C5ACA76C125744E0027BC51.

Deutsche Börse (2009). Deutsche Börse systems with further reduction in network latency. http://deutsche-boerse.com/dbag/dispatch/en/notescontent/gdb_navigation/press/10_Latest_Press_Releases/10_All/INTEGRATE/mr_pressreleases?notesDoc=BDC9D286FDEE3E44C12575B700374865.
Engle, R. and Russell, J. (1998). Autoregressive conditional duration: A new model for irregularly spaced transaction data. Econometrica, 66(5):1127–1162.

Fong, K. Y. and Liu, W.-M. (2010). Limit order revisions. Journal of Banking & Finance, 34(8):1873–1885 (special issue: New Contributions to Retail Payments, Conference at Norges Bank, 14–15 November 2008).

Foster, F. D. and Viswanathan, S. (1993). Variations in trading volume, return volatility, and trading costs: Evidence on recent price formation models. The Journal of Finance, 48(1):187–211.

Foucault, T., Kadan, O., and Kandel, E. (2005). Limit order book as a market for liquidity. Review of Financial Studies, 18(4):1171–1217.

Garman, M. B. (1976). Market microstructure. Journal of Financial Economics, 3(3):257–275.

Gerig, A. and Michayluk, D. (2010). Automated liquidity provision and the demise of traditional market making. Working paper, University of Oxford, Saïd Business School.
Glosten, L. R. (1994). Is the electronic open limit order book inevitable? The Journal of Finance, 49(4):1127–1161.

Glosten, L. R. and Milgrom, P. R. (1985). Bid, ask and transaction prices in a specialist market with heterogeneously informed traders. Journal of Financial Economics, 14(1):71–100.

Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik, 38(1):173–198.

Groth, S. (2009). Algorithmic trading engines and liquidity contribution: The blurring of “traditional” definitions. In Godart, C., Gronau, N., Sharma, S., and Canals, G., editors, Software Services for e-Business and e-Society, volume 305, pages 210–224. Springer Boston.

Gsell, M. (2008). Assessing the impact of algorithmic trading on markets: A simulation approach. In ECIS 2008 Proceedings, Paper 225.

Gsell, M. (2009). Algorithmic activity on Xetra. The Journal of Trading, 4(3):74–86.

Gsell, M. and Gomber, P. (2009). Algorithmic trading engines versus human traders – do they behave different in securities markets? In ECIS 2009 Proceedings, Paper 71.

Handa, P. and Schwartz, R. A. (1996). Limit order trading. Journal of Finance, 51(5):1835–1861.

Hansch, O. (2003). Island tides: Exploring ECN liquidity. Unpublished working paper, Smeal College of Business Administration, Pennsylvania State University.

Harris, L. and Hasbrouck, J. (1996). Market vs. limit orders: The SuperDOT evidence on order submission strategy. The Journal of Financial and Quantitative Analysis, 31(2):213–231.
Hasbrouck, J. and Saar, G. (2001). Limit orders and volatility in a hybrid market: The Island ECN. Working paper, NYU.

Hasbrouck, J. and Saar, G. (2009). Technology and liquidity provision: The blurring of traditional definitions. Journal of Financial Markets, 12(2):143–172.

Hasbrouck, J. and Saar, G. (2011). Low-latency trading. Working paper, Stern School of Business (NYU) and Cornell University.

Hendershott, T., Jones, C. M., and Menkveld, A. J. (2011). Does algorithmic trading improve liquidity? Journal of Finance, 66(1):1–33.

Hendershott, T. and Riordan, R. (2009). Algorithmic trading and information. Unpublished Working Paper #09-08, NET Institute.

Hofstadter, D. R. (1989). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, New York.

Hollifield, B., Miller, R. A., Sandås, P., and Slive, J. (2006). Estimating the gains from trade in limit-order markets. The Journal of Finance, 61(6):2753–2804.

Inoue, T. (2006). Diversification of stock exchange trading platforms. Journal of Trading, 1(3):65–72.

Ivanov, P. C., Yuen, A., Podobnik, B., and Lee, Y. (2004). Common scaling patterns in intertrade times of U.S. stocks. Quantitative finance papers, arXiv.org.

Jain, P. and Joh, G. (1988). The dependence between hourly prices and trading volume. Journal of Financial and Quantitative Analysis, 23(3):269–283.

Large, J. (2004). Cancellation and uncertainty aversion on limit order books. OFRC Working Paper Series 2004fe04, Oxford Financial Research Centre.
Maurer, K.-O. and Schäfer, C. (2011). Analysis of binary trading patterns in Xetra. The Journal of Trading, 6(1):46–60.

Muthuswamy, J., Palmer, J., Richie, N., and Webb, R. (2011). High-frequency trading: Implications for markets, regulators, and efficiency. The Journal of Trading, 6(1):87–97.

Nanex (2010). Analysis of the “flash crash”. Internet source, http://www.nanex.net/20100506/FlashCrashAnalysis_CompleteText.html.

NASDAQ OMX (2010). Core technology. http://www.nasdaqomx.com/whatwedo/markettechnology/coretechnology/.

Ni, X.-H., Jiang, Z.-Q., Gu, G.-F., Ren, F., Chen, W., and Zhou, W.-X. (2010). Scaling and memory in the non-Poisson process of limit order cancelation. Physica A: Statistical Mechanics and its Applications, 389(14):2751–2761.

O’Hara, M. (2003). Market Microstructure Theory. Blackwell, Oxford.

Pole, A. (2007). Statistical Arbitrage: Algorithmic Trading Insights and Techniques. John Wiley and Sons, Hoboken, New Jersey.

Prix, J., Loistl, O., and Hütl, M. (2007). Algorithmic trading patterns in Xetra orders. The European Journal of Finance, 13(8):717–739.

Prix, J., Loistl, O., and Hütl, M. (2008). Chain-structures in Xetra order data. Working paper, Vienna University of Economics and Business Administration.

Ranaldo, A. (2004). Order aggressiveness in limit order book markets. Journal of Financial Markets, 7(1):53–74.

Riordan, R. and Storkenmaier, A. (2011). Latency, liquidity and price discovery. Working paper, Karlsruhe Institute of Technology.
Saluzzi, J. (2009). High-frequency trading: Red flags and drug addiction. Themis Trading LLC.

Securities and Exchange Commission (2010). Concept release on equity market structures. Technical Report Release No. 34-61358; File No. S7-02-10, SEC.

Sewell, M. (2007). Market microstructure. University College London, http://finance.martinsewell.com/microstructure/microstructure.pdf.

Smith, R. (2010). Is high-frequency trading inducing changes in market microstructure and dynamics? Working paper, Bouchet-Franklin Institute.

Storkenmaier, A. and Wagener, M. (2010). Research on competition and regulation in European equities trading: An executive summary. Technical report, Karlsruhe Institute of Technology.
Studienreihe der Stiftung Kreditwirtschaft an der Universität Hohenheim

Bände 1–11 sind nicht mehr lieferbar.

Band 12: Axel Tibor Kümmel: Bewertung von Kreditinstituten nach dem Shareholder Value Ansatz, 1994; 2. Aufl.; 1995.
Band 13: Petra Schmidt: Insider Trading. Maßnahmen zur Vermeidung bei US-Banken; 1995.
Band 14: Alexander Grupp: Börseneintritt und Börsenaustritt. Individuelle und institutionelle Interessen; 1995.
Band 15: Heinrich Kerstien: Budgetierung in Kreditinstituten. Operative Ergebnisplanung auf der Basis entscheidungsorientierter Kalkulationsverfahren; 1995.
Band 16: Ulrich Gärtner: Die Kalkulation des Zinspositionserfolgs in Kreditinstituten; 1996.
Band 17: Ute Münstermann: Märkte für Risikokapital im Spannungsfeld von Organisationsfreiheit und Staatsaufsicht; 1996.
Band 18: Ulrike Müller: Going Public im Geschäftsfeld der Banken. Marktbetrachtungen, bankbezogene Anforderungen und Erfolgswirkungen; 1997.
Band 19: Daniel Reith: Innergenossenschaftlicher Wettbewerb im Bankensektor; 1997.
Band 20: Steffen Hörter: Shareholder Value-orientiertes Bank-Controlling; 1998.
Band 21: Philip von Boehm-Bezing: Eigenkapital für nicht börsennotierte Unternehmen durch Finanzintermediäre. Wirtschaftliche Bedeutung und institutionelle Rahmenbedingungen; 1998.
Band 22: Niko J. Kleinmann: Die Ausgestaltung der Ad-hoc-Publizität nach § 15 WpHG. Notwendigkeit einer segmentspezifischen Deregulierung; 1998.
Band 23: Elke Ebert: Startfinanzierung durch Kreditinstitute. Situationsanalyse und Lösungsansätze; 1998.
Band 24: Heinz O. Steinhübel: Die private Computerbörse für mittelständische Unternehmen. Ökonomische Notwendigkeit und rechtliche Zulässigkeit; 1998.
Band 25: Reiner Dietrich: Integrierte Kreditprüfung. Die Integration der computergestützten Kreditprüfung in die Gesamtbanksteuerung; 1998.
Band 26: Stefan Topp: Die Pre-Fusionsphase von Kreditinstituten. Eine Untersuchung der Entscheidungsprozesse und ihrer Strukturen; 1999.
Band 27: Bettina Korn: Vorstandsvergütung mit Aktienoptionen. Sicherung der Anreizkompatibilität als gesellschaftsrechtliche Gestaltungsaufgabe; 2000.
Band 28: Armin Lindtner: Asset Backed Securities – Ein Cash flow-Modell; 2001.
Band 29: Carsten Lausberg: Das Immobilienmarktrisiko deutscher Banken; 2001.
Band 30: Patrik Pohl: Risikobasierte Kapitalanforderungen als Instrument einer marktorientierten Bankenaufsicht – unter besonderer Berücksichtigung der bankaufsichtlichen Behandlung des Kreditrisikos; 2001.
Band 31: Joh. Heinr. von Stein/Friedrich Trautwein: Ausbildungscontrolling an Universitäten. Grundlagen, Implementierung und Perspektiven; 2002.
Band 32: Gaby Kienzler, Christiane Winz: Ausbildungsqualität bei Bankkaufleuten – aus der Sicht von Auszubildenden und Ausbildern, 2002.
Band 33: Joh. Heinr. von Stein, Holger G. Köckritz, Friedrich Trautwein (Hrsg.): E-Banking im Privatkundengeschäft. Eine Analyse strategischer Handlungsfelder, 2002.
Band 34: Antje Erndt, Steffen Metzner: Moderne Instrumente des Immobiliencontrollings. DCF-Bewertung und Kennzahlensysteme im Immobiliencontrolling, 2002.
Band 35: Sven A. Röckle: Schadensdatenbanken als Instrument zur Quantifizierung von Operational Risk in Kreditinstituten, 2002.
Band 36: Frank Kutschera: Kommunales Debt Management als Bankdienstleistung, 2003.
Band 37: Niklas Lach: Marktinformation durch Bankrechnungslegung im Dienste der Bankenaufsicht, 2003.
Band 38: Wigbert Böhm: Investor Relations der Emittenten von Unternehmensanleihen: Notwendigkeit, Nutzen und Konzeption einer gläubigerorientierten Informationspolitik, 2004.
Band 39: Andreas Russ: Kapitalmarktorientiertes Kreditrisikomanagement in der prozessbezogenen Kreditorganisation, 2004.
Band 40: Tim Arndt: Manager of Managers – Verträge. Outsourcing im Rahmen individueller Finanzportfolioverwaltung von Kredit- und Finanzdienstleistungsinstituten, 2004.
Band 41: Manuela A. E. Schäfer: Prozessgetriebene multiperspektivische Unternehmenssteuerung: Beispielhafte Betrachtung anhand der deutschen Bausparkassen, 2004.
Band 42: Friedrich Trautwein: Berufliche Handlungskompetenz als Studienziel: Bedeutung, Einflussfaktoren und Förderungsmöglichkeiten beim betriebswirtschaftlichen Studium an Universitäten unter besonderer Berücksichtigung der Bankwirtschaft, 2004.
Band 43: Ekkehardt Anton Bauer: Theorie der staatlichen Venture Capital-Politik. Begründungsansätze, Wirkungen und Effizienz der staatlichen Subventionierung von Venture Capital, 2006.
Band 44: Ralf Kürten: Regionale Finanzplätze in Deutschland, 2006.
Band 45: Tatiana Glaser: Privatimmobilienfinanzierung in Russland und Möglichkeiten der Übertragung des deutschen Bausparsystems auf die Russische Föderation anhand des Beispiels von Sankt Petersburg, 2006.
Band 46: Elisabeth Doris Markel: Qualitative Bankenaufsicht. Auswirkungen auf die Bankunternehmungsführung, 2010.
Band 47: Matthias Johannsen: Stock Price Reaction to Earnings Information, 2010.
Band 48: Susanna Holzschneider: Valuation and Underpricing of Initial Public Offerings, 2011.
Band 49: Arne Breuer: An Empirical Analysis of Order Dynamics in a High-Frequency Trading Environment, 2013.