Introduction
As long as financial markets have existed, people have tried to forecast them, in the hope that good forecasts would bring them great fortunes. In financial practice the question is not whether it is possible to forecast, but how the future path of a financial time series can be forecasted. In academia, by contrast, the question is more whether series of speculative prices can be forecasted at all than how to forecast them. Practice and academia have therefore proceeded along different paths in studying financial time series data. Practitioners, for example, developed fundamental and technical analysis, techniques that prescribe how financial time series should and could be forecasted and that are intended to give advice on what and when to buy or sell. Academics, in contrast, focus on the behavior and characteristics of financial time series themselves and explore whether there is dependence in successive price changes that could profitably be exploited by various kinds of trading techniques. Early statistical studies, however, concluded that successive price changes are independent. These empirical findings, combined with the theory of Paul Samuelson, published in his influential paper ``Proof that Properly Anticipated Prices Fluctuate Randomly'' (1965), led to the efficient markets hypothesis (EMH). According to this hypothesis it is not possible to exploit any information set to predict future price changes. In another influential paper, Eugene Fama (1970) reviewed the theoretical and empirical literature on the EMH to that date and concluded that the evidence in support of the EMH was very extensive and that contradictory evidence was sparse. Since then the EMH has been the central paradigm in financial economics.
Technical analysis has been a popular and heavily used technique in financial practice for decades and has grown into an industry of its own. During the 1990s there was renewed academic interest in the topic, when it appeared that early studies which found technical analysis to be useless might have been premature. In this thesis a large number of trend-following technical trading techniques are studied and applied to various speculative price series. Their profitability as well as their forecasting ability is tested statistically. Corrections are made for transaction costs, risk and data snooping to answer the question whether one can really profit from perceived trending behavior in financial time series.
This introductory chapter is organized as follows. In section 1.1 the concepts of fundamental and technical analysis are presented, the philosophies underlying these techniques are explained, and the main critiques of both methods are discussed. Next, in section 1.2 an overview of the academic literature on technical analysis and efficient markets is presented. Finally, section 1.3 concludes with a brief outline of this thesis.
1.1 Financial practice
Fundamental analysis
Fundamental analysis has its roots in the firm-foundation theory, developed by numerous people in the 1930s but finally worked out by John B. Williams. It was popularized by Graham and Dodd's book ``Security Analysis'' (1934) and by Graham's book ``The Intelligent Investor'' (1949). One of its most successful practitioners known today is the investor Warren Buffett. The purpose of fundamental securities analysis is to find and explore all economic variables that influence the future earnings of a financial asset. These fundamental variables measure different economic circumstances, ranging from macro-economic (inflation, interest rates, oil prices, recessions, unemployment, etc.) and industry-specific (competition, demand/supply, technological changes, etc.) to firm-specific (company growth, dividends, earnings, lawsuits, strikes, etc.) circumstances. On the basis of these `economic fundamentals' a fundamental analyst tries to compute the true underlying value, also called the fundamental value, of a financial asset.
According to the firm-foundation theory the fundamental value of an asset should be equal to the discounted value of all future cash flows the asset will generate. The discount factor is taken to be the interest rate plus a risk premium, so the fundamental analyst must also form expectations about future interest rate developments. The fundamental value is thus based on historical data and on expectations about future developments extracted from them. Only `news', that is, new facts about the economic variables determining the true value of the asset, can change the fundamental value. If the computed fundamental value is higher (lower) than the market price, then the fundamental analyst concludes that the market under- (over-) values the asset. A long (short) position in the market should be taken to profit from this supposed under- (over-) valuation. The philosophy behind fundamental analysis is that, in the end, when enough traders realize that the market is not pricing the asset correctly, the market mechanism of demand and supply will force the price of the asset to converge to its fundamental value. It is assumed that fundamental analysts with better access to information and a more sophisticated system for interpreting and weighing the influence of information on future earnings will earn more than analysts with less access to information and less sophisticated systems. It is emphasized that sound investment principles will produce sound investment results, eliminating the psychology of the investors. Warren Buffett notes in the preface of ``The Intelligent Investor'' (1973): ``What's needed is a sound intellectual framework for making decisions and the ability to keep emotions from corroding that framework. The sillier the market's behavior, the greater the opportunity for the business-like investor.''
However, it is questionable whether traders can perform a complete fundamental analysis in determining the true value of a financial asset. An important critique is that fundamental traders have to examine many different economic variables and have to know the precise effects of all these variables on the future cash flows of the asset. Furthermore, it may happen that the price of an asset, for example due to overreaction by traders, persistently deviates from its fundamental value. In that case short-term fundamental trading cannot be profitable, and it is therefore said that fundamental analysis should be used to make long-term predictions. A problem may then be that a fundamental trader does not have enough wealth and/or enough patience to wait until convergence finally occurs. Furthermore, it could be that financial markets affect the fundamentals they are supposed to reflect. In that case they do not merely discount the future, but help to shape it, and financial markets will never tend toward equilibrium. It is thus clear that performing accurate fundamental analysis is a most hazardous task. Keynes (1936, p.157) already pointed out the difficulty as follows: ``Investment based on genuine long-term expectation is so difficult as to be scarcely practicable. He who attempts it must surely lead much more laborious days and run greater risks than he who tries to guess better than the crowd how the crowd will behave; and, given equal intelligence, he may make more disastrous mistakes.''
On the other hand, it may be possible for a trader to make a fortune by free riding on the collective expectations of all other traders. Through the market mechanism of demand and supply the expectations of those traders will eventually be reflected in the asset price in a more or less gradual way. A trader engaged in this line of thinking leaves fundamental analysis and moves into the area of technical analysis.
Technical analysis
Technical analysis is the study of past price movements with the goal of predicting future price movements. In his book ``The Stock Market Barometer'' (1922) William Peter Hamilton laid the foundation of the Dow Theory, the first theory of chart readers. The theory is based on the editorials Charles H. Dow wrote as editor of the Wall Street Journal in the period 1889-1902. Robert Rhea popularized the idea in his 1930s market letters and in his book ``The Dow Theory'' (1932). The philosophy underlying technical analysis can for the most part already be found in this early work, developed after Dow's death in 1902. Charles Dow thought that expectations for the national economy were translated into market orders that caused stocks to rise or fall in price together over the long term, usually in advance of actual economic developments. He believed that fundamental economic variables determine prices in the long run. To quantify his theory Charles Dow began to compute averages to measure market movements. This led to the creation of the Dow-Jones Industrial Average (DJIA) in May 1896 and the Dow-Jones Railroad Average (DJRA) in September 1896.
The Dow Theory assumes that all information is discounted in the averages, hence no other information is needed to make trading decisions. Further, the theory makes use of Charles Dow's notion that there are three types of market movements, also called trends: primary (also called major), secondary (also called intermediate) and tertiary (also called minor) upward and downward price movements. The aim of the theory is to detect changes in the primary trend at an early stage. Minor trends tend to be much more influenced by random news events than the secondary and primary trends and are therefore said to be more difficult to identify. According to the Dow Theory bull and bear markets, that is primary upward and downward trends, are divisible into stages which reflect the moods of the investors.
The Dow Theory is based on Charles Dow's philosophy that ``the rails should take what the industrials make.'' Stated differently, the two averages DJIA and DJRA should confirm each other. If both averages are rising, it is time to buy; when both are declining, it is time to sell. If they diverge, this is a warning signal. The Dow Theory also states that volume should go with the prevailing primary trend. If the primary trend is upward (downward), volume should increase when price rises (declines) and decrease when price declines (rises). Eventually the Dow Theory became the basis of what is known today as technical analysis. Although the theory bears Charles Dow's name, it is likely that he would deny any allegiance to it. Rather than being a chartist, Charles Dow as a financial reporter advocated investing on the basis of sound fundamental economic variables, that is, buying stocks when their prices are well below their fundamental values. His main purpose in developing the averages was to measure market cycles, rather than to use them to generate trading signals.
After the work of Hamilton and Rhea the technical analysis literature was expanded and refined by early pioneers such as Richard Schabacker, Robert Edwards and John Magee, and later by Welles Wilder and John Murphy. Technical analysis developed into a standard tool used by many financial practitioners to forecast the future price path of all kinds of financial assets such as stocks, bonds, futures and options. Nowadays many technical analysis software packages are sold on the market. Technical analysis newsletters and journals flourish. Bookstores have shelves full of technical analysis literature. Every bank employs several chartists who write technical reports spreading forecasts made with all kinds of fancy techniques. Classes are organized to introduce the home investor to the topic. Technical analysis has become an industry of its own. Taylor and Allen (1992) conducted a questionnaire survey in 1988 on behalf of the Bank of England among chief foreign exchange dealers based in London. The survey revealed that at least 90 percent of the respondents place some weight on technical analysis when forming views over some time horizons. There is also a skew towards reliance on technical, as opposed to fundamental, analysis at shorter horizons, which becomes steadily reversed as the length of the time horizon is increased. A high proportion of chief dealers view technical and fundamental analysis as complementary forms of analysis, and a substantial proportion suggest that technical advice may be self-fulfilling. There is a feeling among market participants that it is important to have a notion of chartism, because many traders use it and may therefore influence market prices. It is said that chartism can be used to exploit market movements generated by less sophisticated `noise traders'. Menkhoff (1998) conducted a questionnaire survey among foreign exchange professionals from banks and fund management companies trading in Germany in August 1992. He concludes that many market participants use non-fundamental trading techniques. Cheung and Chinn (1999) conducted a mail survey among US foreign exchange traders between October 1996 and November 1997. The results indicate that in that period technical trading best characterized 30% of traders, against 25% for fundamental analysis. All these studies show that technical analysis is broadly used in practice.
The general consensus among technical analysts is that there is no need to look at the fundamentals, because everything that is happening in the world can be seen in the price charts. A popular saying among chartists is that ``a picture is worth ten thousand words.'' Price, as the outcome of the demand/supply mechanism, reflects the dreams, expectations, guesses, hopes, moods and nightmares of all investors trading in the market. A true chartist does not even care to know which business or industry a firm is in, as long as he can study its stock chart and knows its ticker symbol. The motto of Doyne Farmer's prediction company, as quoted by Bass (1999, p.102), was for example: ``If the market makes numbers out of information, one should be able to reverse the process and get information out of numbers.'' The philosophy behind technical analysis is that information is gradually discounted in the price of an asset. Except for a crash once in a while, there is no `big bang' price movement that immediately discounts all available information. It is said that price gradually moves to new highs or new lows and that trading volume goes with the prevailing trend. Therefore the most popular technical trading rules are trend-following techniques such as moving averages and filters. Technical analysis tries to detect changes in investors' sentiments at an early stage and tries to profit from them. It is said that these changes in sentiment cause certain patterns to occur repeatedly in the price charts, because people react the same way in similar circumstances. The technical analysis literature therefore describes many `subjective' pattern recognition techniques with fancy names, such as head-and-shoulders, double tops, double bottoms, triangles and rectangles, which should be traded on once the pattern is completed.
An example: the moving-average technical trading rule.
Figure 1.1: A 200-day moving-average trading rule applied to the AEX-index in the period March 1, 1996 through July 25, 2002.
At this point it is useful to illustrate technical trading by a simple example. One of the most popular technical trading rules is based on moving averages. A moving average is a recursively updated, for example daily, weekly or monthly, average of past prices. A moving average smoothes out erratic price movements and is supposed to reflect the underlying trend in prices. A buy (sell) signal is said to be generated at time t if the price crosses the moving average upwards (downwards) at time t. Figure 1.1 shows an example of a 200-day moving average applied to the Amsterdam Stock Exchange Index (AEX-index) in the period March 1, 1996 through July 25, 2002. The 200-day moving average is exhibited by the dotted line. It can be seen that the moving average follows the price at some distance. It changes direction only after a change in the direction of the prices has occurred. By decreasing the number of days over which the moving average is computed, this distance can be made smaller, and trading signals occur more often. Although the 200-day moving-average trading rule sometimes generates its signals too late, it can be seen that the rule succeeds in detecting the large price moves that occurred in the index. In this thesis we will develop a set of technical trading rules on the basis of simple trend-following techniques, such as the above moving-average strategy, as well as refinements with %-band filters, time delay filters, fixed holding periods and stop-loss rules. We will test the profitability and predictability of a large class of such trading rules applied to a large number of financial asset price series.
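To make the rule operational, the following is a minimal sketch in Python of the signal generation just described. It assumes that prices is a pandas Series of daily closing prices; the function name, the 200-day default window and the long/short reading of the signals are illustrative choices, not the exact rule set developed later in this thesis.

\begin{verbatim}
# Minimal sketch of a moving-average trading rule (illustrative, not the
# thesis' exact rule set). Assumes `prices` is a pandas Series of closes.
import pandas as pd

def moving_average_signal(prices: pd.Series, window: int = 200) -> pd.Series:
    """+1 (long) while the price is above its moving average, -1 below."""
    ma = prices.rolling(window).mean()
    signal = pd.Series(1, index=prices.index).where(prices > ma, -1)
    return signal.where(ma.notna())  # undefined during the first `window` days

# Daily strategy returns, trading on yesterday's signal to avoid
# look-ahead bias:
#   returns = prices.pct_change()
#   strategy = moving_average_signal(prices).shift(1) * returns
\end{verbatim}

A buy signal in the sense of the text occurs on the day the signal switches from -1 to +1, that is, when the price crosses the moving average upwards.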
Critiques on technical analysis
Technical analysis has been heavily criticized over the decades. One critique is that it trades only when a trend is already established. By the time a trend is signaled, much of the move may already have taken place. Hence it is said that technical analysts are always trading too late.
As noted by Osler and Chang (1995, p.7), books on technical analysis fail to document the validity of their claims. Authors do not hesitate to characterize a pattern as frequent or reliable without making any attempt to quantify those assessments. Profits are measured in isolation, without regard for opportunity costs or risk. The lack of sound statistical analysis arises from the difficulty of programming technical pattern recognition techniques into a computer. Many technical trading rules seem to be somewhat vague statements without precise mathematical definitions of the patterns involved. However, Neftci (1991) shows that most patterns used by technical analysts can be characterized by appropriate sequences of local minima and/or maxima. Lo, Mamaysky and Wang (2000) develop a pattern recognition system based on non-parametric kernel regression. They conclude (p.1753): ``Although human judgment is still superior to most computational algorithms in the area of visual pattern recognition, recent advances in statistical learning theory have had successful applications in fingerprint identification, handwriting analysis, and face recognition. Technical analysis may well be the next frontier for such methods.''
Furthermore, in financial practice technical analysis is criticized because of its highly subjective nature. It is said that there are probably as many methods of combining and interpreting the various techniques as there are chartists themselves. The geometric shapes in historical price charts are often in the eye of the beholder. The relation of fundamental to technical analysis has been compared to that of astronomy to astrology. It is claimed that technical analysis is voodoo finance and that chart reading shares a pedestal with alchemy. The attitude of academics towards technical analysis is well described by Malkiel (1996, p.139): ``Obviously, I'm biased against the chartist. This is not only a personal predilection but a professional one as well. Technical analysis is anathema to the academic world. We love to pick on it. Our bullying tactics are prompted by two considerations: (1) after paying transaction costs, the method does not do better than a buy-and-hold strategy for investors, and (2) it's easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember: It's your money we are trying to save.''
However, technical analysts acknowledge that their techniques are by no means foolproof. For example, Martin Pring (1998, p.5) notes about technical analysis: ``It can help in identifying the direction of a trend, but there is no known method of consistently forecasting its magnitude.'' Edwards and Magee (1998, p.12) note: ``Chart analysis is certainly neither easy nor foolproof.'' Finally, Achelis (1995, p.6) remarks: ``..., I caution you not to let the software lull you into believing markets are as logical and predictable as the computer you use to analyze them.'' Hence even technical analysts warn against investment decisions based upon their charts alone.
Fundamental versus technical analysis
The big advantage of technical analysis over fundamental analysis is that it can be applied fairly easily and cheaply to all kinds of securities prices. Only some practice is needed in recognizing the patterns, but in principle everyone can apply it. Of course there also exist complex technical trading techniques, but technical analysis can be made as easy or as difficult as the user likes. Martin Pring (1997, p.3), for example, notes that although computers make it easier to come up with sophisticated trading rules, it is better to keep things as simple as possible.
Of course fundamental analysis can also be made as simple as one likes; for example, one can look at the number of cars parked at the lot of the shopping mall to get an indication of consumers' confidence in the national economy. Usually, however, many more (macro-)economic variables are needed, which makes fundamental analysis more costly than technical analysis.
An advantage of technical analysis from an academic point of view is that it is much easier to test the forecasting power of well-defined, objective technical trading rules than that of trading rules based on fundamentals. Testing technical trading rules requires only data on prices, volumes and dividends, which can be obtained fairly easily.
An essential difference between chart analysis and fundamental economic analysis is that chartists study only the price action of the market itself, whereas fundamentalists attempt to look for the reasons behind that action. Both the fundamental analyst and the technical analyst make use of historical data, but in different ways. The technical analyst claims that all information is gradually discounted in the prices, while the fundamental analyst uses all available information, including many other economic variables, to compute the `true' value. The pure technical analyst will never issue a price goal; he only trades on the buy and sell signals his strategies generate. In contrast, the fundamental analyst will issue a price goal based on the calculated fundamental value. In practice, however, investors expect technical analysts to issue price goals as well.
Neither fundamental nor technical analysis will lead to sure profits. Malkiel shows in his book ``A Random Walk down Wall Street'' (1996) that mutual funds, the biggest users of fundamental analysis, are not able to outperform a general market index. In the period 1974-1990 at least two thirds of the mutual funds were beaten by the Standard & Poor's 500 (Malkiel, 1996, p.184). Moreover, Cowles (1933, 1944) already noticed that analysts report more bullish signals than bearish ones, while in his studies the numbers of weeks the stock market advanced and declined were equal. Furthermore, fundamental analysts do not always report what they think, as became publicly known in the Merrill Lynch scandal. Internally, analysts judged certain internet and telecommunications stocks to be a `piece of shit', abbreviated to `pos' at the end of internal email messages, while they gave their clients strong advice to buy the stocks of these companies. In 1998 the hedge fund Long-Term Capital Management (LTCM) nearly collapsed. This fund traded on the basis of mathematical models. Myron Scholes and Robert Merton, well known for the development and extension of the Black & Scholes option pricing model, were closely involved in the company. Under the leadership of the Federal Reserve Bank of New York, one of the twelve regional Federal Reserve Banks in the US, the financial world had to raise a large amount of money to prevent a catastrophe. Because LTCM had large obligations in the derivatives markets which it could no longer fulfill, a default would have affected the profits of the financial companies that had taken the counterpart positions in the market. A sudden collapse of LTCM could have led to a chain reaction on Wall Street and in the rest of the financial world.
1.2 Technical analysis and efficient markets: an overview
In this section we present a historical overview of the most important (academic) literature published on technical analysis and efficient markets.
Early work on technical analysis
Despite the fact that chartists have a strong belief in their forecasting abilities, in academia it remains questionable whether technical trading based on patterns or trends in past prices has any statistically significant forecasting power, and whether it can profitably be exploited after correcting for transaction costs and risk. Cowles (1933) started by analyzing the weekly forecasting results of well-known professional agencies, such as financial services and fire insurance companies, in the period January 1928 through June 1932. Both the ability to select specific stocks that generate superior returns and the ability to forecast the movement of the stock market itself are studied. Thousands of predictions are recorded. Cowles (1933) finds no statistically significant forecasting performance. Furthermore, Cowles (1933) considers the 26-year forecasting record of William Peter Hamilton in the period December 1903 until his death in December 1929. During this period Hamilton wrote 255 editorials in the Wall Street Journal presenting forecasts for the stock market based on the Dow Theory. It is found that Hamilton could not beat a continuous investment in the DJIA or the DJRA after correcting for the effect of brokerage charges, cash dividends and interest earned when no position is held in the market. On 90 occasions Hamilton announced changes in the outlook for the market. Cowles (1933) finds that 45 of these changes of position were unsuccessful and 45 were successful. Cowles (1944) repeats the analysis for 11 forecasting companies over the longer period January 1928 through July 1943. Again no evidence of forecasting power is found. However, although the number of months the stock market declined exceeded the number of months it rose, and although the level of the stock market in July 1943 was lower than at the beginning of the sample period, Cowles (1944) finds that more bullish signals were published than bearish ones. Cowles (1944, p.210) argues that this peculiar result can be explained by the fact that readers prefer good news to bad, and that a forecaster who presents a cheerful point of view thereby attracts more followers, without whom he would probably be unable to remain long in the forecasting business.
Random walk hypothesis
While Cowles (1933, 1944) focused on testing analysts' advice, other academics focused on the behavior of the time series themselves. Working (1934), Kendall (1953) and Roberts (1959) found for series of speculative prices, such as American commodity prices of wheat and cotton, British indices of industrial share prices and the DJIA, that successive price changes are linearly independent, as measured by autocorrelation, and that these series may be well described by random walks. According to the random walk hypothesis trends in prices are spurious, purely accidental manifestations. Therefore trading systems based on past information should not generate profits in excess of equilibrium expected profits or returns. It became commonly accepted that the study of past price trends and patterns is no more useful in predicting future price movements than throwing darts at the list of stocks in a daily newspaper.
However, the dependence in price changes can be of such a complicated form that standard linear statistical tools, such as serial correlations, may provide misleading measures of the degree of dependence in the data. Therefore Alexander (1961) began defining filters to reveal possible trends in stock prices which may be masked by the jiggling of the market. A filter strategy buys when the price increases by x percent from a recent low and sells when the price declines by x percent from a recent high. Filters can thus be used to identify local peaks and troughs according to the filter size. He applies several filters to the DJIA in the period 1897-1929 and to the S&P Industrials in the period 1929-1959. Alexander (1961) concludes that in speculative markets a price move, once initiated, tends to persist, and thus that the basic philosophy underlying technical analysis, namely that prices move in trends, holds. However, he notes that commissions could reduce the results found. Mandelbrot (1963, p.418) notes that there is a flaw in the computations of Alexander (1961), who assumes that the trader can buy exactly at the low plus x percent and sell exactly at the high minus x percent, which in real trading will probably not be the case. Further, it was argued that traders cannot buy the averages and that investors can move the price themselves if they try to invest according to the filters. In Alexander (1964) the computing mistake is corrected and allowance is made for transaction costs. The filter rules still show considerable excess profits over the buy-and-hold strategy, but transaction costs wipe out all the profits. It is concluded that an investor who is not a floor trader and must pay commissions should turn to other sources of advice on how to beat the buy-and-hold benchmark. Alexander (1964) also tests other mechanical trading rules, such as Dow-type formulas, the old technical trading rules called formula Dazhi and formula Dafilt, and the moving averages that are still popular nowadays. These techniques provided much better profits than the filter techniques. The results led Alexander (1964) still to conclude that the independence assumption of the random walk had been overturned.
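The filter rule lends itself to a compact implementation. The following Python sketch, assuming prices is a pandas Series of daily closes, illustrates the mechanics only; Alexander's own computations and the execution-price corrections discussed above are not reproduced.

\begin{verbatim}
# Minimal sketch of Alexander's x% filter rule (illustrative). Position is
# +1 (long) after a rise of x from the most recent trough and -1 (short)
# after a fall of x from the most recent peak.
import pandas as pd

def filter_rule_positions(prices: pd.Series, x: float = 0.05) -> pd.Series:
    positions = []
    position = 0                     # 0 until the first signal fires
    high = low = prices.iloc[0]      # running extremes since the last signal
    for p in prices:
        if position != 1 and p >= low * (1 + x):
            position, high, low = 1, p, p    # up x% from the low: buy
        elif position != -1 and p <= high * (1 - x):
            position, high, low = -1, p, p   # down x% from the high: sell
        else:
            high, low = max(high, p), min(low, p)
        positions.append(position)
    return pd.Series(positions, index=prices.index)
\end{verbatim}

The sketch transacts at the first close satisfying the x% condition; Alexander's original computations assumed execution exactly at the trough plus x percent (or peak minus x percent), which is precisely the flaw Mandelbrot pointed out.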
Theil and Leenders (1965) investigate the dependence of the proportion of securities that advance, decline or remain unchanged between successive days for approximately 450 stocks traded at the Amsterdam Stock Exchange in the period November 1959 through October 1963. They find that there is considerable positive dependence in successive values of securities advancing, declining and remaining unchanged at the Amsterdam Stock Exchange. It is concluded that if stocks in general advanced yesterday, they will probably also advance today. Fama (1965b) replicates the Theil and Leenders test for the NYSE. In contrast to the results of Theil and Leenders (1965), Fama (1965b) finds that the proportions of securities advancing and declining today on the NYSE do not provide much help in predicting the proportions advancing and declining tomorrow. Fama (1965b) concludes that this contradiction in results could be caused by economic factors that are unique to the Amsterdam Exchange.
Fama (1965a) tries to show with various tests that price changes are independent and that therefore the past history of stock prices cannot be used to make meaningful predictions about their future behavior. Moreover, if some dependence is found, Fama argues that this dependence is too small to be profitably exploited because of transaction costs. Fama (1965a) applies serial correlation tests, runs tests and Alexander's filter technique to daily data of the 30 individual stocks quoted in the DJIA in the period January 1956 through September 1962. A runs test counts the number of sequences and reversals in a returns series: two consecutive returns of the same sign are counted as a sequence, two of opposite sign as a reversal. The serial correlation tests show that the dependence in successive price changes is either extremely small or non-existent. The runs tests do not show a large degree of dependence either. Profits of the filter techniques are calculated by trading blocks of 100 shares and are corrected for dividends and transaction costs. The results show no profitability. Hence Fama (1965a) concludes that the largest profits under the filter technique would seem to be those of the broker.
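The runs test admits a simple closed form. The Python sketch below counts runs (a new run starts at every reversal) and compares the count with its expectation under independence using the standard Wald-Wolfowitz normal approximation; it illustrates the logic of the test, not Fama's exact implementation, and for simplicity ignores zero returns rather than treating them as a third category.

\begin{verbatim}
# Minimal sketch of a runs test on the signs of a return series
# (Wald-Wolfowitz normal approximation; illustrative simplification).
import math

def runs_test(returns):
    signs = [1 if r > 0 else -1 for r in returns if r != 0]
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n_pos, n_neg = signs.count(1), signs.count(-1)
    n = n_pos + n_neg
    expected = 1 + 2 * n_pos * n_neg / n
    variance = (2 * n_pos * n_neg * (2 * n_pos * n_neg - n)
                / (n ** 2 * (n - 1)))
    z = (runs - expected) / math.sqrt(variance)
    return runs, expected, z   # |z| large: dependence in the sign sequence
\end{verbatim}

Too few runs relative to the expectation (z negative) indicate persistence, that is, trending behavior; too many runs indicate reversal behavior.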
Fama and Blume (1966) study Alexander's filters applied to the same data set as in Fama (1965a). A set of filters is applied to each of the 30 stocks quoted in the DJIA, with and without correction for dividends and transaction costs. The data set is divided into days during which long and short positions are held. They show that the short positions initiated are disastrous for the investor. But even if positions were held only on buy signals, the buy-and-hold strategy cannot consistently be outperformed. Until the 1990s Fama and Blume (1966) remained the best known and most influential paper on mechanical trading rules. Its results caused academic skepticism concerning the usefulness of technical analysis.
Return and risk
Diversification of wealth over multiple securities reduces the risk of investing. The saying ``don't put all your eggs in one basket'' has been well known for a long time. Markowitz (1952) argued that every rule that does not imply the superiority of diversification must be rejected, both as a hypothesis to explain and as a principle to guide investment behavior. Therefore Markowitz (1952, 1959) published a formal model of portfolio selection embodying diversification principles, called the expected returns-variance of returns rule (E-V rule). The model determines for any given level of anticipated return the portfolio with the lowest risk, and for any given level of risk the portfolio with the highest expected return. This optimization procedure leads to the well-known efficient frontier of risky assets, as sketched below. Markowitz (1952, 1959) argues that portfolios found on the efficient frontier consist of firms operating in different industries, because firms in industries with different economic characteristics have lower covariance than firms within an industry. Further, it was shown how the optimal risky portfolio can be determined by maximizing a capital allocation line (CAL) on the efficient frontier. Finally, by maximizing a personal utility function on the CAL, a personal asset allocation between a risk-free asset and the optimal risky portfolio can be derived.
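In modern textbook notation the E-V rule can be summarized as a quadratic program (a standard formulation, not Markowitz's original notation):

\[
\min_{w} \; w' \Sigma w
\quad \text{subject to} \quad
w' \mu = \mu_p , \qquad w' \iota = 1 ,
\]

where $w$ is the vector of portfolio weights, $\mu$ the vector of expected returns, $\Sigma$ the covariance matrix of returns and $\iota$ a vector of ones. Varying the target return $\mu_p$ traces out the efficient frontier.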
An expected positive price change can be the reward needed to attract investors to hold a risky asset and bear the corresponding risk. In that case prices need not be perfectly random, even if markets are operating efficiently and rationally. With his work Markowitz (1952, 1959) laid the foundation of the capital asset pricing model (CAPM) developed by Sharpe (1964) and Lintner (1965). They show that under the assumptions that investors have homogeneous expectations and optimally hold mean-variance efficient portfolios, and in the absence of market frictions, a broad value-weighted market portfolio will itself be a mean-variance efficient portfolio. This market portfolio is the tangency portfolio of the CAL with the efficient frontier. The great merit of the CAPM was, despite its strict and unrealistic assumptions, that it showed the relationship between the risk of an asset and its expected return. The notion of a trade-off between risk and reward also triggered the question whether the profits generated by technical trading rule signals were not just the reward for bearing risky asset positions.
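In the CAPM this risk-return relationship takes the familiar linear form

\[
E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right) ,
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)} ,
\]

where $R_i$ is the return on asset $i$, $R_m$ the return on the market portfolio and $R_f$ the risk-free rate: an asset's expected excess return is proportional to its systematic risk $\beta_i$, not to its total variance. In this light, apparent trading profits may simply reflect a large $\beta$.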
Levy (1967) applies relative strength as a criterion for investment selection to weekly closing prices of 200 stocks listed on the NYSE for the 260-week period beginning October 24, 1960 and ending October 15, 1965. All price series are adjusted for splits, stock dividends, and the reinvestment of both cash dividends and proceeds received from the sale of rights. The relative strength strategy buys the stocks that have performed well in the past. Levy (1967) concludes that the profits attainable by purchasing the historically strongest stocks are superior to the profits of the random walk. Thus, in contrast to earlier results, he finds stock market prices to be forecastable by using the relative strength rule. However, Levy (1967) notes that the random walk hypothesis is not refuted by these findings, because the superior profits could be attributable to the incurrence of extraordinary risk, and he remarks that it is therefore necessary to determine the riskiness of the various technical measures he tested.
Jensen (1967) indicates that the results of Levy (1967) could be the result of selection bias: technical trading rules that performed well in the past receive the most attention from researchers, and when these rules are then back-tested on the same data, they naturally appear to generate good results. Jensen and Benington (1969) apply the relative strength procedure of Levy (1967) to monthly closing prices of every security traded on the NYSE over the period January 1926 to March 1966, in total 1952 securities. They conclude that, after allowance for transaction costs and correction for risk, the trading rules did not on average earn significantly more than the buy-and-hold policy.
James (1968) is one of the first to test moving-average trading strategies, in which signals are generated by the price crossing a moving average of past prices. He finds no superior performance for these rules when applied to end-of-month data of stocks traded at the NYSE in the period 1926-1960.
Efficient markets hypothesis
Besides testing the random walk theory with serial correlation tests, runs tests and technical trading rules used in practice, academics searched for a theory that could explain the random walk behavior of stock prices. In 1965 Samuelson published his ``Proof that Properly Anticipated Prices Fluctuate Randomly.'' He argues that in an informationally efficient market price changes must be unforecastable if they are properly anticipated, that is, if they fully incorporate the expectations and information of all market participants. Because news arrives randomly, since otherwise it would not be news, prices must fluctuate randomly. This important observation, combined with the notion that positive earnings are the reward for bearing risk, and with the earlier empirical findings that successive price changes are independent, led to the efficient markets hypothesis. Especially the notion of a trade-off between reward and risk distinguishes the efficient markets hypothesis from the random walk theory, which is merely a statistical model of returns.
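Samuelson's result can be condensed into a single condition. Abstracting from discounting and risk premia, if $P_t$ denotes the price of an asset and $\Omega_t$ the information set available at time $t$, then properly anticipated prices follow a martingale:

\[
E[ P_{t+1} \mid \Omega_t ] = P_t ,
\qquad \text{equivalently} \qquad
E[ P_{t+1} - P_t \mid \Omega_t ] = 0 ,
\]

so that the expected price change conditional on any information in $\Omega_t$ is zero, and no trading rule based on that information can earn systematic excess profits.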
The influential paper of Fama (1970) reviews the theoretical and empirical literature on the efficient markets model up to that date. Fama (1970) distinguishes three forms of market efficiency. A financial market is called weak-form efficient if no trading rule can be developed that forecasts future price movements on the basis of past prices. Secondly, a financial market is called semi-strong-form efficient if it is impossible to forecast future price movements on the basis of publicly known information. Finally, a financial market is called strong-form efficient if it is not possible to forecast future price movements on the basis of all available information, including inside information. Strong-form efficiency implies semi-strong-form efficiency, which in turn implies weak-form efficiency; conversely, if the weak form of the EMH is rejected, then the semi-strong and strong forms are rejected as well. Fama (1970) concludes that the evidence in support of the efficient markets model is very extensive and that contradictory evidence is sparse. The impact of the empirical findings on random walk behavior, and of the conclusion in academia that financial asset prices are and should be unforecastable, was so large that it took a while before new academic literature on technical trading was published. Financial analysts heavily debated the efficient markets hypothesis. However, as academics argued, even if Samuelson's theory were wrong, there would still remain the many empirical findings of no forecastability.
Market technicians kept arguing that statistical tests of any kind are less capable than the human eye of detecting subtle patterns in stock price data. Arditti and McCollough (1978) therefore argued that if stock price series have information content, technicians should be able to differentiate between actual price data and random walk data generated from the same statistical parameters. For each of five randomly chosen NYSE stocks from the year 1969 they showed 14 New York-based CFAs (Chartered Financial Analysts; the CFA program is a globally recognized standard for measuring the competence and integrity of financial analysts) with more than five years of experience three graphs: the actual price series plus two random price series. The analysts were asked to pick the actual price series using any technical forecasting tool they wanted. The results reveal that the technicians were not able to make consistently correct selections. Arditti and McCollough (1978) thus conclude that past price data provide little or no information useful for technical analysis, because analysts cannot differentiate between price series with information content and price series without it.
Technical analysis in the foreign currency markets
One of the earliest studies of the profitability of technical trading rules in foreign exchange markets is Dooley and Shafer (1983). Exchange rate markets for foreign currency are characterized by very high liquidity, low bid-ask spreads and round-the-clock decentralized trading. Furthermore, because of their size, these markets are relatively immune to insider trading. Dooley and Shafer (1983) address the question whether the observed short-run variability in exchange rates since the start of generalized floating in March 1973 is caused by technical traders or by severe fundamental shocks. In the former case the exchange rate path could be interpreted in terms of price runs, bandwagons and technical corrections, while in the latter case prices are frequently revised on the basis of incoming information and the market is efficient in taking into account whatever information is available. They follow the study of Fama (1965a, 1970) by applying serial correlation tests, runs tests and seven filter trading rules in the range [1%, 25%] to the US Dollar (USD) prices of the Belgian Franc (BF), Canadian Dollar (CD), French Franc (FF), German Mark (DEM), Italian Lira (IL), Japanese Yen (JPY), Dutch Guilder (DGL), Swiss Franc (SF) and British Pound (BP) in the period March 1973 through November 1981. Adjustment is made for overnight Eurocurrency interest rate differentials to account for the predictable component of changes in daily spot exchange rates. In an earlier study Dooley and Shafer (1976) had already found that the filters yielded substantial profits from March 1973 until October 1975, even when careful account was taken of opportunity costs in terms of interest rate differentials and transaction costs. It is noted that these good results could be the result of chance, and therefore the period October 1975 through November 1981 is considered as an out-of-sample testing period, for which it is unlikely that the good results of the filters would continue to hold if the exchange markets were really efficient. Dooley and Shafer (1983) report that there is significant autocorrelation present in the data and that there is evidence of substantial profits to all but the largest filters, casting doubt on the weak form of the efficient markets hypothesis. Further, they find a relation between the variability of exchange rates, as measured by the standard deviation of the daily returns, and the filter rules' profits: a large increase in variability is associated with a dramatic increase in the profitability of the filters. They also compare the results generated on the actual exchange rate data with results generated by random walk and autoregressive models, which in the end cannot explain the findings.
Sweeney (1986) develops a test of the significance of filter rule profits that explicitly assumes a constant risk/return trade-off due to constant risk premia. Seven different filter rules in the range [0.5%, 10%] are applied to the US Dollar exchange rates against the BF, BP, CD, DEM, FF, IL, JPY, SF, Swedish Krona (SK) and Spanish Peseta (SP) in the period 1975-1980. It is found that excess rates of return of filter rules persist into the 1980s, even after correcting for transaction costs and risk.
After his study on exchange rates, Sweeney (1988) focuses on the subset of the 30 stocks in the DJIA for which the 0.5% filter rule yielded the most promising results in the Fama and Blume (1966) paper, which covers the period 1956-1962. He finds that, by focusing on the winners of the earlier Fama and Blume (1966) period, significant profits over the buy-and-hold can be made in the period 1970-1982 for all selected stocks by investors with low but feasible transaction costs, most likely floor traders. Sweeney (1988) questions why the market seems to be weak-form inefficient according to these results. He argues that the cost of a seat on an exchange is just the risk-adjusted present value of the profits that can be made. Another possibility is that a trader who tries to trade according to a predefined trading strategy may move the market himself and therefore cannot reap the profits. Finally, Sweeney (1988) concludes that the excess return may be the reward for putting in the effort to find the rule that can exploit irregularities; after correcting for research costs the market may be efficient after all.
Schulmeister (1988) observes that USD/DEM exchange rate movements are characterized by a sequence of upward and downward trends in the period March 1973 to March 1988. For two moving averages, two momentum strategies, two combinations of moving averages and momentum, and one support-and-resistance rule, all reported to be widely used in practice, it is concluded that they yield systematic and significant profits. Schulmeister (1988) remarks that the combined strategy was developed and actually applied in trading by Citicorp. No correction is made for transaction costs and interest rate differentials. However, for the period October 1986 through March 1988 a reduction in profits is noticed, which is explained by the stabilizing effects of the Louvre Accord of February 22, 1987. The goal of this agreement was to keep the USD/DEM/JPY exchange rates stable. The philosophy behind the accord was that if those three key currencies were stable, the other currencies of the world could link into the system and world currencies could more or less stabilize, reducing currency risks in foreign trade.
Renewed interest in the 1990s
Little work on technical analysis appeared during the 1970s and 1980s, because the efficient markets hypothesis was the dominating paradigm in finance. Brock, Lakonishok and LeBaron (1992) test the forecastability of a set of 26 simple technical trading rules by applying them to the closing prices of the DJIA in the period January 1897 through December 1986, nearly 90 years of data. The set of trading rules consists of moving-average strategies and support-and-resistance rules, very popular trading rules among technical trading practitioners. Brock et al. (1992) recognize the danger of data snooping: the performance of the best forecasting model found in a given data set by a certain specification search may be just the result of chance instead of truly superior forecasting power. They admit that their choice of trading rules could be subject to survivorship bias, because they consulted a technical analyst. However, they claim to mitigate the problem of data snooping by (1) reporting the results of all tested trading strategies, (2) utilizing a very long data set, and (3) emphasizing the robustness of the results across various non-overlapping subperiods for statistical inference. Brock et al. (1992) find that all trading rules yield significant profits above the buy-and-hold benchmark in all periods, using simple t-ratios as test statistics. Moreover, they find that buy signals consistently generate higher returns than sell signals and that the returns following buy signals are less volatile than the returns following sell signals. However, t-ratios are only valid under the assumption of stationary and time-independent return distributions, while stock returns exhibit several well-known deviations from these assumptions, such as leptokurtosis, autocorrelation, dependence in the squared returns (volatility clustering or conditional heteroskedasticity) and changing conditional means (risk premia). The results found could therefore be the consequence of using invalid significance tests. To overcome this problem Brock et al. (1992) were the first to extend standard statistical analysis with parametric bootstrap techniques, inspired by Efron (1979), Freedman and Peters (1984a, 1984b) and Efron and Tibshirani (1986). Brock et al. (1992) find that the patterns uncovered by their technical trading rules cannot be explained by first-order autocorrelation or by changing expected returns caused by changes in volatility. Stated differently, the predictive ability of the technical trading rules is not consistent with a random walk, an AR(1), a GARCH-in-mean model, or an exponential GARCH model. Therefore Brock et al. (1992) conclude that the conclusion reached in earlier studies, that technical analysis is useless, may have been premature. However, they acknowledge that the good results of the technical trading rules can be offset by transaction costs.
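The logic of such a parametric bootstrap test can be sketched compactly. The Python fragment below fits an AR(1) null model, resamples its residuals to build artificial return histories, and asks how often the trading rule earns at least its actually observed return on those histories; the function names and the AR(1)-only null are illustrative simplifications of the Brock et al. (1992) procedure, which also employs GARCH-type null models.

\begin{verbatim}
# Minimal sketch of a parametric bootstrap test of a trading rule against
# an AR(1) null model (an illustrative simplification of Brock et al. 1992).
# `rule` maps a return series to the mean return earned by the rule.
import numpy as np

def bootstrap_p_value(returns, rule, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    r = np.asarray(returns, dtype=float)
    actual = rule(r)
    b, a = np.polyfit(r[:-1], r[1:], 1)         # fit r_t = a + b r_{t-1} + e_t
    resid = r[1:] - (a + b * r[:-1])
    count = 0
    for _ in range(n_boot):
        e = rng.choice(resid, size=len(r), replace=True)
        sim = np.empty(len(r))
        sim[0] = r[0]
        for t in range(1, len(r)):              # rebuild returns under the null
            sim[t] = a + b * sim[t - 1] + e[t]
        count += rule(sim) >= actual
    return count / n_boot    # small value: the null cannot explain the profit
\end{verbatim}

The reported fraction is a simulated p-value: if the rule's profit on the real data exceeds what the null model produces in, say, 95% of the simulations, the null model is rejected as an explanation of the rule's performance.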
The strong results of Brock, Lakonishok and LeBaron (1992) led to renewed interest in academia in testing the forecastability of technical trading rules and were the impetus for many papers published on the topic in the 1990s. Note, however, that Brock et al. (1992) in fact do not apply the correct t-test statistic, as derived in footnote 9, page 1738 of their paper. See section 2.5 in Chapter 2 of this thesis for a further discussion of this topic.
Levich and Thomas (1993) criticize Dooley and Shafer (1983) for not reporting any measures of statistical significance of the technical trading rule profits. Levich and Thomas (1993) are therefore the first to apply the bootstrap methodology, as introduced by Brock et al. (1992), to exchange rate data. Six filters and three moving averages are applied to the US Dollar closing settlement prices of the BP, CD, DEM, JPY and SF futures contracts traded at the International Monetary Market of the Chicago Mercantile Exchange in the period January 1973 through December 1990. Levich and Thomas (1993) note that the trading rules tested are very popular ones and that the parameterizations are taken from the earlier literature. Just like Brock et al. (1992), they report that they mitigate the problem of data mining by showing the results for all strategies. It is found that the simple technical trading rules generate unusual profits (no corrections are made for transaction costs) and that a random walk model cannot explain these profits. However, there is some deterioration in the profitability of the trading rules over time, especially in the 1986-1990 period.
Lee and Mathur (1995) remark that most studies whose findings favor technical trading in exchange rate data are conducted on US Dollar denominated currencies, and that the positive results are likely to be caused by central bank intervention. Therefore, to test market efficiency of European foreign exchange markets, they apply 45 different crossover moving-average trading strategies to six European spot cross-rates (JPY/BP, DEM/BP, JPY/DEM, SF/DEM and JPY/SF) in the period May 1988 to December 1993. A correction for 0.1% transaction costs per trade is made. They find that moving-average trading rules are marginally profitable only for the JPY/DEM and JPY/SF cross rates, currencies that do not belong to the European exchange rate mechanism (ERM). Further, it is found that in periods during which central bank intervention is believed to have taken place, the trading rules are not profitable in the European cross rates. Finally, Lee and Mathur (1995) propose a recursively optimizing test procedure with a rolling window for the purpose of testing out-of-sample forecasting power: every year the best trading rule of the previous half-year is applied. This out-of-sample procedure also finds no forecasting power for the moving averages, as the sketch below illustrates. It is concluded that the effect of target zones on the dynamics of the ERM exchange rates may be partly responsible for the lack of profitability of moving-average trading rules; the dynamics of ERM exchange rates differ from those of common exchange rates in that they have smaller volatility.
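The recursive procedure can be sketched as follows in Python, where rules is a list of functions mapping a price series to a series of +1/-1 positions; the 126-day selection window and 252-day holding window are illustrative stand-ins for Lee and Mathur's half-year and year, and the mean-return selection criterion is an assumption.

\begin{verbatim}
# Minimal sketch of a recursively optimizing out-of-sample test: at each
# step, pick the rule with the best mean return over the past `select`
# days and apply it over the next `hold` days (illustrative parameters).
import pandas as pd

def recursive_selection(prices: pd.Series, rules, select=126, hold=252):
    out = []
    for start in range(select, len(prices), hold):
        past = prices.iloc[start - select:start]
        perf = lambda rule: (rule(past).shift(1) * past.pct_change()).mean()
        best = max(rules, key=perf)                 # in-sample winner
        window = prices.iloc[start - select:start + hold]
        oos = (best(window).shift(1) * window.pct_change()).iloc[select:]
        out.append(oos)                             # out-of-sample returns only
    return pd.concat(out)
\end{verbatim}

Because the rule applied in each period is selected on data that precede it, the concatenated return series is free of look-ahead bias, and its profitability is a genuine out-of-sample measure of forecasting power.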
Bessembinder and Chan (1995) test whether the trading rule set of Brock et al. (1992) has forecasting power when applied to the stock market indices of Japan, Hong Kong, South Korea, Malaysia, Thailand and Taiwan in the period January 1975 through December 1989. Break-even transaction costs, that is, the costs that eliminate the excess return of a double-or-out strategy over the buy-and-hold, are computed. The rules are most successful in the markets of Malaysia, Thailand and Taiwan, where the buy-sell difference is on average 51.9% yearly. Break-even round-trip transaction costs are estimated at 1.57% on average (1.34% if a one-day lag in trading is incorporated). It is concluded that excess profits over the buy-and-hold could be made, but emphasis is placed on the fact that the relative riskiness of the technical trading strategies is not controlled for.
For the UK stock market, Hudson, Dempsey and Keasey (1996) test the trading rule set of Brock et al. (1992) on daily data of the Financial Times Industrial Ordinary index, which consists of 30 UK companies, in the period July 1935 to January 1994. They examine whether the same set of trading rules outperforms the buy-and-hold on a different market. It is computed that the trading rules on average generate an excess return of 0.8% per transaction over the buy-and-hold, but that the costs of implementing the strategy are at least 1% per transaction. Further, when looking at the subperiod 1981-1994, the trading rules seem to lose their forecasting power. Hence Hudson et al. (1996) conclude that although the technical trading rules examined do have predictive ability, their use would not allow investors to make excess returns in the presence of costly trading. Using the bootstrap technique introduced by Brock et al. (1992) and assuming zero transaction costs, Mills (1997) likewise finds that the good results for the period 1935-1980 cannot be explained by an AR-ARCH model for the daily returns. Again, for the period after 1980 it is found that the trading rules do not generate statistically significant results. Mills (1997) concludes that the trading rules mainly worked when the market was driftless, but performed badly in the period after 1980 because the buy-and-hold strategy was then clearly dominant.
Kho (1996) tests a limited number of double crossover moving-average trading rules on weekly data of BP, DEM, JPY and SF futures contracts traded on the International Monetary Market (IMM) division of the Chicago Mercantile Exchange from January 1980 through December 1991. The results show that there have been profit opportunities that could have been exploited by moving-average trading rules. The measured profits are so high that they cannot be explained by transaction costs, serial correlation in the returns, or a simple relation between volatility and expected return (a GARCH-in-mean model). Next, Kho (1996) estimates a conditional CAPM model that captures the time-varying price of risk. It is concluded that the technical trading rule profits found can be well explained by time-varying risk premia.
Bessembinder and Chan (1998) redo the calculations of Brock et al. (1992) for the period 1926-1991 to assess the economic significance of the Brock et al. (1992) findings. Corrections are made for transaction costs and dividends, and one-month treasury bills are used as a proxy for the risk-free interest rate when no trading position is held in the market. Furthermore, a correction for non-synchronous trading is made by lagging the trading signals by one day. It is computed that one-way break-even transaction costs are approximately 0.39% for the full sample; however, they decline from 0.54% in the first subperiod, 1926-1943, to 0.22% in the last subperiod, 1976-1991. Knez and Ready (1996) estimate the average bid-ask spread at 0.11 to 0.13%, while Chan and Lakonishok (1993) estimate commission costs at 0.13%. Together this adds up to approximately 0.24 to 0.26% transaction costs for institutional traders in the last subperiod, and in earlier years trading costs were probably higher. Thus the break-even one-way transaction costs of 0.22% in the last subperiod are clearly smaller than the actual estimated transaction costs of approximately 0.26% per trade. Hence, although Bessembinder and Chan (1998) confirm the results of Brock et al. (1992), they conclude that there is little reason to view the evidence of Brock et al. (1992) as indicative of market inefficiency.
Fernández-Rodríguez, Sosvilla-Rivero, and Andrada-Félix (2001) replicate the testing procedures of Brock et al. (1992) for daily data of the General Index of the Madrid Stock Exchange (IGBM) in the period January 1966 through October 1997. They find that technical trading rules show forecasting power on the Madrid Stock Exchange, but acknowledge that transaction costs are not taken into account. Furthermore, the bootstrap results show that several null models for stock returns, such as the AR(1), GARCH and GARCH-in-mean models, cannot explain the forecasting power of the technical trading rules.
Ratner and Leal (1999) apply ten moving-average trading rules to inflation-corrected daily closing levels of local indices for Argentina (Bolsa Indice General), Brazil (Indice BOVESPA), Chile (Indice General de Precios), India (Bombay Sensitive), Korea (Seoul Composite Index), Malaysia (Kuala Lumpur Composite Index), Mexico (Indice de Precios y Cotaciones), the Philippines (Manila Composite Index), Taiwan (Taipei Weighted Price Index) and Thailand (Bangkok S.E.T.) in the period January 1982 through April 1995. After correcting for transaction costs, the rules appear to be significantly profitable only in Taiwan, Thailand and Mexico. However, irrespective of statistical significance, the trading rules correctly predict the direction of price changes in more than 80% of the cases.
Isakov and Hollistein (1999) test simple technical trading rules on the Swiss Bank Corporation (SBC) General Index and on some of its individual stocks, namely UBS, ABB, Nestle, Ciba-Geigy and Zurich, in the period 1969-1997. They are the first to extend moving-average trading strategies with momentum indicators or oscillators, so-called relative strength indicators or stochastics. These oscillators should indicate when an asset is overbought or oversold and are supposed to give appropriate signals on when to step out of the market. Isakov and Hollistein (1999) find that the use of oscillators does not add to the performance of the moving averages. For the basic moving-average strategies they find an average yearly excess return of 18% on the SBC index. Bootstrap simulations show that an AR(1) or GARCH(1,1) model for asset returns cannot explain the predictability of the trading rules. However, it is concluded that in the presence of trading costs the rules are only profitable for a particular kind of investor, namely one facing costs no higher than 0.3-0.7% per transaction, and that therefore weak-form efficiency cannot be rejected for small investors.
LeBaron (2000b) tests a 30-week single crossover moving-average trading strategy on weekly data, taken at the close of London markets on Wednesdays, of the US Dollar against the BP, DEM and JPY in the period June 1973 through May 1998. It is found that the strategy performed very well on all three exchange rates in the subperiod 1973-1989, yielding significant positive excess returns of 8%, 6.8% and 10.2% yearly for the BP, DEM and JPY respectively. However, for the subperiod 1990-1998 the results are no longer significant. LeBaron (2000b) argues that this reduction in forecastability may be explained by changes in the foreign exchange markets, such as lower transaction costs allowing traders to better arbitrage, foreign exchange intervention, the internet or a better general knowledge of technical trading rules. Another possibility is that trading rules are profitable only over very long periods, but can go through long stretches in which they lose money, during which most users of the rules are driven out of the market.
LeBaron (2000a) reviews the paper of Brock et al. (1992) and tests whether the results found for the DJIA in the period 1897-1986 also hold for the period after 1986. Two technical trading rules are applied to the data set: the 150-day single crossover moving-average rule, because the research of Brock et al. (1992) showed that this rule performed consistently well over several subperiods, and a 150-day momentum strategy. LeBaron (2000a) finds that the results of Brock et al. (1992) change dramatically in the period 1988-1999: the trading rules seem to have lost their predictive ability. For the period 1897-1986 the results could not be explained by a random walk model for stock returns, but for the period 1988-1999, in contrast, the null of a random walk cannot be rejected.
Coutts and Cheung (2000) apply the technical trading rule set of Brock et al. (1992) to daily data of the Hang Seng Index quoted at the Hong Kong Stock Exchange (HKSE) for the period October 1985 through June 1997. It is found that the trading range break-out rules yield better results than the moving averages. Although the trading rules show significant forecasting power, it is concluded that after correcting for transaction costs they cannot be exploited profitably. In contrast, Ming Ming, Mat Nor and Krishnan Guru (2000) find significant forecasting power for the strategies of Brock et al. (1992) when applied to the Kuala Lumpur Composite Index (KLCI), even after correcting for transaction costs.
Detry and Gregoire (2001) test the 10 moving-average trading rules of Brock et al. (1992) on the indices of all 15 countries in the European Union. They find that their results strongly support the conclusion of Brock et al. (1992) on the predictive ability of moving-average rules. However, the computed break-even transaction costs are often of the same magnitude as the actual transaction costs faced by professional traders.
In his master's thesis, Langedijk (2001) tests the predictability of the variable moving-average trading rules of Brock et al. (1992) on three foreign exchange rates, namely USD/DEM, JPY/DEM and USD/JPY, in the period July 1973 through June 2001. Using simple t-ratios he finds that the technical trading rules have predictive ability in the subperiod July 1973 through June 1986, but that the results deteriorate in the period thereafter. Because the strongest results in favor of technical trading are found for the USD/JPY exchange rate, the standard statistical analysis is extended with the bootstrap methodology of Brock et al. (1992). It is found that random walk, autoregressive and GARCH models cannot explain the results. However, Langedijk (2001) shows that only large investors with low transaction costs can exploit the trading rules profitably.
Intra-day data
Most papers on the profitability of technical trading rules use daily data, but there is also some literature testing the strategies on intra-day data. Ready (1997) shows that the profits of technical trading rules applied to the largest 20% of NYSE stocks in the period 1970-1995 disappear if transaction costs, as well as the time delay between the signal of a trading rule and the actual trade, are taken into account. Further, he finds that the trading rules perform much worse in the period 1990-1995. Curcio, Goodhart, Guillaume and Payne (1997) apply technical trading rules based on support-and-resistance levels, identified and supplied by technical analysts, to intra-daily data of foreign exchange markets (DEM/USD, JPY/USD, BP/USD). They find that on average no profits can be made when transaction costs, due to bid-ask spreads, are taken into account.
Pattern recognition
Academic research on the effectiveness of technical analysis in financial markets, as reviewed above, mainly implements filters, moving averages, momentum and support-and-resistance rules. These technical indicators are fairly easy to program into a computer. However, the range of technical trading techniques is very broad and an important part deals with visual pattern recognition. The claim by technical analysts that geometric shapes are present in historical price charts is often criticized as being too subjective, intuitive or even vague. Levy (1971) was the first to examine 32 possible forms of five-point chart patterns, i.e. patterns with two highs and three lows or two lows and three highs, which are claimed to represent channels, wedges, diamonds, symmetrical triangles, (reverse) head-and-shoulders, triple tops and triple bottoms. Local extrema are determined with the help of Alexander's (1961) filter techniques. After trading costs are taken into account it is concluded that none of the 32 patterns shows any evidence of profitable forecasting ability, in either bullish or bearish direction, when applied to 548 NYSE securities in the period July 1964 through July 1969. Neftci (1991) shows that technical patterns can be fully characterized by appropriate sequences of local minima and maxima, and hence concludes that any pattern can potentially be formalized. Osler and Chang (1995) were the first to evaluate the predictive power of head-and-shoulders patterns in foreign exchange rates using a computer-implemented algorithm. The head-and-shoulders pattern is defined in terms of local minima and maxima, which are identified by applying Alexander's (1961) filter techniques. The pattern recognition algorithm is applied to six currencies (JPY, DEM, CD, SF, FF and BP against the USD) in the period March 1973 to June 1994. Significance is tested with the bootstrap methodology described by Brock et al. (1992) under the null of a random walk and a GARCH model. It is found that the head-and-shoulders pattern had significant predictive power for the DEM and the JPY, also after correcting for transaction costs and interest rate differentials. Lo, Mamaysky and Wang (2000) develop a pattern recognition algorithm based on non-parametric kernel regression to detect (inverse) head-and-shoulders, broadening tops and bottoms, triangle tops and bottoms, rectangle tops and bottoms, and double tops and bottoms - patterns that are among the most difficult to quantify analytically. The algorithm is applied to hundreds of NYSE and NASDAQ quoted stocks in the period 1962-1996. It is found that technical patterns do provide incremental information, especially for NASDAQ stocks. Furthermore, the most common patterns turn out to be double tops and bottoms, and (inverted) head-and-shoulders.
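To indicate how such formalization can proceed, the sketch below locates alternating local minima and maxima with a percentage filter in the spirit of Alexander (1961): a running high is registered as a local maximum once the price has fallen x% below it, and symmetrically for lows. The 5% threshold is an arbitrary illustration; Levy (1971) and Osler and Chang (1995) use their own parameterizations, and a five-point pattern is then simply a sequence of five such alternating extrema.

```python
def filter_extrema(prices, x=0.05):
    """Mark alternating local maxima and minima with an x% filter, in the
    spirit of Alexander (1961). The 5% default is illustrative only."""
    extrema = []                  # list of (index, "max" or "min")
    ref_i, ref_p = 0, prices[0]   # current running extreme
    looking_for = None            # direction is unknown until the first x% move
    for i, p in enumerate(prices[1:], start=1):
        if looking_for in (None, "max"):
            if p > ref_p:                      # new running high
                ref_i, ref_p = i, p
            elif p < ref_p * (1 - x):          # fell x% below the high
                extrema.append((ref_i, "max"))
                ref_i, ref_p, looking_for = i, p, "min"
            continue
        if p < ref_p:                          # new running low
            ref_i, ref_p = i, p
        elif p > ref_p * (1 + x):              # rose x% above the low
            extrema.append((ref_i, "min"))
            ref_i, ref_p, looking_for = i, p, "max"
    return extrema
```

A head-and-shoulders top, for instance, would then be a max-min-max-min-max sequence in which the middle maximum exceeds the two flanking ones; this is the sense in which Neftci's (1991) observation makes any such pattern programmable.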
The dangers of data snooping
Data snooping is the generic term for the danger that the best forecasting model found in a given data set by a certain specification search is just the result of chance instead of the result of truly superior forecasting power. Jensen (1967) already argued that the good results of the relative-strength trading rule used by Levy (1967) could be the result of survivorship bias: strategies that performed well in the past get the most attention from researchers. Jensen and Benington (1969, p.470) go a step further and argue: ``Likewise given enough computer time, we are sure that we can find a mechanical trading rule which works on a table of random numbers - provided of course that we are allowed to test the same rule on the same table of numbers which we used to discover the rule. We realize of course that the rule would prove useless on any other table of random numbers, and this is exactly the issue with Levy's results.''
Another form of data snooping is publication bias. It is a well-known fact that studies presenting unusual results are more likely to be published than studies that merely confirm a well-known theory. The problem of data snooping was acknowledged in most of the work on technical analysis, but for a long time there was no procedure to test for it. Finally White (2000), building on the work of Diebold and Mariano (1995) and West (1996), developed a simple and straightforward procedure for testing the null hypothesis that the best forecasting model encountered in a specification search has no predictive superiority over a given benchmark model; the alternative is, of course, that the best forecasting model is superior to the benchmark. Summarized in simple terms, the procedure bootstraps the original time series a great number of times, preserving its key characteristics; White (2000) recommends the stationary bootstrap of Politis and Romano (1994a, 1994b). Next, the specification search for the best forecasting model is executed on each bootstrapped series, which yields an empirical distribution of the performance of the best forecasting model. The null hypothesis is rejected at the $\alpha$ percent significance level if the performance of the best forecasting model on the original time series exceeds the upper $\alpha$ percent cut-off level of this empirical distribution. This procedure is called White's Reality Check (RC) for data snooping.
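A compact sketch of the procedure may be helpful. The code below implements the stationary bootstrap and the RC p-value for a matrix of performance differentials (each column being the period-by-period performance of one rule minus that of the benchmark); the mean block length and the number of bootstrap replications are arbitrary illustrative choices.

```python
import numpy as np

def stationary_bootstrap_indices(n, mean_block, rng):
    """Stationary bootstrap of Politis and Romano (1994): resample time
    indices in blocks whose lengths are geometric with mean `mean_block`."""
    idx = np.empty(n, dtype=int)
    idx[0] = rng.integers(n)
    for t in range(1, n):
        if rng.random() < 1.0 / mean_block:    # start a new block
            idx[t] = rng.integers(n)
        else:                                  # continue the current block
            idx[t] = (idx[t - 1] + 1) % n
    return idx

def reality_check_pvalue(f, mean_block=10, n_boot=500, seed=0):
    """White's (2000) Reality Check, sketched. `f` is an (n, K) array of
    performance differentials of K rules over the benchmark. Returns the
    bootstrap p-value of H0: the best rule is not superior to the benchmark."""
    rng = np.random.default_rng(seed)
    n, _ = f.shape
    fbar = f.mean(axis=0)
    v = np.sqrt(n) * fbar.max()                # RC test statistic
    v_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = stationary_bootstrap_indices(n, mean_block, rng)
        # Recenter each rule's bootstrap mean around its sample mean.
        v_boot[b] = (np.sqrt(n) * (f[idx].mean(axis=0) - fbar)).max()
    return (v_boot >= v).mean()
```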
Sullivan, Timmermann and White (1999, 2001) utilize the RC to evaluate simple technical trading strategies and calendar effects applied to the DJIA in the period 1897-1996. Sullivan et al. (1999) take the study of Brock et al. (1992) as their starting point and construct an extensive set of 7846 trading rules, consisting of filters, moving averages, support-and-resistance rules, channel break-outs and on-balance volume averages. It is demonstrated that the results of Brock et al. (1992) hold up after correction for data snooping, but that the forecasting performance tends to have disappeared in the period after the end of 1986. For the calendar effects, such as the January effect, the Friday effect and the turn-of-the-month effect, Sullivan et al. (2001) find that the RC in all periods does not reject the null hypothesis that the best forecasting rule encountered in the specification search has no superior predictive ability over the buy-and-hold benchmark. Had no correction been made for the specification search, then in both papers the conclusion would have been that the best model has significant superior forecasting power over the benchmark. Hence Sullivan et al. (1999, 2001) conclude that it is very important to correct for data snooping, since otherwise one may draw wrong inferences about the significance of the best model found.
Hansen (2001) identifies a similarity condition for asymptotic tests of composite hypotheses and shows that this condition is necessary for a test to be unbiased. He shows that White's RC does not satisfy this condition, which causes the RC to be an asymptotically biased test yielding inconsistent p-values. Moreover, the test is sensitive to the inclusion of poor and irrelevant models in the comparison and has poor power properties. Therefore, within the framework of White (2000), he applies the similarity condition to derive a test for superior predictive ability (SPA). The null hypothesis of this test is that none of the alternative models in the specification search is superior to the benchmark model, or stated differently, that the benchmark model is not inferior to any alternative model. The alternative hypothesis is that one or more of the alternative models are superior to the benchmark model. Hansen (2001) uses the RC and the SPA-test to evaluate forecasting models applied to US annual inflation in the period 1952-2000. He shows that the null hypothesis is rejected neither by the SPA-test p-value nor by the RC p-value, but that there is a large difference between the two p-values, likely caused by the inclusion of poor models in the space of forecasting models.
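In the same setting, the main differences of the SPA-test can be sketched by modifying the Reality Check above: the statistic is studentized, and rules performing significantly worse than the benchmark are not recentered, so that poor and irrelevant models cannot inflate the null distribution. This is a rough sketch of the idea only; Hansen specifies the variance estimator and the recentering threshold more carefully than the crude choices made here.

```python
import numpy as np

def spa_pvalue(f, mean_block=10, n_boot=500, seed=0):
    """Sketch of Hansen's SPA-test, reusing stationary_bootstrap_indices from
    the Reality Check sketch above. The standard error below is a crude i.i.d.
    estimate; Hansen derives it from the bootstrap itself."""
    rng = np.random.default_rng(seed)
    n, _ = f.shape
    fbar = f.mean(axis=0)
    se = f.std(axis=0, ddof=1) / np.sqrt(n)    # standard error of each mean
    t_spa = max(0.0, (fbar / se).max())        # studentized test statistic
    # Recenter only rules that are not significantly poor, so that hopeless
    # rules keep their negative mean and drop out of the maximum.
    g = np.where(fbar >= -se * np.sqrt(2 * np.log(np.log(n))), fbar, 0.0)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = stationary_bootstrap_indices(n, mean_block, rng)
        t_boot[b] = max(0.0, ((f[idx].mean(axis=0) - g) / se).max())
    return (t_boot >= t_spa).mean()
```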
In his master's thesis, Grandia (2002) utilizes the RC and the SPA-test to evaluate the forecasting ability of a large set of technical trading strategies applied to stocks quoted at the Amsterdam Stock Exchange in the period January 1973 through December 2001. He finds that the best trading strategy out of the set of filters, moving averages and trading range break-out rules can generate excess profits over the buy-and-hold even in the presence of transaction costs, but is not superior to the buy-and-hold benchmark after correction for the specification search. The results are stable across the subperiods 1973-1986 and 1987-2001.
Conclusions from the literature
Technical analysis is heavily used in practice to make forecasts about speculative price series. However, early statistical studies found that successive price changes are linearly independent, as measured by autocorrelation, and that financial price series may be well described by random walks. In that case technical trading should not provide valuable trading signals. It was argued, however, that the dependence in price changes might be of such a complicated nonlinear form that standard linear statistical tools provide misleading measures of the degree of dependence in the data. Therefore several papers appeared in the academic literature testing the profitability of technical analysis. The general consensus in academic research on technical analysis is that there is some, but not much, dependence in speculative prices that can be exploited by nonlinear technical trading rules. Moreover, any profitability found seems to disappear after correcting for transaction costs and risk; only floor traders, who face very small transaction costs, can possibly reap profits from technical trading. Most papers consider a small set of technical trading rules that are said to be widely known and frequently used in practice, which raises the danger of data snooping. However, even after correction for the specification search, it is still found that those technical trading rules show forecasting power in the presence of small transaction costs. It is noted by many authors that the forecasting power of technical trading rules, if there was any to begin with, seems to disappear in both the stock markets and the currency markets during the 1990s. It is argued that this is likely caused by computerized trading programs that take advantage of any kind of pattern discovered before the mid 1990s, causing any profit opportunity to disappear.
1.3 Outline of the thesis
The efficient markets hypothesis states that in highly competitive and developed markets it is impossible to derive a trading strategy that can generate persistent excess profits after correction for risk and transaction costs. Andrew Lo, in the introduction to Paul Cootner's ``The Random Character of Stock Prices'' (2000 reprint, p.xi), even suggests extending the definition of efficient markets so that profits accrue only to those who acquire and maintain a competitive advantage; those profits may then simply be the fair reward for unusual skill, extraordinary effort or breakthroughs in financial technology. The goal of this thesis is to test the weak form of the efficient markets hypothesis by applying a broad range of technical trading strategies to a large number of different data sets. In particular we focus on the question whether, after correcting for transaction costs, risk and data snooping, technical trading rules have statistically significant forecasting power and can generate economically significant profits. This section briefly outlines the chapters of the thesis. The chapters are written independently from each other, each with its own introduction. Now and then there is some repetition in the text, but this is mainly done to keep each chapter self-contained. Chapters 2 through 5 are mainly empirical, while Chapter 6 describes a theoretical model.
In Chapter 2 a large set of 5350 trend-following technical trading rules is applied to the price series of cocoa futures contracts traded at the London International Financial Futures Exchange (LIFFE) and the New York Coffee, Sugar and Cocoa Exchange (CSCE) in the period January 1983 through June 1997. The trading rule set is also applied to the Pound-Dollar exchange rate in the same period. It is found that 58% of the trading rules generate a strictly positive excess return, even after a correction for transaction costs, when applied to the LIFFE cocoa futures prices. Moreover, a large set of trading rules exhibits statistically significant forecasting power when applied to the LIFFE cocoa futures series. On the other hand, the same set of strategies performs poorly on the CSCE cocoa futures prices, with only 12% generating strictly positive excess returns and hardly any showing statistically significant forecasting power. Bootstrap techniques reveal that the good results found for the LIFFE cocoa futures price series cannot be explained by several popular null models such as the random walk, autoregressive and GARCH models, but can be explained by a model with a structural break in the trend. The large difference in the performance of technical trading may be attributed to a combination of the demand/supply mechanism in the cocoa market and an accidental influence of the Pound-Dollar exchange rate, reinforcing trends in the LIFFE cocoa futures but weakening trends in the CSCE cocoa futures. Furthermore, our case study suggests a connection between the success or failure of technical trading and the relative magnitudes of the trend, volatility and autocorrelation of the underlying series.
In the next three chapters, Chapters 3-5, a set of trend-following technical trading rules is applied to the price history of several stocks and stock market indices. Two different performance measures are used to select the best technical trading strategy: the mean return and the Sharpe ratio criterion. Corrections are made for transaction costs. If technical trading proves to be profitable, it could be the case that these profits are merely the reward for bearing the risk of implementing technical trading. Therefore Sharpe-Lintner capital asset pricing models (CAPMs) are estimated to test this hypothesis. Furthermore, if technical trading shows economically and statistically significant forecasting power after corrections are made for transaction costs and risk, it is tested whether the selected technical trading strategy is genuinely superior to the buy-and-hold benchmark once a correction is also made for data snooping. The tests utilized to correct for data snooping are White's (2000) Reality Check (RC) and Hansen's (2001) test for superior predictive ability (SPA). Finally, it is tested with a recursive optimizing and testing method whether technical trading shows true out-of-sample forecasting power. For example, at the beginning of each month the strategy with the highest performance during the preceding six months is selected to generate trading signals during that month.
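The recursive optimizing and testing procedure is easy to state precisely in code. The sketch below uses the six-month selection window of the example above and the mean return criterion; the monthly return matrix of candidate strategies is a placeholder input.

```python
import numpy as np

def recursive_out_of_sample(strategy_returns, window=6):
    """Recursive optimizing and testing: at the start of each month, pick the
    strategy with the highest mean return over the preceding `window` months
    and follow it for one month. `strategy_returns` is a (T, K) array of
    monthly returns of K candidate strategies; the result is the out-of-sample
    return series of the procedure itself."""
    T, _ = strategy_returns.shape
    oos = np.empty(T - window)
    for t in range(window, T):
        past = strategy_returns[t - window:t]
        best = past.mean(axis=0).argmax()            # in-sample winner...
        oos[t - window] = strategy_returns[t, best]  # ...traded out of sample
    return oos
```

Replacing the selection line with a Sharpe ratio criterion (mean over standard deviation) gives the second variant referred to in the text.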
In Chapter 3 a set of 787 trend-following technical trading rules is applied to the Dow-Jones Industrial Average (DJIA) and to 34 stocks listed in the DJIA in the period January 1973 through June 2001. Because numerous research papers found that technical trading rules show economically and statistically significant forecasting power in the era up to 1987, but not in the period thereafter, we split our sample into two subperiods: 1973-1986 and 1987-2001. For the mean return as well as the Sharpe ratio selection criterion it is found that in all periods, for each data series, a technical trading rule can be found that is capable of beating the buy-and-hold benchmark, even after a correction for transaction costs. Furthermore, if no transaction costs are implemented, then for most data series it is found, by estimating Sharpe-Lintner CAPMs, that technical trading generates risk-corrected excess returns over the risk-free interest rate. However, as transaction costs increase, the null hypothesis that technical trading rule profits are just the reward for bearing risk is not rejected for more and more data series. Moreover, if transaction costs of as little as 0.25% per trade are implemented, then the null hypothesis that the best technical trading strategy found in a data set is not superior to the buy-and-hold benchmark, after a correction for data snooping, is rejected by neither the RC nor the SPA-test for any of the data series examined. Finally, the recursive optimizing and testing method does not show economically and statistically significant risk-corrected out-of-sample forecasting power of technical trading. Thus, in this chapter no evidence is found that trend-following technical trading rules can forecast the direction of the future price path of the DJIA and stocks listed in the DJIA.
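The CAPM-based risk correction mentioned here amounts to regressing the strategy's excess return on the market's excess return and testing whether the intercept (Jensen's alpha) is significantly positive. The following is a minimal OLS sketch with homoskedastic standard errors; the chapter's actual estimations may correct the standard errors further.

```python
import numpy as np

def capm_alpha(strategy_excess, market_excess):
    """Estimate r_s - r_f = alpha + beta (r_m - r_f) + eps by OLS and return
    alpha, beta and the t-ratio of alpha (homoskedastic standard errors)."""
    X = np.column_stack([np.ones_like(market_excess), market_excess])
    coef = np.linalg.lstsq(X, strategy_excess, rcond=None)[0]
    resid = strategy_excess - X @ coef
    n, k = X.shape
    s2 = resid @ resid / (n - k)               # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)          # covariance of the estimates
    alpha, beta = coef
    return alpha, beta, alpha / np.sqrt(cov[0, 0])
```

A significantly positive alpha indicates forecasting power beyond a fair reward for bearing market risk; an insignificant alpha supports the null hypothesis that trading profits are just such a reward.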
In Chapter 4 the same technical trading rule set is applied to the Amsterdam Stock Exchange Index (AEX-index) and to 50 stocks listed in the AEX-index in the period January 1983 through May 2002. For both selection criteria it is found that for each data series a technical trading strategy can be selected that is capable of beating the buy-and-hold benchmark, also after correction for transaction costs. Furthermore, by estimating Sharpe-Lintner CAPMs it is found, for both selection criteria and in the presence of 1% transaction costs, that for approximately half of the data series the best technical trading strategy has statistically significant risk-corrected forecasting power and even reduces the risk of trading. Next, a correction is made for data snooping by applying the RC and the SPA-test. If the mean return criterion is used to select the best strategy, then with transaction costs of as little as 0.10% both tests lead to the same conclusion for almost all data series, namely that the best technical trading strategy is not capable of beating the buy-and-hold benchmark after correcting for the specification search used to select it. In contrast, if the Sharpe ratio selection criterion is used, then for one third of the data series the null of no superior forecasting power is rejected by the SPA-test, even after correction for 1% transaction costs. Thus, in contrast to the findings for the stocks listed in the DJIA in Chapter 3, we find that technical trading has economically and statistically significant forecasting power for a group of stocks listed in the AEX-index, after corrections for transaction costs, risk and data snooping, provided the Sharpe ratio criterion is used to select the best technical trading strategy. Finally, the recursive optimizing and testing method does show out-of-sample forecasting profits of technical trading. Estimation of Sharpe-Lintner CAPMs shows, after correction for 0.10% transaction costs, that the best recursive optimizing and testing procedure has statistically significant risk-corrected forecasting power for more than 40% of the data series examined. However, if transaction costs increase to 0.50% per trade, then for almost all data series the best recursive optimizing and testing procedure no longer has statistically significant risk-corrected forecasting power. Thus technical trading is economically and statistically significant for a group of stocks listed in the AEX-index only if transaction costs are sufficiently low.
In Chapter 5 the set of 787 trend-following technical trading strategies is applied to 50 local main stock market indices in Africa, North and South America, Asia, Europe, the Middle East and the Pacific, and to the MSCI World Index, in the period January 1981 through June 2002. We consider the case of a US-based trader and recompute all profits in US Dollars. It is found that half of the indices could not even beat a continuous risk-free investment. However, as in Chapters 3 and 4, it is found for both selection criteria that for each stock market index a technical trading strategy can be selected that is capable of beating the buy-and-hold benchmark, also after correction for transaction costs. Furthermore, after implementing 1% costs per trade, statistically significant risk-corrected forecasting power is still found for half of the indices by estimating CAPMs. If a correction is also made for data snooping, then we find, as in Chapter 4, that the two selection criteria yield different results. In the presence of 0.50% transaction costs, the null hypothesis that the best technical trading strategy selected by the mean return criterion has no superior predictive ability over the buy-and-hold benchmark, after correcting for the specification search, is not rejected for most indices by either the RC or the SPA-test. However, if the Sharpe ratio criterion is used to select the best strategy, then for one fourth of the indices, mainly the Asian ones, the null hypothesis of no superior forecastability is rejected by the SPA-test, even in the presence of 1% transaction costs. Finally, the recursive optimizing and testing method does show out-of-sample forecasting profits, also in the presence of transaction costs, mainly for the Asian, Latin American, Middle East and Russian stock market indices. However, for the US, Japanese and most Western European stock market indices the recursive out-of-sample forecasting procedure does not appear to be profitable once even small transaction costs are implemented. Moreover, for sufficiently high transaction costs it is found, by estimating CAPMs, that technical trading shows no statistically significant risk-corrected out-of-sample forecasting power for almost all of the stock market indices. Only for low transaction costs ($\leq$ 0.25% per trade) is economically and statistically significant risk-corrected out-of-sample forecasting power of trend-following technical trading techniques found, namely for the Asian, Latin American, Middle East and Russian stock market indices.
In Chapter 6 a financial market model with heterogeneous, adaptively learning agents is developed. The agents can choose between a fundamental forecasting rule and a technical trading rule. The fundamental forecasting rule predicts that the price reverts to the fundamental value at a certain speed, whereas the technical trading rule is based on moving averages. The model in this chapter extends the Brock and Hommes (1998) heterogeneous agents model by adding a moving-average technical trading strategy to the set of beliefs the agents can choose from, but deviates from it by assuming constant relative risk aversion, so that agents choosing the same forecasting rule invest the same fraction of their wealth in the risky asset. The local dynamical behavior of the model around the fundamental steady state is studied by varying the values of the model parameters. A mixture of theoretical and numerical methods is used to analyze the dynamics. In particular we show that the fundamental steady state may become unstable due to a Hopf bifurcation. The interaction between fundamentalists and technical traders may thus cause prices to deviate from their fundamental value. In this heterogeneous world the fundamental traders are not able to drive the moving-average traders out of the market; instead, fundamentalists and technical analysts coexist forever, with their relative importance changing over time.
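To convey the flavor of this model class, the following is a stylized simulation of a two-type market in the spirit of Brock and Hommes (1998): fundamentalists expect mean reversion, chartists extrapolate the deviation of the price from a moving average, and the population fractions follow a discrete-choice rule based on recent squared forecast errors. All functional forms and parameter values are illustrative assumptions; this is emphatically not the model analyzed in Chapter 6.

```python
import numpy as np

def simulate_two_type_market(T=500, p_star=100.0, v=0.5, ma_window=10,
                             beta=2.0, lam=0.1, seed=0):
    """Stylized fundamentalist-versus-chartist price dynamics. beta is the
    intensity of choice, lam the speed of price adjustment; all values are
    illustrative only."""
    rng = np.random.default_rng(seed)
    p = np.full(T, p_star)
    err_f = err_c = 0.0                        # recent squared forecast errors
    for t in range(ma_window, T - 1):
        ma = p[t - ma_window:t].mean()
        e_fund = p_star + v * (p[t] - p_star)  # mean-reverting forecast
        e_chart = p[t] + (p[t] - ma)           # trend-extrapolating forecast
        # Discrete choice: types with smaller recent errors attract more agents.
        w = np.exp(-beta * np.array([err_f, err_c]))
        n_f = w[0] / w.sum()
        e_avg = n_f * e_fund + (1.0 - n_f) * e_chart
        p[t + 1] = p[t] + lam * (e_avg - p[t]) + rng.normal(scale=0.2)
        err_f = (e_fund - p[t + 1]) ** 2
        err_c = (e_chart - p[t + 1]) ** 2
    return p
```

Simulations of this kind illustrate how an increasing weight on the chartist rule can destabilize the fundamental steady state, which is the intuition behind the Hopf bifurcation result discussed above.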