[Note for bibliographic reference: Melberg, Hans O. (1997), Visual econometrics -
A review of Wonnacott and Wonnacott, http://www.oocities.org/hmelberg/papers/970425b.htm]
Visual econometrics
A review of Part I of Econometrics by Wonnacott and
Wonnacott
by Hans O. Melberg
Thomas H. and Ronald J. Wonnacott
Econometrics
John Wiley, New York, 1979 (second edition, first edition 1970).
ISBN: 0-471-05514-X, 580 pages
(Part I: Elementary Econometrics, 320 pages; Part II: More advanced econometrics,
essentially a repetition of Part I using matrix algebra)
Introduction
Econometrics by Wonnacott and Wonnacott is a classic for students of econometrics.
This is not undeserved. The authors are very good at conveying the intuitive idea behind
the various econometric tools. They are also reasonably rigorous, but sensibly they
present most of the complicated mathematics in footnotes and appendices. Lastly, they
are by no means uncritical of the procedures of statistical inference.
Despite the above praise, there are significant problems. First, I felt more could be
made of their somewhat ad-hoc critiques. For example, the problem of
autocorrelation is not an isolated mechanical problem to be fixed by data-transformation,
but a problem that results from a general approach to statistics - in which case the fix
is to reformulate the whole model. Second, the organization of their discussion on time
series could be improved with the classical structure of a chapter on each of the
following topics: heteroscedasticity, autocorrelation, multicollinearity and
simultaneity/identification problems. Third, the book suffers from the fact that it has
not been updated since 1979. This means that many new developments, such as
co-integration, have been left out. They do, however, have a good chapter on Bayesian
inference.
The clarity of explanation
To demonstrate the clarity of their explanations, I shall give one example involving their
discussion of simultaneous equations.
The problem of simultaneous equations can be illustrated by a simple example involving
the consumption function. Assume consumption (C) is a function of income (Y), a constant
(a) and a random error (e). We also know that, by definition, total income must equal the
sum of consumption and investment (I) (in a closed economy). Formally we have:
(1) C = a + bY + e
(2) Y = C + I
The task is then to estimate the consumption function based on a set of observations.
One might try to do so by the Ordinary Least Squares (OLS) technique, but as we know the
efficiency and unbiasedness of this technique depend on several assumptions. These
assumptions include that the error term should be independent of the explanatory variables
(here e should be independent of Y; for the full set of assumptions see Koutsoyiannis pp.
55-58). Why is this so?
Take the simple regression y = bx + e (small letters indicate deviations from the
mean). Let S stand for covariance, and take the covariance between x and each term of this
equation. We then get:
(3) Sxy = b Sxx + Sxe
Manipulating this we get:
(4) Sxy / Sxx = b + Sxe / Sxx
We already know that ^b = Sxy / Sxx, so (4) can be written as:
(5) ^b = b + Sxe / Sxx
In order for ^b to be an unbiased estimator of b, the last term in this equation must
be zero. We now see what is wrong with the OLS estimates of the consumption function. In
equation (1) C is a function of Y and e. However, as the identity in (2) shows, Y is
itself a function of C and therefore also of e. Hence, there is by definition a
correlation between Y and e, and the OLS estimates are biased, as shown in equation (5).
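The simultaneous-equations bias can be checked with a small simulation. The sketch below is mine, not the book's: I pick arbitrary values for a, b and the error variance, treat investment as exogenous, solve the two-equation system for Y, and then compute the OLS slope Sxy/Sxx as in equation (5).

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 10_000, 5.0, 0.8   # hypothetical parameter values

# Investment is exogenous; e is the structural error in the consumption function.
I = rng.uniform(10, 20, n)
e = rng.normal(0, 2, n)

# Solving C = a + bY + e and Y = C + I gives the reduced form Y = (a + I + e) / (1 - b),
# so Y is mechanically correlated with e.
Y = (a + I + e) / (1 - b)
C = a + b * Y + e

# OLS slope: Sxy / Sxx with deviations from means, as in equation (5).
y, x = C - C.mean(), Y - Y.mean()
b_hat = (x * y).sum() / (x * x).sum()
print(b_hat)  # systematically above the true b = 0.8, since Cov(Y, e) > 0
```

Since Cov(Y, e) = Var(e)/(1 - b) is positive, the term Sxe/Sxx in (5) is positive, and the OLS estimate overstates the marginal propensity to consume.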
Wonnacott and Wonnacott illustrate this problem of simultaneous equation bias in a
wonderful and simple figure (p. 259 and 260). This is only one example of a more general
visual approach. Throughout the whole book there are equally instructive and simple
explanations of statistical problems using graphical illustrations. I consider this to be
a major advantage of Wonnacott and Wonnacott's approach. (see p. 103 for another great
graphical illustration).
Completeness
Wonnacott and Wonnacott have struck about the right balance between rigour and intuitive
explanations. As mentioned, there are numerous footnotes and appendices which should
satisfy the mathematically inclined reader, and - at the same time - the text itself
relies more on visual illustrations than mathematical proofs.
There is, however, a second aspect of completeness - that of dealing with all the major
topics in econometrics. On this account the book sometimes fails. There is, for example,
no mention of error correction mechanisms or co-integration and unit-roots. Today these
are important concepts, worthy of inclusion even in introductory courses. On the other
hand, there is a very good chapter on Bayesian inference - a relatively ignored topic even
today. Finally, there is no chapter with a basic review of statistical concepts.
Altogether my conclusion is that the book deserves a new edition with some new
chapters.
Awareness of the limitations of econometrics
Wonnacott and Wonnacott cannot be accused of having an uncritical attitude toward
statistical techniques. They write about the "arbitrary and unsatisfactory"
application of a 5% significance level when choosing which regressors to retain (p. 186).
They also want to use the term "statistically discernible" instead of
"statistically significant" - which indicates an awareness of the well known but
often ignored distinction between statistical significance and economic
significance (p. 36; for more on this see Leamer's article in JEL, Winter 1996). Some other
examples of critical remarks include their warning against the Durbin-Watson test since it
"becomes invalid in the face of autocorrelation in the dependent variable" (p.
233); the emphasis on the p-value instead of the t-value since "an hypothesis test
may be a very inefficient way of communicating the information provided by the data."
(p. 90). Finally, they do not try to hide the central role of theory and a priori
beliefs in econometrics - for example when choosing which regressors to retain (p. 88).
Despite all these critical notes, Wonnacott and Wonnacott could have benefited from a
more structured critique of what is wrong with much applied econometrics. I believe this
abundance of isolated critical remarks, but lack of overall connection between the
remarks, stems from their view on the purpose of econometrics. In the introduction they
write that "Econometrics is the measurement of ... causal relationships, either to
show how the economy operates, or to make predictions about the future" (p. 3). What
is missing from this view is the explicit acknowledgement of the feedback loop between the
results of econometric testing and theoretical formulation (see Maddala (1992), p. 6 for
more on feedback): The role of econometrics is not only to test the relationships implied in
economic theories, but also to help revise these theories. Wonnacott and Wonnacott would
not disagree with this, but they lack a systematic discussion of this process, which would
have led to a more unified critique of econometrics.
The problem described above can be illustrated with a concrete example. Assume that you
discover autocorrelation in the error term after you have estimated a model using OLS.
This implies that the OLS estimates are unbiased but inefficient (though the degree of
inefficiency need not be large). To "solve" the problem of autocorrelation
Wonnacott and Wonnacott suggest that the correlation in the error term could be estimated
by the regression:
(6) ê_t = p ê_{t-1} + v_t
(In fact, Wonnacott and Wonnacott prefer a slightly different method based on
Generalized Least Squares, but the idea is the same: to find the strength of the
correlation between the error terms.)
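Equation (6) is easy to illustrate. In the sketch below (my own setup, with arbitrary parameter values, not an example from the book), I generate data with an AR(1) error, run the first-stage OLS regression, and then regress the residuals on their own lag to estimate p.

```python
import numpy as np

rng = np.random.default_rng(1)
n, b, p = 500, 2.0, 0.6   # hypothetical parameter values

x = rng.normal(0, 1, n)
# Build AR(1) errors: e_t = p * e_{t-1} + v_t
v = rng.normal(0, 1, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = p * e[t - 1] + v[t]
y = b * x + e

# First-stage OLS and its residuals ê_t
b_hat = (x * y).sum() / (x * x).sum()
resid = y - b_hat * x

# Equation (6): regress ê_t on ê_{t-1} to estimate the autocorrelation p
p_hat = (resid[1:] * resid[:-1]).sum() / (resid[:-1] ** 2).sum()
print(p_hat)  # roughly the true p = 0.6 (with some downward small-sample bias)
```

This estimated p is exactly what a Cochrane-Orcutt-style transformation would then feed back into a GLS step.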
The problem with this approach, as David Hendry has pointed out, is that
autocorrelation most likely is a sign of a misspecified relationship (omitted variables or
non-linearities). Thus, the correct approach would be to revise the model, not to
"fix" the data so it fits our original model. The intuitive idea is clear: If
the data do not fit, something is wrong with the theory - not the data. At this point a
bell should ring in the head of the reader. I argued that the problem with Wonnacott and
Wonnacott was that they did not take enough explicit account of the role of econometrics
in revising theory. Here we have a concrete example: Autocorrelation should lead to a
revision of theory. Wonnacott and Wonnacott fail to note this since their focus is mainly
on using econometrics to measure existing theoretical relationships - not the feedback
loop whereby testing leads to a revision in the theory.
What should we do when faced with autocorrelation, and what do Wonnacott and
Wonnacott prescribe? Assume we have the following structure:
(7) y_t = b x_t + e_t
(8) e_t = p e_{t-1} + v_t
By substituting (8) into (7) we get:
(9) y_t = b x_t + p e_{t-1} + v_t
From (7) we know that:
(10) y_{t-1} = b x_{t-1} + e_{t-1}
Manipulating this we get:
(11) e_{t-1} = y_{t-1} - b x_{t-1}
If we substitute (11) into (9) we have:
(12) y_t = b x_t + p y_{t-1} - p b x_{t-1} + v_t
In this model there is no problem with autocorrelation - the e's are gone and v is
uncorrelated with the other independent variables. We could then estimate p by - for
example - equation (6). This is what Wonnacott and Wonnacott present as one solution to
autocorrelation. The problem with this approach is that it imposes restrictions on the
model without justification based on economic theory. For example, there is no reason to
assume that the coefficient of x_{t-1} must equal minus the product of the coefficients of
y_{t-1} and x_t (namely -pb). Without restrictions the general model would be:
(13) y_t = b1 y_{t-1} + b2 x_t + b3 x_{t-1} + v_t
And the restriction necessary to get (12) from (13) is that b3 = - b1 b2 (with b1 = p and
b2 = b).
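A small simulation can show the common-factor restriction at work. In the sketch below (my own construction, with arbitrary parameter values), data generated by (7) and (8) are fitted with the unrestricted dynamic model (13); the estimates should come out near b1 = p, b2 = b and b3 = -p·b.

```python
import numpy as np

rng = np.random.default_rng(2)
n, b, p = 2000, 1.5, 0.5   # hypothetical parameter values

x = rng.normal(0, 1, n)
v = rng.normal(0, 1, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = p * e[t - 1] + v[t]
y = b * x + e

# Unrestricted dynamic model (13): y_t = b1*y_{t-1} + b2*x_t + b3*x_{t-1} + v_t
Y = y[1:]
X = np.column_stack([y[:-1], x[1:], x[:-1]])
b1, b2, b3 = np.linalg.lstsq(X, Y, rcond=None)[0]

# Here the data really do satisfy the restriction, so b3 should be close to -b1*b2.
print(b1, b2, b3, -b1 * b2)
```

In applied work the interesting case is the opposite one: if the unrestricted estimates reject b3 = -b1·b2, the "autocorrelation fix" of equation (12) is imposing a false restriction, which is exactly Hendry's point about revising the model instead.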
The point of all this is that it shows how econometricians developed a
"technical" solution to the problem of autocorrelation - instead of taking it as
a sign that they should revise their theory (include more variables, improve the dynamics
of the equation). This problem, I believe, originates with the view that econometrics is
limited to measuring strengths of causal relationships, as opposed to providing a feedback
used to formulate new theories.
Stylistic weaknesses
As mentioned I have little but praise for the way Wonnacott and Wonnacott explain
econometrics. I do, however, have a slight problem with the way they organize their
material on time series (chapter 6). Although the discussion is clear and intuitive, I
could not help wondering whether a little more structure would have made the reader
understand exactly where the problems discussed fitted into the general overview of
econometric problems. The classic - and in my opinion still the best - structure would be
to first present OLS and its assumptions. The next step is to examine what happens when
the assumptions break down - the standard cases being when you have heteroscedasticity,
autocorrelation, multicollinearity, and simultaneity/identification problems. Each of
these would then be discussed in a chapter. Although Wonnacott and Wonnacott cover all
these topics, I felt that the structure of this discussion could be improved.
As for other stylistic mistakes, I found very few typographical errors (to be exact, I
found only one - in the figure on p. 103 the text should be "Fitted regression
plane", not "Fitted refression plane"). Moreover, the book includes
useful exercises (and answers to odd-numbered problems), as well as detailed worked
examples. Altogether, there is little to complain about on grounds of style.
A last thought: MAD vs. OLS
A few years ago, when I did introductory econometrics, I did not quite understand why we
wanted to minimize the sum of the squares of the error terms. "Why not minimize
the sum of the absolute error values?", I thought. As long as we minimize the
squares we put relatively more weight on the far-away observations compared
to minimization of absolute deviations. Of course, I know that the algebra of using
absolute values is not as easy as the algebra resulting from minimization of the
squared errors. Nevertheless, when we leave the calculation to computers this is not a
problem we need to worry about. So, why don't we minimize absolute deviations?
Wonnacott and Wonnacott give one argument (p. 13). They argue, by way of a graphical
presentation, that minimizing the absolute deviations (MAD) may result in a regression
line which pays no attention to one observation (in their example, the middle
observation). I have spent some time thinking about this, and I believe the argument is
weak. First, the middle point is not always insignificant since if it had been further
away from the line between the two other points, it would draw the regression line
upwards. Second, I believe (but I am not sure) that the argument is highly dependent on
there being only three observations - two of which are on a line. (Though, you would get
the same result with four - when three are on a line). Third, the problem is rather small
in the limit. Finally, one might well argue that the regression line in figure b really is
a better fit to the data than the line in figure a. Now, I am still uncertain about this
argument, and I am aware of the other justifications for OLS (Gauss Markov, BLUE etc).
Yet, I am left with this: Why should the weight of the far away observations be increased
through the process of squaring the distance?
Two more arguments are relevant. First, I admit that MAD would revise the regression
line in discrete steps and that it is not always unique (but I would argue that the
discreteness and lack of uniqueness is so small that it does not represent a significant
problem. Discreteness in small steps is approximately the same as a continuous line.
Multiple solutions within a small range are also a smaller problem than multiple solutions
within a large range.) Second, the reason why OLS can get away with squaring the
deviations, is the assumption that the errors are equally large both on the negative and
the positive side of the regression line. MAD, I think, is not as sensitive to this
assumption. (However, once again I am by no means certain about all this. Maybe some Monte
Carlo experiments could determine the relative merits of the MAD vs. the OLS estimator.
One should then also compare the performance of the estimators when the assumptions fail,
i.e. the performance of MAD and OLS when there is autocorrelation, multicollinearity,
heteroscedasticity, etc.)
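A crude version of such a Monte Carlo experiment is sketched below; the setup is entirely my own (a slope-only model, fat-tailed t-distributed errors, arbitrary sample sizes). For a regression through the origin, minimizing the sum of absolute deviations reduces to taking a weighted median of the ratios y/x, which makes the MAD (least absolute deviations) estimator easy to compute without an optimizer.

```python
import numpy as np

def lad_slope(x, y):
    # Minimizing sum |y - b*x| = sum |x| * |y/x - b|, so the LAD slope is
    # the |x|-weighted median of the ratios y/x (valid here since x != 0).
    r = y / x
    w = np.abs(x)
    order = np.argsort(r)
    cw = np.cumsum(w[order])
    return r[order][np.searchsorted(cw, cw[-1] / 2)]

rng = np.random.default_rng(3)
trials, n, b = 500, 50, 1.0
ols_err, lad_err = [], []
for _ in range(trials):
    x = rng.uniform(1, 5, n)
    # Heavy-tailed errors: t with 2 degrees of freedom produces occasional
    # far-away observations of the kind discussed above.
    e = rng.standard_t(df=2, size=n)
    y = b * x + e
    ols = (x * y).sum() / (x * x).sum()
    ols_err.append((ols - b) ** 2)
    lad_err.append((lad_slope(x, y) - b) ** 2)

print(np.mean(ols_err), np.mean(lad_err))  # LAD tends to win under fat tails
```

Under well-behaved normal errors the comparison reverses and OLS is the more efficient of the two, which is consistent with the Gauss-Markov argument; the experiment only suggests that the squaring of distances is costly when outliers are common.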
Conclusion
Econometrics by Wonnacott and Wonnacott is a good book which I can safely
recommend. As a textbook, however, it needs to be supplemented by chapters from other
textbooks dealing with new and important topics such as the modern approach to
autocorrelation, co-integration and several other recent developments. It also needs a
more systematic critique of econometric practice.