I. Introduction

1. The Basic Problem

Views of the causes of inflation vary among economists, from the monetarist view that inflation results from increases in the money supply to the Keynesian view that excess demand is the key. Whilst I do not aim to find the definitive model of inflation, a task beyond the scope of this project, I hope at least to find a model comprising factors from the key theories which fits the data and allows me to analyse its components.

It is worth noting that this is, first and foremost, an economics project: where there is econometric modelling or analysis, it serves only to build on foundations laid down by the economics. Computing the models is an obvious necessity in order to provide a starting point for the analysis of the findings. The various tests of the models are there to ensure econometric accuracy, and are not themselves part of the economic analysis.

I expect the general model to be of the form:

current inflation = core inflation + demand shocks + supply shocks (1)

This model is based partly on the Phillips curve1, p = pe + f(Yt-1 - Y*)/Y*, which states that inflation is determined by the level of expected inflation and a demand shock, and partly on a model suggested by Hall and Taylor2, p = fYt-1 + pe + Z, where Z is a price shock and Yt-1 is a measure of market conditions.

There was a choice of variables to use as proxies for the regressors. There are a number of theories behind the core, or expected, inflation term, but I will consider two of the most prominent factors. The method of modelling depends on whether consumers form forward-looking forecasts or behave in a backward-looking way. Forward-looking forecasts, as the name suggests, are predictions of future price and wage increases, so the level of forecasted future inflation forms part of expected inflation. With full information, the forecasts will match rational expectations. Backward-looking behaviour considers previous levels and allows for inertia, whereby wages and prices cannot be changed overnight. I decided to base my model on the backward-looking theory, which, in my opinion, is more realistic because its simplest form, p* = pt-1, implies that consumers follow a rule of thumb, using past knowledge to make current decisions.

I will model core inflation as a weighted average over a number of previous periods. Given the lasting effects of inflation, both as a real effect in the economy and in people's minds, I will allow the core term up to two years' worth of lags. This approach is chosen for its economic interpretation, but the actual number of lagged inflation terms will be decided using econometric tests, as the model must satisfy some basic modelling assumptions.
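As an illustration of this reading of the core term (in Python, with made-up weights and inflation figures rather than anything estimated in the project), the weighted average of lagged inflation amounts to:

```python
# Core inflation as a weighted average of past quarterly inflation.
# The weights and inflation figures are illustrative only.

def core_inflation(past, weights):
    """past[0] is last quarter's inflation, past[1] two quarters ago, etc.
    Weights are required to sum to one so the term is a true average."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * p for w, p in zip(weights, past))

# four quarters of (hypothetical) past inflation, most recent first:
core = core_inflation([4.0, 3.0, 2.0, 1.0], [0.4, 0.3, 0.2, 0.1])
```

In the project itself the weights are not imposed but emerge as estimated coefficients on the lagged inflation terms.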

As a proxy for demand shocks I will consider either Gross Domestic Product (GDP) deviations from a trend or capacity utilisation. GDP deviations are calculated by regressing the level against a time trend and computing the residuals (figure 1 below). Where the residual is positive, the level of GDP is above the trend, which we interpret as a positive demand shock. The use of capacity utilisation is less straightforward: the data can be obtained directly or calculated as residuals from the trend of, say, industrial production. Capacity utilisation can capture non-price measures of rationing, as a capacity shortage, indicated by a high level of utilisation, would force consumers to switch to a more expensive source, such as imports. A positive demand shock would therefore be associated with high capacity utilisation. Because of its easier interpretation, I will use GDP deviations as the proxy for demand shocks.
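The detrending step can be sketched as follows (a minimal Python illustration with invented GDP figures, not the ONS series used in the project):

```python
# Regress a series on a linear time trend by OLS and take the residuals
# as the demand-shock proxy. The gdp values below are made up.

def detrend(series):
    """Return residuals from an OLS regression of series on t = 0, 1, 2, ..."""
    n = len(series)
    t = list(range(n))
    t_bar = sum(t) / n
    y_bar = sum(series) / n
    # slope = sample covariance / sample variance of the trend
    b = sum((ti - yi_t_bar(ti, t_bar)) * (yi - y_bar)
            for ti, yi in zip(t, series)) if False else \
        sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, series)) \
        / sum((ti - t_bar) ** 2 for ti in t)
    a = y_bar - b * t_bar
    return [yi - (a + b * ti) for ti, yi in zip(t, series)]

gdp = [100.0, 102.0, 105.0, 103.0, 108.0, 110.0]   # illustrative only
shocks = detrend(gdp)
# A positive residual marks GDP above trend, i.e. a positive demand shock.
```

By construction the OLS residuals sum to zero, so positive and negative "shocks" balance over the sample.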

I have chosen two factors to represent supply shocks. Firstly, as a result of the policy instruments in effect for part of my sample period, an incomes policy dummy variable must be included. There is no real alternative measure of this, only an individual interpretation of when the policy was in effect, as it appeared under a number of guises. The second factor is a measure of the international effect on UK inflation, through the role of imports. An obvious variable to consider is the relative import price, which has a straightforward interpretation. Another possibility is a measure of the real exchange rate: a positive supply shock, which raises the price of imports, would cause a fall in the real exchange rate. However, I have again opted for the more straightforward measure, choosing the relative import price as the second proxy for supply shocks.

As well as a general model for the entire sample, I intend to calculate models for subsections of the data, in order to discover varying effects of the variables. I will split the data into three ten-year blocks rather than into decades, owing to the distribution of the sample period.

A more in-depth method is to look at the actual data to determine if there are any structural breaks, where the inflation rate suddenly changes by a noticeable amount. If any such periods exist, a model for each one could then be calculated. Also, I will examine the period of the OPEC3 oil crisis to discern differences from a normal period.

2. Data Collection

In my collection of the data I made extensive use of computer facilities. The data for both proxies of demand shocks were obtained from electronic sources. The level of GDP was obtained from the Office for National Statistics' Navidata, and the residuals representing GDP deviations were then calculated using PC-GIVE4. The alternative proxy, capacity utilisation, was obtained from the Organisation for Economic Co-operation and Development's (OECD) database and used directly.

For the supply side shocks, the relative price of imports was calculated using data and formulae obtained from the Data Appendices: United Kingdom, Economica 1998 (With Supplement). The expression for relative price of imports is log(Pm/P), where Pm is the import price index for the UK and P is the output price index. Data and necessary formulae are obtained from two sources: the Office for National Statistics' Economic Trends Annual Supplement (ETAS) and the Central Statistics Office's Blue Book, United Kingdom National Accounts (BB).
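The relative import price series follows directly from the two indices; a one-line sketch (with made-up index values, not the ETAS/BB figures):

```python
import math

# Relative import price as defined above: log(Pm / P), where Pm is the
# UK import price index and P the output price index. Values invented.
Pm = 140.0
P = 125.0
rel_import_price = math.log(Pm / P)
# Positive when imports are dear relative to domestic output.
```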

The incomes policy variable was the most difficult to collect, as there was no single definitive source of dates for its presence. In fact, authors disagreed over when the policy was actually in operation: some counted all forms of incomes policy, including informal policies, while others allowed only officially announced policies. I therefore used a range of sources to find a consensus view.

The figures for inflation and core inflation were obtained from the Bank of England Public Enquiries Group, who provided monthly data for the period 1925 to 1996, from which I was able to calculate quarterly data for the period I required. Following the collection of the data, the resulting sample period ran from 1955Q1 to 1983Q4.

3. Statistical Techniques

To understand the modelling process used in this paper, first consider a simple regression, in which one variable depends only on another, such as C = f(Y). Written formally, we have:
yi = a + bxi + ui

Regression provides a "best fit", decomposing each observation as actual value = predicted value + residual.

The model's unknown parameters can be estimated using Ordinary Least Squares (OLS), which minimises S(a,b) = Σi (yi - (a + bxi))². The first order conditions give us the estimators:

b = Σi (xi - x̄)(yi - ȳ) / Σi (xi - x̄)²   (sample covariance / sample variance)

a = ȳ - bx̄
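A quick check of these formulas (a Python sketch): with data generated exactly from the line y = 2 + 3x, the estimators should recover a = 2 and b = 3.

```python
# Verify the OLS estimators b = cov(x, y) / var(x) and a = y_bar - b * x_bar
# on data that lies exactly on y = 2 + 3x.

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0 + 3.0 * xi for xi in x]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
    / sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar
# With no noise in y, the fit is exact: a = 2, b = 3.
```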

However, my data are not this straightforward, as they require multiple regression techniques, which still adhere to many of the rules laid out above. The model under this theory is:
yi = a + Σj=1..k bj xji + ui

where we require estimates of (a, b1, ..., bk), which can be obtained using a similar procedure as before, yielding:

a = ȳ - Σj bj x̄j

bj = Σi=1..n xji yi / Σi=1..n xji²   (with the regressors expressed in deviation form)

As well as having a number of regressors, my data set also features a time element, so we have a finite distributed lag model of the form:

yt = a + b0xt + b1xt-1 + ... + bLxt-L + ut = a + Σj=0..L bj xt-j + ut

The lag length, L, can be determined by using either the "simple to general" or the "general to simple" approach.

Both methods involve examining the t-ratio of the last lag of each model, to see whether it is significant or insignificant. The t-ratio is calculated by dividing the coefficient of the term by its standard error.

The simple-to-general approach starts with a simplified model, usually without any lags, and adds more lags until the last added term has an insignificant t-ratio. For general-to-simple, the process I have chosen to use, we start with a model containing many lags and remove the last lag, re-estimating each time, until the remaining final lag has a significant t-ratio.
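The general-to-simple search can be sketched as a simple loop (Python, with invented t-ratios standing in for regression output; in practice the model is re-estimated after each drop):

```python
# General-to-simple lag selection: drop the last lag while its t-ratio
# is insignificant at the chosen critical value. The t-ratios below are
# invented, standing in for the output of successive re-estimations.

def trim_lags(t_ratios, critical=1.96):
    """t_ratios[j] stands for the t-ratio on lag j when the model is
    estimated with lags 0..j; return the chosen lag length."""
    L = len(t_ratios) - 1
    while L > 0 and abs(t_ratios[L]) < critical:
        L -= 1
    return L

# lags 0..4: lags 4 and 3 are insignificant, lag 2 is significant
chosen = trim_lags([5.1, 3.2, 2.4, 0.8, 0.3])
```

Starting general guards against the omitted-variable bias that the simple-to-general route risks at each step.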

For the incomes policy, a dummy variable will be used. Dummies are qualitative indicators, used especially for seasonal effects or, in my case, to indicate the presence of the policy in a given quarter: Dt = 1 if incomes policy is in force in period t, and Dt = 0 otherwise.
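Constructing such a dummy is mechanical (Python sketch; the quarters and policy dates below are placeholders, not the dates used in the project):

```python
# Build the incomes-policy dummy: 1 in quarters when the policy was in
# force, 0 otherwise. Quarter labels and policy period are hypothetical.

quarters = ["1972Q4", "1973Q1", "1973Q2", "1973Q3"]
policy_on = {"1973Q1", "1973Q2"}          # hypothetical policy period

D = [1 if q in policy_on else 0 for q in quarters]
```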

Once the model is determined, the specification needs to be checked for errors, most notably autocorrelation, heteroskedasticity and omitted variables. The basic regression model assumes that the errors are independent of one another. When this assumption fails we have autocorrelation, in which each error term is correlated with previous errors; this arises most commonly in time series data, such as the inflation data in this project. The test used is the Lagrange Multiplier test, developed by Breusch5 and Godfrey6, with a null hypothesis7 of no autocorrelation, where rejection occurs if the test statistic is significantly high. Depending on the form of autocorrelation, the point estimators may still be unbiased8 and consistent but inefficient, or they may be biased and inconsistent. Either way, the t-tests would be invalid, owing to invalid standard errors.
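A stripped-down first-order version of the Lagrange Multiplier idea can be sketched in Python: regress the residual on its own lag (the full Breusch-Godfrey test also includes the original regressors) and form LM = n·R², to be compared with a chi-squared critical value. The residual series below is artificial.

```python
# Simplified first-order LM test for autocorrelation: auxiliary
# regression of u_t on u_{t-1}, statistic LM = n * R^2.

def lm_autocorrelation(resid):
    y = resid[1:]                 # u_t
    x = resid[:-1]                # u_{t-1}
    n = len(y)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = y_bar - b * x_bar
    ess = sum((a + b * xi - y_bar) ** 2 for xi in x)   # explained SS
    tss = sum((yi - y_bar) ** 2 for yi in y)           # total SS
    return n * (ess / tss)        # LM = n * R^2

# residuals following u_t = 0.9 u_{t-1} exactly give R^2 = 1, LM = n:
lm_high = lm_autocorrelation([0.9 ** k for k in range(8)])
```

A large statistic relative to the chi-squared(1) critical value (3.84 at 5%) rejects the null of no autocorrelation.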

Instead of considering the actual value of the error term, we can also consider its variance. The homoskedasticity assumption is that the variance is constant; where the variance is not constant, we have heteroskedasticity. The test is based on White9 and involves an auxiliary regression of the squared residuals on the original regressors xit and all their squares. The null is unconditional homoskedasticity, and the alternative is that the variance depends on the regressors in some way. Even with heteroskedasticity, the point estimators remain unbiased and consistent but suffer from inefficiency, leading to incorrect standard errors and hence invalid t-tests.

Ramsey's RESET test10 checks that the model is correctly specified, with no omitted variables. The test works by adding powers of the fitted values, such as the squared fitted values, to the regression and testing their significance. Failure to detect an incorrect functional form leads to biased and inconsistent point estimators, causing t-tests and confidence intervals to be invalid.

In a later model I will look for the presence of a structural break, using a Chow11 test, which proceeds as follows. The sample is divided into two subsamples, denoted by S1 and S2, where S1 contains n1 observations and S2 contains n2 = n - n1 observations. The unrestricted model of the alternative hypothesis is written as

yi = a + Σj bj xji + ui ,   ui ~ NID(0, σ²)   if i lies in S1

and

yi = a+ + Σj bj+ xji + ui ,   ui ~ NID(0, σ²)   if i lies in S2

so that changes in regression coefficients are permitted.

The null hypothesis of constant coefficients consists of the (k+1) restrictions of

H0: a = a+ and bj = bj+ , j = 1,...,k

Provided n1 > (k+1) and n2 > (k+1), both subsample regressions can be estimated. Let RSS denote the residual sum of squares from the OLS regression over the whole sample, and RSS1 and RSS2 those from the two subsamples. H0 can be tested using the F statistic

F = { [RSS - (RSS1 + RSS2)] / (k+1) } / { (RSS1 + RSS2) / (n - 2k - 2) }

which is distributed as F(k+1, n-2k-2) under H0, with large values indicating that the data are inconsistent with H0, and hence the possible existence of a structural break.
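The statistic is easy to compute once the three residual sums of squares are in hand (Python sketch; the RSS values are illustrative, not from the project):

```python
# Chow test statistic from pooled and subsample residual sums of squares:
# F = [RSS - (RSS1 + RSS2)] / (k+1)  divided by  (RSS1 + RSS2) / (n - 2k - 2).

def chow_f(rss_pooled, rss1, rss2, n, k):
    """F statistic for H0: identical coefficients in both subsamples."""
    numerator = (rss_pooled - (rss1 + rss2)) / (k + 1)
    denominator = (rss1 + rss2) / (n - 2 * k - 2)
    return numerator / denominator

# illustrative values: pooled fit noticeably worse than the split fits
F = chow_f(rss_pooled=120.0, rss1=40.0, rss2=50.0, n=100, k=3)
# compare F with the F(k+1, n-2k-2) critical value; large F suggests a break
```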

All modelling and testing procedures are carried out using PC-GIVE.

Notes

1. A. W. Phillips, "The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861-1957". Economica, Nov. 1958.
2. R. E. Hall & J. B. Taylor, Macroeconomics. Fifth Edition.
3. Organisation of Petroleum Exporting Countries. A body of 11 countries which export large quantities of crude oil, with an agreement to limit supply and hence raise oil prices, which had a particularly severe effect in the 1970s.
4. PC-GIVE 8.00. Written by Jurgen A. Doornik and David F. Hendry. Copyright D. F. Hendry
5. T. S. Breusch, "Testing for Autocorrelation in Dynamic Linear Models". Australian Economic Papers, Vol. 17, 1978.
6. L. G. Godfrey, "Testing for Higher Order Serial Correlation in Regression Equations when the Regressors Include Lagged Dependent Variables". Econometrica, Vol. 46, 1978.
7. A null hypothesis is some hypothesis about a parameter which will be believed unless sufficient contrary evidence is produced.
8. An estimator is said to be unbiased if and only if the bias is equal to zero. That is, the estimator, on average, gets the correct value, and there is no systematic tendency away from the true value.
9. H. White, "A Heteroscedasticity Consistent Covariance Matrix Estimator and a Direct Test of Heteroskedasticity". Econometrica, Vol. 48, 1980.
10. J. B. Ramsey, "Tests for Specification Errors in Classical Linear Least Squares Regression Analysis". Journal of the Royal Statistical Society, Series B, Vol. 31, 1969.
11. G. C. Chow, "Tests of Equality Between Sets of Coefficients in Two Linear Regressions". Econometrica, Vol. 28, 1960.



Garry Swann

Email: gjs@swann39.freeserve.co.uk
