Wednesday, 24 February 2016

Long-Run VAR Restrictions

The identifying assumptions in a VAR approach require knowledge of how certain types of variables react instantaneously to certain types of shocks. This can cause a number of difficulties when we are dealing with non-sluggish variables, and economic theory, which focuses mainly on the long run, grants us little guidance.
As such we use an alternative approach. We impose long-run restrictions in our model, based on economic theory, and use these to identify impulse response functions. For example, we might state that a positive aggregate demand shock has no impact on output in the long run, and only serves to raise the price level.
Then the long-run effect of the shock on yt is the sum of its effects on ∆yt, ∆yt+1, ∆yt+2 and so on. Another way of saying this is that the long-run effect of the shock on the level of yt is the sum of its impulse responses.
If we consider the multivariate first-order model

Zt = BZt−1 + Cεt
and Zt = (∆yt, ∆xt)

Then the impulse response to the shock is given by C in period 1, BC in the next period, and BⁿC after n periods. Summing this geometric series, C + BC + B²C + ⋯, gives (I − B)⁻¹C. This is the long-run response to the shock.
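As a quick numerical check, here is a small Python sketch, with made-up B and C matrices, verifying that summing the impulse responses BⁿC reproduces the closed-form long-run effect (I − B)⁻¹C:

```python
# Numerical check: the cumulative impulse response equals (I - B)^-1 C.
# B and C below are arbitrary illustrative matrices (B must be stable).
import numpy as np

B = np.array([[0.5, 0.1],
              [0.0, 0.3]])   # eigenvalues inside the unit circle
C = np.array([[1.0, 0.0],
              [0.5, 1.0]])

# The response after n periods is B^n C; summing over n gives the
# long-run effect of the shock on the level of the variables.
cumulative = sum(np.linalg.matrix_power(B, n) @ C for n in range(2000))
closed_form = np.linalg.inv(np.eye(2) - B) @ C

print(np.allclose(cumulative, closed_form))  # True
```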
Blanchard and Quah (1989) used a two-variable VAR in the log-difference of GDP and the unemployment rate. Because the estimated VAR is stationary, both structural shocks have zero long-run effect on the unemployment rate. The identifying assumption was that only one of the two structural shocks has a long-run effect on the level of output. This was labelled the “supply” shock, while the shock with no long-run effect on output was labelled the “demand” shock. Keynesians emphasize the importance of demand shocks, while advocates of the Real Business Cycle approach see supply shocks as more important. The results of Blanchard and Quah implied that demand shocks were responsible for the vast majority of short-run fluctuations.
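To make this identification scheme concrete, here is a rough Python sketch of Blanchard-Quah-style long-run identification using statsmodels. The simulated series and lag length are purely illustrative, not the original 1989 dataset:

```python
# Sketch of Blanchard-Quah long-run identification on simulated data.
# Variable ordering: (output growth, unemployment); the second shock
# is restricted to have no long-run effect on the level of output.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T, k = 500, 2
data = np.zeros((T, k))
for t in range(1, T):                    # illustrative stationary DGP
    data[t] = 0.5 * data[t - 1] + rng.normal(size=k)

res = VAR(data).fit(4)
A1 = res.coefs.sum(axis=0)               # A(1): sum of the lag matrices
M = np.linalg.inv(np.eye(k) - A1)        # long-run multiplier (I - A(1))^-1
Sigma = res.sigma_u                      # reduced-form residual covariance

# Pick the impact matrix C0 with C0 C0' = Sigma such that the long-run
# matrix M C0 is lower triangular: the "demand" shock (column 2) then
# has zero long-run effect on the first variable by construction.
Theta = np.linalg.cholesky(M @ Sigma @ M.T)   # long-run responses
C0 = (np.eye(k) - A1) @ Theta                 # impact responses

print(np.round(M @ C0, 3))               # upper-right entry is zero
```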

However, their assumption that neither supply nor demand shocks can change the unemployment rate in the long run may not be correct. Galí (1999) applied a similar analysis to a model that speaks more directly to the Real Business Cycle debate and its predictions for the labour market. RBC models assume technology shocks drive the business cycle; this, their proponents argue, explains why hours worked are higher in booms than in recessions. Galí's VAR used the log-difference of output per hour worked (labour productivity) and the log-difference of hours worked. The long-run restriction was that the supply shock (now called the “technology” shock) can affect productivity in the long run, while the non-technology shock cannot. The model let the data determine the long-run effects of technology and non-technology shocks on hours worked.
It was found that non-technology shocks cause both output and productivity to rise in the short run. There is evidence that short-run cyclical movements in productivity are not due solely to technology shocks, but instead reflect the various costs associated with adjusting labour input: rather than hire new labour in a boom, firms work existing labour a bit harder, while in a recession employed labour is under-utilized. Some have specified this VAR with a stationary transformation of labour input, so that neither shock has a long-run effect on it. Technology shocks were found to cause productivity to go up but hours to go down. This suggests that short-run output fluctuations are demand-driven rather than supply-driven, a finding that caused some controversy because it goes against Real Business Cycle theory.

In macro, there are often several very important but largely unobservable variables that are essential to our models. An example would be potential output in the AS-AD model. However, although a variable isn't directly observable, we may nevertheless be able to estimate how it behaves. For example, if GDP were to increase significantly but inflation remained constant, we could infer that there had been an increase in potential output.
However, potential output doesn't vary much between quarters, and it is highly likely that the quarterly fluctuations in inflation contain a lot of fairly random noise. There is, however, probably a useful signal in the data as well, so we are dealing with a type of signal extraction problem. A standard tool for extracting that signal is the Kalman filter.
This is similar to the instrumental variables approach in econometrics: we find an observable variable that we know to be correlated with our latent variable. The amount of weight placed on the observable variable when formulating an expectation of the latent variable depends on how correlated the two are and on their relative standard deviations. If the observable variable has a high standard deviation (so it's a poor signal), then it receives little weight.
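A toy simulation makes the point: the weight you would place on the observable (the coefficient from regressing the latent variable on it) falls as the observable gets noisier. All numbers below are made up for illustration:

```python
# Signal extraction weight: noisier observables get less weight.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 1.0, n)                   # latent variable
for sigma_v in (0.5, 2.0):                    # low-noise vs high-noise signal
    z = x + rng.normal(0.0, sigma_v, n)       # observable = signal + noise
    weight = np.cov(x, z)[0, 1] / np.var(z)   # weight on z in E[x | z]
    print(sigma_v, round(weight, 3))          # ~0.8, then ~0.2
```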

State-space models are a general class of linear time series models that mix together observable and unobservable variables. These models can be described using two equations. The first, known as the state or transition equation, describes how a set of unobservable state variables, St, evolve over time as follows:
St = FSt−1 + ut

The term ut can include either normally-distributed errors or perhaps zeros if the equation being described is an identity (for example, if the state vector contains both xt and xt−1, the row describing the evolution of xt−1 is an identity with a zero error).
The second equation in a state-space model, which is known as the measurement equation, relates a set of observable variables, Zt, to the unobservable state variables.

Zt = HSt + vt

Again, the term vt can include either normally-distributed errors or perhaps zeros if the equation being described is an identity. We can’t observe St but suppose we could replace it by an observable unbiased guess based on information available up to time t − 1.
We call this guess St|t−1 and suppose its errors are normally distributed with a known covariance matrix. Then the observed variables could be written as

Zt = HSt|t−1 + vt + H(St − St|t−1)

Because St|t−1 is observable and the unobservable elements (vt and St − St|t−1) are normally distributed, this model can be estimated via maximum-likelihood methods.
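As an illustration, the simplest model of this kind is the local-level model: a random-walk state observed with noise. The sketch below simulates one and estimates its two variances by maximum likelihood with statsmodels (which evaluates the likelihood by running the Kalman filter); the series and parameter values are arbitrary:

```python
# Local-level model: S_t = S_{t-1} + u_t (state), Z_t = S_t + v_t (measurement).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 300
level = np.cumsum(rng.normal(0.0, 0.1, T))   # unobserved state S_t
y = level + rng.normal(0.0, 0.5, T)          # observed Z_t

mod = sm.tsa.UnobservedComponents(y, level="local level")
res = mod.fit(disp=False)
print(res.params)   # ML estimates of the measurement and state variances
```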
A method for generating these unbiased guesses is known as the Kalman filter. It is an iterative method. Starting from one period’s estimates of the state variables, it uses the observable data for the next period to update these estimates. First we formulate an estimate of the state variable at time t given information at time t − 1.
Since the state equation is St = FSt−1 + ut and ut has mean zero, this estimate is

St|t−1 = FSt−1|t−1
This means that in period t − 1, the expected value for the observables in period t are

Zt|t−1 = HSt|t−1 = HFSt−1|t−1

Then in period t, when we observe Zt, the question is: how do we update our guesses for the state variables in light of the “news” in Zt − HFSt−1|t−1?
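The standard answer is to add a fraction of the news to the prediction, with the weight (the Kalman gain) reflecting the relative precision of the prediction and the measurement. A bare-bones numpy sketch of the predict/update recursion, with made-up F, H and noise variances, might look like this:

```python
# Minimal Kalman filter recursion; all parameter values are illustrative.
import numpy as np

F = np.array([[0.9]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])   # state / measurement noise

rng = np.random.default_rng(3)
T = 200
s = np.zeros((T, 1)); z = np.zeros((T, 1))
for t in range(1, T):                          # simulate the model
    s[t] = F @ s[t - 1] + rng.normal(0, np.sqrt(Q[0, 0]), 1)
    z[t] = H @ s[t] + rng.normal(0, np.sqrt(R[0, 0]), 1)

s_filt = np.zeros((T, 1)); P = np.eye(1)
for t in range(1, T):
    s_pred = F @ s_filt[t - 1]                 # S_{t|t-1} = F S_{t-1|t-1}
    P_pred = F @ P @ F.T + Q
    news = z[t] - H @ s_pred                   # Z_t - H S_{t|t-1}
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    s_filt[t] = s_pred + K @ news              # update with weighted news
    P = (np.eye(1) - K @ H) @ P_pred

# The filtered estimates should track the state better than the raw data.
print(np.mean((s_filt - s) ** 2) < np.mean((z - s) ** 2))
```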

The Kalman filter is what is known as a one-sided filter. The estimates of states at time t are based solely on information available at time t. No data after period t is used to calculate estimates of the unobserved state variables.
This is a reasonable model for how someone might behave if they were learning about the state variables in real time. However, researchers have access to the full history of the data set, including all the observations after time t.
For this reason, economists generally estimate time-varying models using a method known as the Kalman smoother. This is a two-sided filter that uses data from both before and after time t to compute expected values of the state variables at time t. Basically, you run the Kalman filter first and then work backwards from the final estimates, further exploiting the joint distribution of the states and the observed data.
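For intuition, here is a quick comparison of one-sided (filtered) and two-sided (smoothed) estimates of the state in a simulated local-level model; statsmodels runs the smoother automatically after fitting, and the data here are again made up:

```python
# Filtered vs smoothed state estimates in a simulated local-level model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
level = np.cumsum(rng.normal(0.0, 0.1, 300))   # true unobserved state
y = level + rng.normal(0.0, 0.5, 300)          # noisy observations

res = sm.tsa.UnobservedComponents(y, level="local level").fit(disp=False)
filtered = res.filtered_state[0]    # uses data up to time t only
smoothed = res.smoothed_state[0]    # uses the full sample

# The smoothed estimates typically track the true state more closely.
print(np.mean((filtered - level) ** 2), np.mean((smoothed - level) ** 2))
```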

