# H(el)P(ful) filtering

# I.

Separating the cycle from the trend of a time series is a common task for macroeconomists. The choice of how to detrend variables can have major effects on firms and the economy as a whole. For example, the “Credit-to-GDP gaps” published by the BIS are used by financial regulators to decide how to adjust countercyclical capital buffers, which can mean that banks have to issue more capital. Estimating a different cycle can therefore directly affect bank profitability.

The usual method to detrend macroeconomic variables is the Hodrick-Prescott (HP) filter. The HP filter is nice because it’s easy to understand and implement, and the choice of the smoothing parameter \(\lambda\) lets us use outside knowledge to set a cycle length that we consider reasonable.

But along came James Hamilton, who last year wrote a paper called “Why you should never use the Hodrick-Prescott filter” (published, voxeu, pdf). He argues that the HP filter has no theoretical foundation and that it doesn’t accomplish what we would like it to do.

Hamilton proposes a better way to detrend. First, you choose an appropriate cycle length. Then, for every point in time, the trend is the value you would have predicted one cycle ago with a linear regression. To appropriately model time series behavior and, as a nice side effect, capture seasonalities, you pick lag horizons that are multiples of 4 for quarterly and of 12 for monthly frequencies.

In this post, I’ll show how to implement the new Hamilton detrending method and compare it with the HP filter. I’ll take one of the variables that Hamilton uses in his paper and get exactly the same values.

# II.

First, load some packages (see here how to use `FredR`):
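A plausible version of this step, assuming the packages named in this post plus `dplyr`, `tidyr`, and `ggplot2` for wrangling and plotting:

```r
library(FredR)       # wrapper around the FRED API
library(mFilter)     # provides hpfilter()
library(data.table)  # fast lags via shift()
library(dplyr)       # data wrangling
library(tidyr)       # reshaping to long format
library(ggplot2)     # plotting
```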

Insert your FRED API key below:
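Something along these lines, where `"my_fred_key"` is a placeholder for your own key:

```r
# Replace with your personal key from https://fred.stlouisfed.org
api_key <- "my_fred_key"
fred <- FredR(api_key)
```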

Get nonfarm employment data from Fred. We keep only end of quarter values to use the same sample as Hamilton (first quarter of 1947 to last quarter of 2016). And we transform the data the same way (“100 times the log of end-of-quarter values”):
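A sketch of this step, assuming the monthly FRED series `PAYEMS` (total nonfarm employment) and the `fred` client from above:

```r
emp_raw <- fred$series.observations(series_id = "PAYEMS")

df <- emp_raw %>%
  mutate(date = as.Date(date), value = as.numeric(value)) %>%
  # keep only end-of-quarter months
  filter(format(date, "%m") %in% c("03", "06", "09", "12")) %>%
  # Hamilton's sample: 1947Q1 to 2016Q4
  filter(date >= as.Date("1947-01-01"), date <= as.Date("2016-12-31")) %>%
  # "100 times the log of end-of-quarter values"
  transmute(date, emp = 100 * log(value))
```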

We use the HP filter from the `mFilter` package like this:
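With quarterly data, the conventional smoothing parameter is \(\lambda = 1600\). A sketch, assuming the `df` dataframe built above:

```r
hp <- hpfilter(df$emp, freq = 1600, type = "lambda")

df <- df %>%
  mutate(emp_trend = as.numeric(hp$trend),   # HP trend
         emp_hp    = as.numeric(hp$cycle))   # HP cyclical component
```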

The dataframe then looks like this:

Get NBER recession dummies:
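For example via the FRED series `USREC` (1 = recession month), again keeping end-of-quarter values and merging onto `df`:

```r
recessions <- fred$series.observations(series_id = "USREC") %>%
  mutate(date = as.Date(date), usrec = as.numeric(value)) %>%
  filter(format(date, "%m") %in% c("03", "06", "09", "12")) %>%
  select(date, usrec)

df <- df %>% left_join(recessions, by = "date")
```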

Plot the data against the HP trend:
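A minimal `ggplot2` version; recession shading is omitted here for brevity:

```r
ggplot(df, aes(x = date)) +
  geom_line(aes(y = emp)) +
  geom_line(aes(y = emp_trend), color = "red") +
  labs(x = NULL, y = "100 * log(nonfarm employment)")
```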

So employment is an upwards trending variable and the HP filter (red line) nicely captures that trend.

Next, I implement the new procedure from Hamilton (2017). First, we’ll need some lags of our employment variable. For this, I like to use `data.table`:
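`data.table`’s `shift()` can create several lags in one call; assuming the `df` dataframe from the HP step:

```r
dt <- as.data.table(df)

# lags 8 through 11 of the (log) employment series
lag_cols <- paste0("emp_lag", 8:11)
dt[, (lag_cols) := shift(emp, n = 8:11)]
```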

If the true data generating process were a random walk, the optimal prediction would just be the value two years (eight quarters) ago. We can calculate this random-walk version (\(y_{t} - y_{t-8}\)) right away:
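Given the lags created above, this is one line:

```r
# random-walk benchmark: cycle is the 8-quarter difference
dt[, emp_noise := emp - emp_lag8]
```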

Now we want to regress the current level of the employment variable on lags 8 through 11.

Create the expression for the linear model:
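One way to build that formula programmatically, assuming the `lag_cols` vector from the lag step:

```r
fmla <- as.formula(paste("emp ~", paste(lag_cols, collapse = " + ")))
# emp ~ emp_lag8 + emp_lag9 + emp_lag10 + emp_lag11
```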

Check it out:

Run the regression:
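The regression itself; `lm()` automatically drops the leading rows where the lags are `NA`:

```r
reg <- lm(fmla, data = dt)
summary(reg)
```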

Add fitted values and calculate the cyclical part:
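The rows used by `lm()` are exactly those where the longest lag is available, so the fitted values can be matched back like this:

```r
dt[!is.na(emp_lag11), emp_fitted := fitted(reg)]
dt[, emp_reg := emp - emp_fitted]   # regression-based cycle
```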

Create a new dataframe that pits the two cyclical components against each other in tidy (“long”) fashion:
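A sketch with `tidyr`, where the column names `random_walk` and `regression` are my own labels:

```r
cycles <- dt %>%
  select(date, random_walk = emp_noise, regression = emp_reg) %>%
  pivot_longer(c(random_walk, regression),
               names_to = "method", values_to = "cycle")
```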

Which looks like this:

Plot the new cycles:
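Using the long `cycles` dataframe from above:

```r
ggplot(na.omit(cycles), aes(x = date, y = cycle, color = method)) +
  geom_line() +
  labs(x = NULL, y = "Cyclical component")
```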

This is the lower left panel of Hamilton’s Figure 5.

Let’s check out the moments of the extracted cycles:
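With the long dataframe, this is a grouped summary:

```r
cycles %>%
  group_by(method) %>%
  summarise(mean = mean(cycle, na.rm = TRUE),
            sd   = sd(cycle, na.rm = TRUE))
```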

As explained by Hamilton, the noise component has a positive mean, but the regression version is demeaned. The standard deviations exactly match those reported by Hamilton in Table 2.

Let’s compare the HP filtered cycles with the alternative cycles:
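With both cycles sitting in the same table, the comparison is one line, assuming the `dt` table from above:

```r
cor(dt$emp_hp, dt$emp_reg, use = "complete.obs")
```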

The two series have a correlation of 0.72.

# III.

I understand the criticism of the HP filter, but at least everyone knew that it was just a tool: you eyeballed your series and tried to extract reasonable trends.

With the new procedure, the smoothing parameter \(\lambda\) is gone, but instead we have to choose how many years of data to skip to estimate the trend. Hamilton recommends taking a cycle length of two years for business cycle data and five years for the more sluggish credit aggregates. Isn’t that a bit arbitrary, too?

Having to pick whole years as cycle lengths also makes the method quite coarse-grained, as the parameter “years” can only be set to positive integers. Another downside is that the new method truncates the beginning of your sample, because you use that data to estimate the trend. This is a nontrivial problem in macro, where time dimensions are often short.

The Hamilton detrending method has additional benefits, though: it’s backward-looking, while the (standard) HP filter also uses data from the future, although a backward-looking one-sided version of the HP filter exists. Also, it adjusts for seasonality by default.

We can only benefit from a better understanding of the problems of the HP filter. And we can always compare results using both methods.

## References

Hamilton, James D. (2017). “Why you should never use the Hodrick-Prescott filter.” *Review of Economics and Statistics.* (doi)