## Monday, November 20, 2017

### More on Path Forecasts

I blogged on path forecasts yesterday.  A reader just forwarded this interesting paper, of which I was unaware.  Lots of ideas and up-to-date references.

## Thursday, November 16, 2017

### Forecasting Path Averages

Consider two standard types of $$h$$-step forecast:

(a).  $$h$$-step forecast, $$y_{t+h,t}$$, of $$y_{t+h}$$

(b).  $$h$$-step path forecast, $$p_{t+h,t}$$, of $$p_{t+h} = \{ y_{t+1}, y_{t+2}, ..., y_{t+h} \}$$.

Clive Granger used to emphasize the distinction between (a) and (b).

As regards path forecasts, lately there's been some focus not on forecasting the entire path $$p_{t+h}$$, but rather on forecasting the path average:

(c).  $$h$$-step path average forecast, $$a_{t+h,t}$$, of $$a_{t+h} = \frac{1}{h} \left( y_{t+1} + y_{t+2} + ... + y_{t+h} \right)$$
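As a concrete sketch of the relationship between (a), (b), and (c): once you have a path forecast, the path average forecast comes for free as its mean. The simulated AR(1) data-generating process, known parameters, and variable names below are my own illustrative assumptions, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DGP (an assumption): AR(1), y_t = phi * y_{t-1} + e_t
phi, T, h = 0.8, 200, 4
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

y_T = y[-1]  # forecast origin

# (a) h-step point forecast of y_{T+h} under a known AR(1): phi^h * y_T
point_h = phi**h * y_T

# (b) h-step path forecast: point forecasts of {y_{T+1}, ..., y_{T+h}}
path = np.array([phi**i * y_T for i in range(1, h + 1)])

# (c) h-step path average forecast: the mean of the path forecast,
# illustrating that (b) is sufficient for (c)
path_avg = path.mean()

print(point_h, path, path_avg)
```

The converse fails, as the post notes: `path_avg` alone cannot recover `path`, since many different paths share the same average.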

The leading case is forecasting "average growth", as in Mueller and Watson (2016).

Forecasting path averages (c) never fully resonated with me.  After all, (b) is sufficient for (c), but not conversely -- the average is just one aspect of the path, and additional aspects (overall shape, etc.) might be of interest.

Then, listening to Ken West's FRB SL talk, my eyes opened.  Of course the path average is insufficient for the whole path, but it's surely the most important aspect of the path -- if you could know just one thing about the path, you'd almost surely ask for the average.  Moreover -- and this is important -- it might be much easier to provide credible point, interval, and density forecasts of $$a_{t+h}$$ than of $$p_{t+h}$$.

So I still prefer full path forecasts when feasible/credible, but I'm now much more appreciative of path averages.

## Wednesday, November 15, 2017

### FRB St. Louis Forecasting Conference

Got back a couple days ago.  Great lineup.  Wonderful to see such sharp focus.  Many thanks to FRBSL and the organizers (Domenico Giannone, George Kapetanios, and Mike McCracken).  I'll hopefully blog on one or two of the papers shortly.  Meanwhile, the program is here.

## Wednesday, November 8, 2017

### Artificial Intelligence, Machine Learning, and Productivity

As Bob Solow famously quipped, "You can see the computer age everywhere but in the productivity statistics".  That was in 1987.  The new "Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics," NBER w.p. 24001, by Brynjolfsson, Rock, and Syverson, brings us up to 2017.  Still a puzzle.  Fascinating.  Ungated version here.

## Sunday, November 5, 2017

### Regression on Term Structures

An important insight regarding the use of dynamic Nelson-Siegel (DNS) and related term-structure modeling strategies (see here and here) is that they facilitate regression on an entire term structure.  Regressing something on a curve might initially sound strange, or ill-posed.  The insight, of course, is that DNS distills curves into level, slope, and curvature factors; hence if you know the factors, you know the whole curve.  And those factors can be estimated and included in regressions, effectively enabling regression on a curve.

In a stimulating new paper, “The Time-Varying Effects of Conventional and Unconventional Monetary Policy: Results from a New Identification Procedure”, Atsushi Inoue and Barbara Rossi put that insight to very good use. They use DNS yield curve factors to explore the effects of monetary policy during the Great Recession.  That monetary policy is often dubbed "unconventional" insofar as it involved the entire yield curve, not just a very short "policy rate".

I recently saw Atsushi present it at NBER-NSF and Barbara present it at Penn's econometrics seminar.  It was posted today, here.