Sunday, April 30, 2017

One Millionth Birthday...

...in event time.  It's true: yesterday No Hesitations passed 1,000,000 page views.  Totally humbling.  I am grateful for your interest and support.

Thursday, April 20, 2017

Automated Time-Series Forecasting at Google

Check out this piece on automated time-series forecasting at Google.  It's a fun and quick read. Several aspects are noteworthy.  

On the upside:

-- Forecast combination features prominently -- they combine forecasts from an ensemble of models.  (A minimal sketch appears at the end of this post.)

-- Uncertainty is acknowledged -- they produce interval forecasts, not just point forecasts.

On the downside:

-- There's little to their approach that wasn't well known and widely used in econometrics a quarter century ago (or more).  Might not something like Autobox, which has been around and evolving since the 1970s, do as well or better?
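For concreteness, here's a minimal sketch of what equal-weight combination plus interval forecasts might look like. Everything in it is an illustrative assumption on my part (the toy model set, the equal weights, the interval width calibrated from rolling one-step errors); it is emphatically not Google's system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monthly series: trend + seasonality + AR(1) noise (simulated data)
n = 120
t = np.arange(n)
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.6 * e[i - 1] + rng.normal()
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + e
train = y[:-12]

# Three simple benchmark forecasters (an illustrative ensemble)
def fc_mean(x, h):
    return np.full(h, x.mean())            # unconditional mean

def fc_rw(x, h):
    return np.full(h, x[-1])               # random walk (last value)

def fc_drift(x, h):
    d = (x[-1] - x[0]) / (len(x) - 1)      # random walk with drift
    return x[-1] + d * np.arange(1, h + 1)

models = [fc_mean, fc_rw, fc_drift]
h = 12
combo = np.mean([m(train, h) for m in models], axis=0)  # equal-weight combination

# Interval forecast: calibrate the width from rolling one-step-ahead errors
# of the combined forecast (a simplification -- it ignores the widening of
# uncertainty at longer horizons)
errs = [train[i] - np.mean([m(train[:i], 1)[0] for m in models])
        for i in range(60, len(train))]
sigma = np.std(errs)
lower, upper = combo - 1.96 * sigma, combo + 1.96 * sigma

print("point forecasts:", np.round(combo, 2))
print("95% interval half-width:", round(1.96 * sigma, 2))
```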

Friday, April 14, 2017

On Pseudo Out-of-Sample Model Selection

Great to see that Hirano and Wright (HW), "Forecasting with Model Uncertainty", finally came out in Econometrica. (Ungated working paper version here.)

HW make two key contributions. First, they characterize rigorously the source of the inefficiency in forecast model selection by pseudo out-of-sample methods (expanding-sample, split-sample, ...), adding invaluable precision to more intuitive discussions like Diebold (2015). (Ungated working paper version here.) Second, and very constructively, they show that certain simulation-based estimators (including bagging) can considerably reduce, if not completely eliminate, the inefficiency.
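To fix ideas, here's a stylized Monte Carlo sketch of the two conventional pseudo-out-of-sample schemes in the simplest possible setting: choosing between an intercept-only model and a single weak predictor, then forecasting from the full sample. The design (sample size, local parameter value, equal split) is my illustrative assumption, not HW's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=200, beta=0.1):
    """DGP with a single weak predictor."""
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    return x, y

def fit_predict(x, y, x_new, use_x):
    """Forecast at x_new from the predictor model or intercept only."""
    if use_x:
        X = np.column_stack([np.ones_like(x), x])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        return b[0] + b[1] * x_new
    return y.mean()

def poos_mse(x, y, use_x, scheme, n0):
    """Pseudo out-of-sample MSE under the split or expanding scheme."""
    errs = []
    for t in range(n0, len(y)):
        if scheme == "split":   # parameters frozen at first-subsample fit
            pred = fit_predict(x[:n0], y[:n0], x[t], use_x)
        else:                   # "expanding": re-estimate each period
            pred = fit_predict(x[:t], y[:t], x[t], use_x)
        errs.append(y[t] - pred)
    return np.mean(np.square(errs))

def forecast_after_selection(x, y, x_new, scheme):
    """Select the model by POOS MSE, then forecast from the full sample."""
    n0 = len(y) // 2
    use_x = poos_mse(x, y, True, scheme, n0) < poos_mse(x, y, False, scheme, n0)
    return fit_predict(x, y, x_new, use_x)

# Forecast risk at a weak (local-to-zero) value of beta
beta, reps = 0.1, 500
risk = {s: 0.0 for s in ("split", "expanding")}
for _ in range(reps):
    x, y = simulate(beta=beta)
    x_new = rng.normal()
    for s in risk:
        risk[s] += (forecast_after_selection(x, y, x_new, s) - beta * x_new) ** 2 / reps
print(risk)
```

Under HW's weak-predictor asymptotics, both conventional schemes tend to perform poorly relative to what's achievable; their simulation-based risk-reduction method (or bagging) is what rescues them.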


Abstract: We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in‐sample model selection using the Akaike information criterion; out‐of‐sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak‐predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out‐of‐sample and split‐sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out‐of‐sample and split‐sample schemes perform poorly if implemented in the conventional way. But they perform well, if implemented in conjunction with our risk‐reduction method or bagging.
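On the bagging point, here's a minimal, self-contained illustration of bagging a post-selection forecast -- generic bagging of a hard-threshold pretest rule, not HW's exact risk-reduction procedure. Averaging over bootstrap resamples smooths the discontinuity that hard selection induces, which is the intuition for the risk reduction. All parameter choices are mine.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data with a weak predictor (illustrative DGP)
n, beta = 200, 0.1
x = rng.normal(size=n)
y = beta * x + rng.normal(size=n)
x_new = 1.0   # hypothetical new predictor value

def pretest_forecast(xs, ys, x_new):
    """Keep the predictor only if its t-statistic exceeds 1.96."""
    xbar = xs.mean()
    sxx = np.sum((xs - xbar) ** 2)
    b1 = np.sum((xs - xbar) * (ys - ys.mean())) / sxx
    b0 = ys.mean() - b1 * xbar
    resid = ys - b0 - b1 * xs
    se = np.sqrt(resid @ resid / (len(ys) - 2) / sxx)
    return b0 + b1 * x_new if abs(b1 / se) > 1.96 else ys.mean()

def bagged_forecast(xs, ys, x_new, B=200):
    """Average the pretest forecast over bootstrap resamples."""
    m = len(ys)
    draws = []
    for _ in range(B):
        s = rng.choice(m, size=m)   # resample with replacement
        draws.append(pretest_forecast(xs[s], ys[s], x_new))
    return np.mean(draws)

print("pretest:", pretest_forecast(x, y, x_new))
print("bagged: ", bagged_forecast(x, y, x_new))
```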

Monday, April 10, 2017

Big Data, Machine Learning, and the Macroeconomy

Coming soon at Norges Bank:

CALL FOR PAPERS 
Big data, machine learning and the macroeconomy 
Norges Bank, Oslo, 2-3 October 2017 

Data, in both structured and unstructured form, are becoming easily available on an ever-increasing scale. To find patterns and make predictions using such big data, machine learning techniques have proven to be extremely valuable in a wide variety of fields. This conference aims to gather researchers using machine learning and big data to answer challenges relevant for central banking. 

Examples of questions and topics of interest are: 

Forecasting applications and methods
- Can better predictive performance of key economic aggregates (GDP, inflation, etc.) be achieved by using alternative data sources? 
- Does the machine learning tool-kit add value to already well-established forecasting frameworks used at central banks? 

Causal effects
- How can new sources of data and methods be used to learn about the causal mechanisms underlying economic fluctuations? 

Text as data
- Communication is at the heart of modern central banking. How does this affect markets? 
- How can textual data be linked to economic concepts like uncertainty, news, and sentiment? 

Confirmed keynote speakers are: 
- Victor Chernozhukov (MIT) 
- Matt Taddy (Microsoft, Chicago Booth) 

The conference will feature 10-12 papers. If you would like to present a paper, please send a draft or an extended abstract to mlconference@norges-bank.no by 31 July 2017. Authors of accepted papers will be notified by 15 August. For other questions regarding this conference, please send an e-mail to mlconference@norges-bank.no. Conference organizers are Vegard H. Larsen and Leif Anders Thorsrud.

13th Annual Real-Time Conference

Great news: The Bank of Spain will sponsor the 13th annual conference on real-time data analysis, methods, and applications in macroeconomics and finance, next October 19th and 20th, 2017, at its central headquarters in Madrid, c/ Alcalá, 48. 

The real-time conference has always been unique and valuable. I'm very happy to see the Bank of Spain confirming and promoting its continued vitality.

More information and call for papers here.

Topics include:

• Nowcasting, forecasting and real-time monitoring of macroeconomic and financial conditions.
• The use of real-time data in policy formulation and analysis.
• New real-time macroeconomic and financial databases.
• Real-time modeling and forecasting aspects of high-frequency financial data.
• Survey data and its use in macro model analysis and evaluation.
• Evaluation of data revisions and real-time forecasts, including point forecasts, probability forecasts, density forecasts, risk assessments and decompositions.

Monday, April 3, 2017

The Latest on the "File Drawer Problem"

The term "file drawer problem" was coined long ago. It refers to the bias in published empirical studies toward "large", or "significant", or "good" estimates. That is, "small"/"insignificant"/"bad" estimates remain unpublished, in file drawers (or, in modern times, on hard drives). Correcting the bias is a tough nut to crack, since little is known about the nature or number of unpublished studies. For the latest, together with references to the relevant earlier literature, see the interesting new NBER working paper, IDENTIFICATION OF AND CORRECTION FOR PUBLICATION BIAS, by Isaiah AndrewsMaximilian Kasy. There's an ungated version and appendix here, and a nice set of slides here.

Abstract: Some empirical results are more likely to be published than others. Such selective publication leads to biased estimators and distorted inference. This paper proposes two approaches for identifying the conditional probability of publication as a function of a study's results, the first based on systematic replication studies and the second based on meta-studies. For known conditional publication probabilities, we propose median-unbiased estimators and associated confidence sets that correct for selective publication. We apply our methods to recent large-scale replication studies in experimental economics and psychology, and to meta-studies of the effects of minimum wages and de-worming programs.
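To make the "known conditional publication probabilities" case concrete, here's a stylized sketch of a median-unbiased correction. Posit a publication rule (my illustrative assumption below: significant z-statistics always published, insignificant ones with probability 0.1) and solve for the theta at which the observed estimate sits at the median of the selection-distorted distribution. A simplified illustration in the spirit of Andrews and Kasy, not their code.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def pub_prob(z):
    """Assumed publication rule: significant results always published,
    insignificant ones with probability 0.1 (an illustrative choice)."""
    return np.where(np.abs(z) > 1.96, 1.0, 0.1)

def conditional_cdf(x, theta, sigma=1.0):
    """P(X <= x | published) when X ~ N(theta, sigma^2) before selection,
    computed on a grid so that pub_prob can be arbitrary."""
    grid = np.linspace(theta - 8 * sigma, theta + 8 * sigma, 20001)
    w = norm.pdf(grid, loc=theta, scale=sigma) * pub_prob(grid / sigma)
    cdf = np.cumsum(w)
    return np.interp(x, grid, cdf / cdf[-1])

def median_unbiased(x_obs, sigma=1.0):
    """Find the theta at which the observed estimate is the median of the
    selection-corrected distribution."""
    f = lambda th: conditional_cdf(x_obs, th, sigma) - 0.5
    return brentq(f, x_obs - 10 * sigma, x_obs + 10 * sigma)

# A just-significant published estimate gets pulled sharply toward zero:
print(median_unbiased(2.0))
```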