Wednesday, June 7, 2017

Structural Change and Big Data

Recall the tall-wide-dense (T, K, m) Big Data taxonomy.  One might naively assert that tall data (big time dimension, T) are not really a part of the Big Data phenomenon, insofar as T has not started growing more quickly in recent years.  But a more sophisticated perspective on the "size" of T is whether it is big enough to make structural change a potentially serious concern.  And structural change is a serious concern, routinely, in time-series econometrics.  Hence structural change, in a sense, produces Big Data through the T channel.
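To fix ideas, here is a minimal Python sketch (entirely illustrative, with made-up numbers and a hypothetical known break date) of why a long sample makes structural change both detectable and consequential: a modest mean shift that would be invisible in a short sample produces a huge test statistic when T is large.

import numpy as np

rng = np.random.default_rng(0)

# Simulate a long ("tall") series with a modest mean shift halfway through.
T = 2000
y = rng.normal(loc=0.0, scale=1.0, size=T)
y[T // 2:] += 0.3  # a break of 0.3 standard deviations

# Simple split-sample comparison of means at the (here, known) break date.
y1, y2 = y[: T // 2], y[T // 2:]
se = np.sqrt(y1.var(ddof=1) / len(y1) + y2.var(ddof=1) / len(y2))
t_stat = (y2.mean() - y1.mean()) / se
print(f"t statistic for a mean shift at T/2: {t_stat:.2f}")  # roughly 6-7 here

Repeating the exercise with T = 100 typically leaves the same break statistically invisible, which is the sense in which a big T turns structural change from a nuisance into a first-order modeling concern.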

Saturday, May 27, 2017

SoFiE 2017 New York

If you haven't yet been to the Society for Financial Econometrics (SoFiE) annual meeting, now's the time.  They're pulling out all the stops for the 10th anniversary at NYU Stern, June 21-23, 2017.  There will be a good mix of financial econometrics and empirical finance (invited speakers here; full program here). The "pre-conference" will also continue, this year June 20, with presentations by junior scholars (new/recent Ph.D.'s) and discussions by senior scholars. Lots of information here. See you there!

Monday, May 22, 2017

Big Data in Econometric Modeling

Here's a speakers' photo from last week's Penn conference, Big Data in Dynamic Predictive Econometric Modeling.  Click through to find the program, copies of papers and slides, a participant list, and a few more photos.  A good and productive time was had by all!


Monday, May 15, 2017

Statistics in the Computer Age

Efron and Tibshirani's Computer Age Statistical Inference (CASI) is about as good as it gets. Just read it. (Yes, I generally gush about most work in the Efron, Hastie, Tibshirani, Breiman, Friedman, et al. tradition.  But there's good reason for that.)  As with the earlier Hastie-Tibshirani Springer-published blockbusters (e.g., here), the CASI publisher (Cambridge) has allowed ungated posting of the pdf (here).  Hats off to Efron, Tibshirani, Springer, and Cambridge.

Monday, May 8, 2017

Replicating Anomalies

I blogged a few weeks ago on "the file drawer problem".  In that vein, check out the interesting new paper below. I like their term "p-hacking". 

Random thought 1:  
Note that reverse p-hacking can also occur, when an author wants high p-values (that is, insignificance).  In the study below, for example, the deck could be stacked with all sorts of dubious/spurious "anomaly variables" that no one ever took seriously.  Then of course a very large number would wind up with high p-values.  I am not suggesting that the study below is guilty of this; rather, I had simply never thought about reverse p-hacking before, and this paper led me to think of the possibility, so I'm relaying the thought.
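A quick, purely illustrative simulation (not from the paper; the 447 candidates and 480 monthly observations merely echo the paper's scale, and all return numbers are invented) makes both directions concrete: when hundreds of truly spurious candidate anomalies are tested, roughly 5% clear the conventional cutoff by chance, while the overwhelming majority come out with high p-values.

import numpy as np

rng = np.random.default_rng(42)

# Each candidate "anomaly" has zero true mean return; any significance is spurious.
n_anomalies, n_months = 447, 480
returns = rng.normal(loc=0.0, scale=0.05, size=(n_anomalies, n_months))

t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_months))

print("spuriously significant at |t| > 1.96:", int(np.sum(np.abs(t_stats) > 1.96)))
print("spuriously significant at |t| > 3.00:", int(np.sum(np.abs(t_stats) > 3.00)))
print("insignificant at |t| < 1.96:", int(np.sum(np.abs(t_stats) < 1.96)))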

Related random thought 2:  
It would be interesting to compare anomalies published in "top journals" and "non-top journals" to see whether the top journals are more guilty or less guilty of p-hacking.  I can think of competing factors that could tip it either way!

Replicating Anomalies
by Kewei Hou, Chen Xue, Lu Zhang - NBER Working Paper #23394
Abstract:
The anomalies literature is infested with widespread p-hacking. We replicate the entire anomalies literature in finance and accounting by compiling a largest-to-date data library that contains 447 anomaly variables. With microcaps alleviated via New York Stock Exchange breakpoints and value-weighted returns, 286 anomalies (64%) including 95 out of 102 liquidity variables (93%) are insignificant at the conventional 5% level. Imposing the cutoff t-value of three raises the number of insignificance to 380 (85%). Even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Out of the 161, the q-factor model leaves 115 alphas insignificant (150 with t < 3). In all, capital markets are more efficient than previously recognized.  


Sunday, April 30, 2017

One Millionth Birthday...

 ...in event time.  It's true, yesterday No Hesitations passed 1,000,000 page views.  Totally humbling.  I am grateful for your interest and support.

Thursday, April 20, 2017

Automated Time-Series Forecasting at Google

Check out this piece on automated time-series forecasting at Google.  It's a fun and quick read. Several aspects are noteworthy.  

On the upside:

-- Forecast combination features prominently -- they combine forecasts from an ensemble of models (see the sketch at the end of this post).  

-- Uncertainty is acknowledged -- they produce interval forecasts, not just point forecasts.

On the downside:

-- There's little to their approach that wasn't well known and widely used in econometrics a quarter century ago (or more).  Might not something like Autobox, which has been around and evolving since the 1970's, do as well or better?
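Neither ingredient requires exotic machinery.  Here is a minimal Python sketch (all numbers invented, and in no way Google's implementation or code) of equal-weight forecast combination plus interval forecasts built from the empirical quantiles of past combination errors.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical point forecasts of the same target from an ensemble of five models:
# rows = forecast periods, columns = models (all values purely illustrative).
forecasts = rng.normal(loc=100.0, scale=2.0, size=(60, 5))
actuals = rng.normal(loc=100.0, scale=2.0, size=60)

# Equal-weight forecast combination -- the classic, hard-to-beat benchmark.
combined = forecasts.mean(axis=1)

# A simple interval forecast: center on the combined point forecast and set the
# width from empirical quantiles of combination errors (in-sample here, for brevity;
# in practice one would use errors from earlier periods only).
errors = actuals - combined
lower = combined + np.quantile(errors, 0.05)
upper = combined + np.quantile(errors, 0.95)

coverage = np.mean((actuals >= lower) & (actuals <= upper))
print(f"empirical 90% interval coverage: {coverage:.2f}")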

Friday, April 14, 2017

On Pseudo Out-of-Sample Model Selection

Great to see that Hirano and Wright (HW), "Forecasting with Model Uncertainty", finally came out in Econometrica. (Ungated working paper version here.)

HW make two key contributions. First, they characterize rigorously the source of the inefficiency in forecast model selection by pseudo out-of-sample methods (expanding-sample, split-sample, ...), adding invaluable precision to more intuitive discussions like Diebold (2015). (Ungated working paper version here.) Second, and very constructively, they show that certain simulation-based estimators (including bagging) can considerably reduce, if not completely eliminate, the inefficiency.


Abstract: We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in‐sample model selection using the Akaike information criterion; out‐of‐sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak‐predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out‐of‐sample and split‐sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out‐of‐sample and split‐sample schemes perform poorly if implemented in the conventional way. But they perform well, if implemented in conjunction with our risk‐reduction method or bagging.
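To fix ideas, here is a minimal Python sketch of the conventional split-sample scheme that HW analyze (simulated data, a deliberately weak predictor, and none of their simulation-based risk-reduction refinements): choose the model by pseudo out-of-sample squared error on a hold-out block, then re-estimate the chosen model on the full sample.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: y depends weakly on x1; x2 is irrelevant (a "weak predictor" flavor).
T = 200
x1, x2 = rng.normal(size=T), rng.normal(size=T)
y = 0.15 * x1 + rng.normal(size=T)

candidates = {
    "x1 only": np.column_stack([np.ones(T), x1]),
    "x1 and x2": np.column_stack([np.ones(T), x1, x2]),
}

# Conventional split-sample scheme: estimate each model on the first half,
# score it by squared forecast error on the second half, pick the winner,
# then re-estimate the winner on the full sample.
split = T // 2
oos_mse = {}
for name, X in candidates.items():
    beta = np.linalg.lstsq(X[:split], y[:split], rcond=None)[0]
    oos_mse[name] = np.mean((y[split:] - X[split:] @ beta) ** 2)

chosen = min(oos_mse, key=oos_mse.get)
beta_full = np.linalg.lstsq(candidates[chosen], y, rcond=None)[0]
print("chosen model:", chosen, "| full-sample coefficients:", np.round(beta_full, 3))

HW's point, loosely, is that schemes like this are inefficient under their weak-predictor local asymptotics, and that their simulation-based procedure (or bagging) can considerably reduce that inefficiency.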