## Tuesday, July 25, 2017

### Time-Series Regression Discontinuity

I'll have something to say in next week's post. Meanwhile check out the interesting new paper, "Regression Discontinuity in Time: Considerations for Empirical Applications", by Catherine Hausman and David S. Rapson, NBER Working Paper No. 23602, July 2017. (Ungated version here.)

## Sunday, July 23, 2017

### On the Origin of "Frequentist" Statistics

Efron and Hastie note that the "frequentist" term "seems to have been suggested by Neyman as a statistical analogue of Richard von Mises' frequentist theory of probability, the connection being made explicit in his 1977 paper, 'Frequentist Probability and Frequentist Statistics'". It strikes me that I may have always subconsciously assumed that the term originated with one or another Bayesian, in an attempt to steer toward something more neutral than "classical", which could be interpreted as "canonical" or "foundational" or "the first and best". Quite fascinating that the ultimate "classical" statistician, Neyman, seems to have initiated the switch to "frequentist".

## Sunday, July 9, 2017

### On the Identification of Network Connectedness

I want to clarify an aspect of the Diebold-Yilmaz framework (e.g., here or here). It is simply a method for summarizing and visualizing dynamic network connectedness, based on a variance decomposition matrix. The variance decomposition is not a part of our technology; rather, it is the key *input* to our technology. Calculation of a variance decomposition of course requires an identified model. We have nothing new to say about that; numerous models/identifications have appeared over the years, and it's your choice (but you will of course have to defend your choice).

For certain reasons (e.g., comparatively easy extension to high dimensions), Yilmaz and I generally use a vector-autoregressive model and Koop-Pesaran-Shin "generalized identification". Again, however, if you don't find that appealing, you can use whatever model and identification scheme you want. As long as you can supply a credible / defensible variance decomposition matrix, the network summarization / visualization technology can then take over.
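To make the "variance decomposition as input" point concrete, here is a minimal numpy sketch of the pipeline: fit a VAR, compute a Koop-Pesaran-Shin generalized forecast-error variance decomposition, row-normalize it, and read off the Diebold-Yilmaz total connectedness. This is my own illustrative code, not the authors'; the three-variable VAR(1), simulated data, and 10-step horizon are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: simulate a small 3-variable VAR(1).
N, T, H = 3, 500, 10
Phi_true = np.array([[0.5, 0.2, 0.0],
                     [0.1, 0.4, 0.1],
                     [0.0, 0.2, 0.3]])
y = np.zeros((T, N))
for t in range(1, T):
    y[t] = y[t - 1] @ Phi_true.T + rng.normal(size=N)

# OLS estimation of the VAR(1): y_t = Phi y_{t-1} + eps_t (intercept omitted for brevity).
X, Y = y[:-1], y[1:]
Phi = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ Phi.T
Sigma = resid.T @ resid / (T - 1 - N)

# Moving-average coefficients A_h = Phi^h for a VAR(1).
A = [np.linalg.matrix_power(Phi, h) for h in range(H)]

# Koop-Pesaran-Shin generalized FEVD at horizon H:
# theta[i, j] = share of i's H-step forecast-error variance due to shocks to j.
theta = np.zeros((N, N))
for i in range(N):
    denom = sum(A[h][i] @ Sigma @ A[h][i] for h in range(H))
    for j in range(N):
        num = sum((A[h][i] @ Sigma[:, j]) ** 2 for h in range(H)) / Sigma[j, j]
        theta[i, j] = num / denom

# Row-normalize: generalized FEVD rows need not sum to one.
D = theta / theta.sum(axis=1, keepdims=True)

# Diebold-Yilmaz total connectedness: average share of forecast-error
# variance coming from shocks to *other* variables, in percent.
total_connectedness = 100 * (D.sum() - np.trace(D)) / N
print(np.round(D, 3))
print(round(total_connectedness, 1))
```

Note that only the last few lines are the connectedness "technology" proper; everything above them is one particular way of producing the variance decomposition matrix `D`, and any other identified model could supply it instead.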

## Monday, July 3, 2017

### Bayes, Jeffreys, MCMC, Statistics, and Econometrics

In Ch. 3 of their brilliant book, Efron and Hastie (EH) assert that:

> Jeffreys’ brand of Bayesianism [i.e., "uninformative" Jeffreys priors] had a dubious reputation among Bayesians in the period 1950-1990, with preference going to subjective analysis of the type advocated by Savage and de Finetti. The introduction of Markov chain Monte Carlo methodology was the kind of technological innovation that changes philosophies. MCMC ... being very well suited to Jeffreys-style analysis of Big Data problems, moved Bayesian statistics out of the textbooks and into the world of computer-age applications.

Interestingly, the situation in econometrics strikes me as rather the opposite. Pre-MCMC, much of the leading work emphasized Jeffreys priors (RIP Arnold Zellner), whereas post-MCMC I see uniform at best (still hardly uninformative, as is well known and as noted by EH), and often Gaussian or Wishart or whatever. MCMC of course still came to dominate modern Bayesian econometrics, but for a different reason: It facilitates calculation of the *marginal* posteriors of interest, in contrast to the *conditional* posteriors of old-style analytical calculations. (In an obvious notation, and for an obvious normal-gamma regression problem, for example, one wants posterior(beta), not posterior(beta | sigma).) So MCMC has moved us toward marginal posteriors, but moved us away from uninformative priors.
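The marginal-versus-conditional point can be seen in a few lines of code. Below is a minimal Gibbs sampler for a normal linear regression, a sketch under assumed priors (flat on beta, Jeffreys-style p(sigma^2) proportional to 1/sigma^2); the data and sample sizes are hypothetical. Each step draws from a *conditional* posterior, but the retained beta draws are from the *marginal* posterior p(beta | y), since sigma^2 is integrated out by the sampling itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: y = X beta + eps, eps ~ N(0, sigma^2).
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 1.5 * rng.normal(size=n)

# Full conditionals under flat prior on beta and p(sigma^2) ∝ 1/sigma^2:
#   beta    | sigma^2, y ~ N(beta_hat, sigma^2 (X'X)^{-1})
#   sigma^2 | beta,    y ~ Inverse-Gamma(n/2, SSR(beta)/2)
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
chol = np.linalg.cholesky(XtX_inv)

draws, burn, keep = 3000, 500, []
sigma2 = 1.0
for s in range(draws):
    beta = beta_hat + np.sqrt(sigma2) * chol @ rng.normal(size=2)
    ssr = np.sum((y - X @ beta) ** 2)
    # 1/sigma^2 ~ Gamma(shape=n/2, scale=2/ssr), so:
    sigma2 = ssr / (2 * rng.gamma(n / 2))
    if s >= burn:
        keep.append(beta)

# The retained draws approximate the marginal posterior p(beta | y),
# not the conditional posterior p(beta | sigma^2, y) at any fixed sigma^2.
beta_draws = np.array(keep)
print(np.round(beta_draws.mean(axis=0), 2))
```

Nothing here requires an uninformative prior: swap in a Gaussian prior for beta and the conditional for beta changes, but the logic, and the marginalization, is the same.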