In concept, it’s a Bayesian prior that carries less information than you actually have. As for the specifics, I don’t really know, because I don’t understand Bayesian statistics well enough to say. That’s one of the areas I’m trying to learn on my own right now (my department doesn’t do Bayesian stuff except a little bit of empirical Bayes for HLM, so far as I can tell).

**Digression: Bayesian vs. classical statistics**

For readers not familiar at all with Bayesian statistics, it’s a different philosophical and mathematical approach to dealing with uncertainty about the world. In classical, or frequentist, statistics, the statements we make about the world (such as: the mean height of women in the U.S. is 5’6″ +/- 1″ with 95% confidence) are statements about the long run. In the long run, a 95% confidence interval generated through classical statistics will cover the actual value of the parameter of interest 95% of the time.
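That long-run claim is easy to check by simulation. Here’s a minimal sketch (the true mean, standard deviation, and sample size are made up for illustration): build many 95% intervals from fresh samples and count how often they cover the truth.

```python
import numpy as np

# Hypothetical setup: true mean 66 inches (5'6"), sd 2.5, samples of n = 100.
rng = np.random.default_rng(0)
true_mean, sd, n, trials = 66.0, 2.5, 100, 10_000

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sd, n)
    m = sample.mean()
    # Classical 95% interval: estimate +/- 1.96 standard errors.
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    if m - half_width <= true_mean <= m + half_width:
        covered += 1

print(f"Coverage over {trials} intervals: {covered / trials:.3f}")  # ≈ 0.95
```

Note that no single interval gets a probability statement in the frequentist view; the 95% describes the procedure, not any one interval.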

Bayesian statistics, on the other hand, directly characterizes uncertainty. It conforms better to our intuitive notions of probability. I’m writing a blog post about it right now, so I can understand it better, but the gist, so far as I understand it, is that with Bayesian statistics, we can incorporate our prior notions about the situation under study into the data analysis process, and then compute a posterior distribution that quantifies our uncertainty given both the prior information and the new information (the data). Bayesian approaches are sometimes seen as too subjective. Also, the fact that Bayesian computations often require simulation to complete makes statisticians comfortable with analytical, closed-form approaches distinctly uncomfortable.
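To make the prior-plus-data-gives-posterior idea concrete, here’s the textbook conjugate case, where the posterior does have a closed form (the coin-flipping numbers are made up for illustration):

```python
# Conjugate Beta-binomial update: prior beliefs + data -> posterior.
# Hypothetical example: estimating a coin's probability of heads.
# Prior Beta(5, 5) says "roughly fair, worth about 10 flips of information".
a, b = 5, 5

# New data: 70 heads in 100 flips.
heads, flips = 70, 100

# The posterior is Beta(a + heads, b + tails) -- the entire update in one step.
a_post, b_post = a + heads, b + (flips - heads)

prior_mean = a / (a + b)
posterior_mean = a_post / (a_post + b_post)
print(f"prior mean:     {prior_mean:.3f}")      # 0.500
print(f"posterior mean: {posterior_mean:.3f}")  # 0.682
```

The posterior mean (75/110) sits between the prior mean and the raw data proportion, pulled toward the data because there are many more flips than prior “pseudo-flips.” Most realistic models don’t have this kind of closed form, which is why simulation methods like MCMC come in.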

**Back to weakly informative priors**

I think this area — Bayesian statistics with the use of weakly informative priors — might offer a research direction. This is from a posting for two postdoctoral positions on Andrew Gelman’s blog:

> The project addresses modeling and computational issues that stand in the way of fitting larger, more realistic models of variation in social science data, in particular the problem that (restricted) maximum likelihood estimation often yields estimates on the boundary, such as zero variances or perfect correlations. The proposed solution is to specify weakly informative prior distributions – that is, prior distributions that will affect inferences only when the data provide little information about the parameters. Existing software for maximum likelihood estimation can then be modified to perform Bayes posterior modal estimation, and these ideas can also be used in fully Bayesian computation. In either case, the goal is to be able to fit and understand more complex, nuanced models.
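The boundary problem the posting describes can be shown in a toy version of a multilevel model. Below is a sketch under made-up assumptions: a few group estimates with known standard errors, where the group means happen to sit closer together than their standard errors would predict, so maximum likelihood drives the between-group standard deviation tau to zero. Adding a weakly informative prior on tau (here a Gamma(2, rate) density, one of the forms Gelman and colleagues have suggested; the rate value is arbitrary) and taking the posterior mode pulls the estimate off the boundary.

```python
import numpy as np

# Hypothetical one-way model: J group estimates y_j with known standard errors
# s_j, where y_j ~ Normal(mu, s_j^2 + tau^2). The data are chosen so the group
# means are tighter than the standard errors -- the case where ML hits tau = 0.
y = np.array([1.2, 0.9, 1.1, 1.0, 0.8])
s = np.array([1.0, 1.0, 1.0, 1.0, 1.0])

def log_lik(tau):
    """Profile log-likelihood of tau, with mu profiled out (weighted mean)."""
    v = s**2 + tau**2
    mu_hat = np.sum(y / v) / np.sum(1.0 / v)
    return -0.5 * np.sum(np.log(v) + (y - mu_hat) ** 2 / v)

def log_prior(tau, rate=0.1):
    """Gamma(2, rate) prior on tau: log density is log(tau) - rate*tau + const.
    It vanishes at tau = 0, so the posterior mode cannot sit on the boundary,
    but it is nearly flat wherever the data carry real information."""
    return np.log(tau) - rate * tau

# Grid search is crude but keeps the example dependency-free.
taus = np.linspace(1e-6, 3.0, 10_000)
mle = taus[np.argmax([log_lik(t) for t in taus])]
pmode = taus[np.argmax([log_lik(t) + log_prior(t) for t in taus])]

print(f"ML estimate of tau:    {mle:.3f}")    # collapses to the boundary
print(f"Posterior mode of tau: {pmode:.3f}")  # pulled off the boundary
```

The prior barely changes anything when the data are informative about tau; it only steps in, as the posting says, “when the data provide little information about the parameters.”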

I understand most of this in concept, if not in specifics. I’m taking HLM right now and we may cover a little bit of Bayesian stuff, but I think I will be mostly on my own trying to learn it.

Here’s Gelman’s presentation on weakly informative priors (PDF).