Convergence Properties of Gibbs Samplers for Bayesian Probit Regression with Proper Priors

Wed., Feb. 15
5:10 pm, FLO 100
Refreshments at 5:00 pm
The Bayesian probit regression model (Albert and Chib, 1993) is widely used for binary regression. While the improper flat prior for the regression coefficients is an appropriate choice in the absence of prior information, a proper normal prior is desirable when prior information is available, or in modern high-dimensional settings where the number of coefficients (p) exceeds the sample size (n). For both choices of prior, the resulting posterior density is intractable, and a Data Augmentation (DA) Markov chain is used to generate approximate samples from the posterior distribution. Establishing geometric ergodicity for this DA Markov chain is important, as it provides theoretical guarantees for constructing standard errors for Markov chain-based estimates of posterior quantities.
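
As an illustrative sketch of the two-step DA algorithm described above (the function name, the prior hyperparameters b0 and B0, and all defaults are our assumptions, not from the talk), the Albert and Chib construction alternates a truncated-normal draw for the latent variables with a multivariate normal draw for the coefficients:

```python
import numpy as np
from scipy.stats import truncnorm

def da_probit_gibbs(y, X, b0, B0, n_iter=5000, seed=None):
    """Hedged sketch of the Albert-Chib DA Gibbs sampler for Bayesian
    probit regression with a proper N(b0, B0) prior on the coefficients."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    B0_inv = np.linalg.inv(B0)
    # The full-conditional covariance of beta does not depend on z,
    # so it can be factored once, outside the loop.
    cov = np.linalg.inv(X.T @ X + B0_inv)
    chol = np.linalg.cholesky(cov)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: z_i | beta, y_i ~ N(x_i' beta, 1), truncated to
        # (0, inf) if y_i = 1 and to (-inf, 0) if y_i = 0.
        mean = X @ beta
        lo = np.where(y == 1, -mean, -np.inf)  # standardized bounds
        hi = np.where(y == 1, np.inf, -mean)
        z = mean + truncnorm.rvs(lo, hi, random_state=rng)
        # Step 2: beta | z ~ N(cov (X'z + B0^{-1} b0), cov).
        m = cov @ (X.T @ z + B0_inv @ b0)
        beta = m + chol @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

For example, draws = da_probit_gibbs(y, X, b0=np.zeros(p), B0=np.eye(p)) runs the chain under a standard normal prior; note that the proper prior keeps the Step 2 full conditional well defined for any X, n, and p.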

In this talk, we first show that, in the case of proper normal priors, the DA Markov chain is geometrically ergodic *for all* choices of the design matrix X, n and p (unlike the improper prior case, where n ≥ p and another condition on X are required for posterior propriety itself). We also derive sufficient conditions under which the DA Markov chain is trace-class, i.e., the eigenvalues of the corresponding operator are summable. In particular, this allows us to conclude that the Haar PX-DA sandwich algorithm (obtained by inserting an inexpensive extra step between the two steps of the DA algorithm) is strictly better than the DA algorithm in an appropriate sense.
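
As a hedged sketch of the sandwich construction mentioned above (the notation below is assumed, not taken from the abstract), write \pi(\beta, z \mid y) for the augmented posterior. The DA transition composes the two conditional draws, while the sandwich transition inserts an extra kernel s that preserves the marginal distribution of z:

```latex
% Assumed notation: \pi(\beta, z \mid y) is the augmented posterior.
\[
  k_{\mathrm{DA}}(\beta' \mid \beta)
    = \int \pi(\beta' \mid z)\, \pi(z \mid \beta)\, dz ,
\]
% The sandwich chain inserts a step s(z' \mid z) that is reversible
% with respect to the marginal \pi(z \mid y):
\[
  k_{\mathrm{S}}(\beta' \mid \beta)
    = \iint \pi(\beta' \mid z')\, s(z' \mid z)\, \pi(z \mid \beta)\, dz\, dz' .
\]
```

In the Haar PX-DA case, s amounts to a single draw of a group element (e.g., a scale factor) acting on z, which is why the extra step is inexpensive; the trace-class property of the DA operator is what makes an eigenvalue-by-eigenvalue comparison with the sandwich chain meaningful.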
