QA345 : Component-Wise Markov Chain Monte Carlo: Uniform and Geometric Ergodicity under Mixing and Composition
Thesis > Central Library of Shahrood University > Mathematical Sciences > MSc > 2016
Authors:
Fatemeh Khodabakhshi Palandi [Author], Negar Eghbal [Supervisor], Hossein Baghishani [Advisor]
Abstract: Markov chain Monte Carlo (MCMC) methods are popular sampling methods for approximating the posterior distribution of complex, high-dimensional models. When the dimension of the posterior distribution is high and/or its components are weakly correlated, it is common practice to update the simulation one variable (or sub-block of variables) at a time rather than to conduct a single full-dimensional update; this is called component-wise updating. Gibbs sampling and Metropolis-Hastings-within-Gibbs algorithms, for example, are component-wise algorithms. There are different strategies for combining component-wise updates, such as composition (updating every component in a fixed order) and random scan (updating a randomly chosen component at each step). While these strategies can ease MCMC implementation and produce empirical performance superior to full-dimensional updates, the theoretical convergence properties of the associated Markov chains have received limited attention. We review conditions under which some component-wise Markov chains converge to the stationary distribution at a geometric rate, with particular attention to the connections between the convergence rates of the various component-wise strategies. We assess the theoretical results of the thesis with two simulation examples: a hierarchical linear mixed model and maximum likelihood estimation for mixed models.
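The two combining strategies named in the abstract, composition (a deterministic sweep over all components) and random scan (one randomly chosen component per step), can be illustrated with a minimal sketch. The code below is not from the thesis; it is an assumed example of a component-wise Metropolis-within-Gibbs sampler for a correlated bivariate normal target, with the correlation RHO and proposal step size chosen arbitrarily for illustration.

```python
import math
import random

RHO = 0.8  # assumed correlation of the bivariate normal target

def log_target(x, y):
    # Unnormalized log-density of N(0, [[1, RHO], [RHO, 1]])
    return -(x * x - 2 * RHO * x * y + y * y) / (2 * (1 - RHO * RHO))

def mh_update(state, i, step=1.0):
    # One Metropolis update of component i, holding the other component fixed
    prop = list(state)
    prop[i] += random.gauss(0.0, step)
    log_alpha = log_target(*prop) - log_target(*state)
    if random.random() < math.exp(min(0.0, log_alpha)):
        return tuple(prop)
    return state

def sample(n, scan="composition", seed=0):
    random.seed(seed)
    state = (0.0, 0.0)
    draws = []
    for _ in range(n):
        if scan == "composition":
            # composition: update every component in a fixed order
            for i in (0, 1):
                state = mh_update(state, i)
        else:
            # random scan: update one randomly chosen component
            state = mh_update(state, random.randrange(2))
        draws.append(state)
    return draws

draws = sample(5000)
mean_x = sum(d[0] for d in draws) / len(draws)
print(round(mean_x, 2))  # sample mean of the first component, near 0
```

Both strategies leave the target distribution invariant; the thesis's subject is how their geometric convergence rates relate to one another.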
Keywords:
#Geometric ergodicity #uniform ergodicity #Markov chain #Monte Carlo #Gibbs sampler #Metropolis-within-Gibbs #random scan #convergence rate
Keeping place: Central Library of Shahrood University