Monte Carlo sampling: convergence, localization transition and optimality

Among random sampling methods, Markov Chain Monte Carlo algorithms are
foremost. Using a combination of analytical and numerical approaches,
we study their convergence properties towards the steady state. We
show that the deviation from the target steady-state distribution
features a localization transition as a function of the characteristic
length of the attempted jumps defining the random walk in a Metropolis
scheme. This transition drastically changes the error introduced by
incomplete convergence. Remarkably, the localization
transition occurs for parameters that also provide the optimal Monte
Carlo convergence speed. We show that the relaxation of the error in
the localized regime has some similarities with relaxation in a greedy
Monte Carlo algorithm where jumps always occur to lower-energy sites
(zero-temperature limit). In both cases, relaxation is described by a
self-similar ansatz instead of being dominated by the eigenmode with
the slowest relaxation rate as in diffusion problems.
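To make the setup concrete, here is a minimal Python sketch of a Metropolis random walk whose proposals have a tunable characteristic jump length, the parameter whose value controls the transition discussed above. The quadratic potential, parameter names, and values below are illustrative assumptions, not taken from the paper; the greedy (zero-temperature) variant corresponds to the limit where only energy-lowering jumps are accepted.

```python
import math
import random

def metropolis_walk(energy, x0, jump_length, n_steps, beta=1.0, seed=42):
    """Metropolis scheme: propose uniform jumps of characteristic size
    `jump_length`, accept with probability min(1, exp(-beta * dE)).
    In the beta -> infinity limit only dE <= 0 moves are accepted,
    which is the greedy (zero-temperature) algorithm."""
    rng = random.Random(seed)
    x = x0
    samples = []
    n_accepted = 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-jump_length, jump_length)
        d_e = energy(x_new) - energy(x)
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            x = x_new
            n_accepted += 1
        samples.append(x)
    return samples, n_accepted / n_steps

# Illustrative target: harmonic well, i.e. a standard Gaussian
# steady state exp(-x^2 / 2) at beta = 1.
harmonic = lambda x: 0.5 * x * x
samples, acc_rate = metropolis_walk(harmonic, x0=0.0,
                                    jump_length=2.0, n_steps=200_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Varying `jump_length` trades off acceptance rate against step size; the abstract's result concerns how the convergence error behaves around the jump length that optimizes this trade-off.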
