Metropolis-adjusted Langevin algorithm (MALA)
Continuous time
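MALA discretises, then Metropolis-corrects, a continuous-time diffusion. In one common convention (step-size factors of 2 vary across references; cf. Xifara, Sherlock, Livingstone, et al. (2014)), the object is the overdamped Langevin diffusion targeting a density \(\pi\),

\[
\mathrm{d}X_t = \nabla \log \pi(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t,
\]

whose stationary distribution is \(\pi\) (Grenander and Miller 1994). An explicit Euler–Maruyama step of size \(\tau\) yields the proposal \(x' = x + \tau\,\nabla \log \pi(x) + \sqrt{2\tau}\,\xi\) with \(\xi \sim \mathcal{N}(0, I)\); the Metropolis–Hastings accept/reject step then removes the discretisation bias, so the chain targets \(\pi\) exactly.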
See log-concave distributions for a family of targets where this works especially well, because implicit (and hence more nearly continuous-time-exact) discretisations are available (Hodgkinson, Salomone, and Roosta 2019). A sketch of the standard explicit-proposal version follows.
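Here is a minimal sketch of the explicit-proposal MALA loop (my own illustration, not code from any of the cited papers; the function names, the step size `tau`, and the Gaussian demo target are all assumptions). The implicit algorithms of Hodgkinson, Salomone, and Roosta (2019) would, roughly, replace the explicit Euler–Maruyama proposal below with an implicitly-solved update.

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, n_steps=5000, tau=0.1, rng=None):
    """Explicit-proposal MALA: Euler-Maruyama proposal + Metropolis-Hastings correction."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)

    def log_q(x_to, x_from):
        # Log-density (up to an additive constant) of the Gaussian proposal
        # N(x_from + tau * grad_log_pi(x_from), 2 * tau * I), evaluated at x_to.
        diff = x_to - x_from - tau * grad_log_pi(x_from)
        return -np.sum(diff ** 2) / (4.0 * tau)

    samples = np.empty((n_steps, x.size))
    n_accept = 0
    for t in range(n_steps):
        # Explicit Euler-Maruyama step of  dX = grad log pi(X) dt + sqrt(2) dW.
        x_prop = x + tau * grad_log_pi(x) + np.sqrt(2.0 * tau) * rng.standard_normal(x.size)
        # Metropolis-Hastings acceptance ratio corrects the discretisation bias.
        log_alpha = (log_pi(x_prop) - log_pi(x)
                     + log_q(x, x_prop) - log_q(x_prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x, n_accept = x_prop, n_accept + 1
        samples[t] = x
    return samples, n_accept / n_steps

if __name__ == "__main__":
    # Demo target (an assumption): standard 2-d Gaussian, log pi(x) = -||x||^2 / 2 + const.
    samples, acc = mala(log_pi=lambda x: -0.5 * np.sum(x ** 2),
                        grad_log_pi=lambda x: -x,
                        x0=np.ones(2), rng=0)
    print(f"acceptance rate ~ {acc:.2f}; sample mean ~ {samples[1000:].mean(axis=0)}")
```

In practice \(\tau\) is tuned so the acceptance rate lands somewhere around 0.5–0.6, the regime usually recommended for MALA.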
Left-field: Max Raginsky, in Sampling Using Diffusion Processes, from Langevin to Schrödinger:
the Langevin process gives only approximate samples from \(\mu\). I would like to discuss an alternative approach that uses diffusion processes to obtain exact samples in finite time. This approach is based on ideas that appeared in two papers from the 1930s by Erwin Schrödinger in the context of physics, and is now referred to as the Schrödinger bridge problem.
References
Brosse, Moulines, and Durmus. 2018. “The Promises and Pitfalls of Stochastic Gradient Langevin Dynamics.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS’18.

Garbuno-Inigo, Hoffmann, Li, et al. 2020. “Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems.

Girolami, and Calderhead. 2011. “Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).

Grenander, and Miller. 1994. “Representations of Knowledge in Complex Systems.” Journal of the Royal Statistical Society: Series B (Methodological).

Hodgkinson, Salomone, and Roosta. 2019. “Implicit Langevin Algorithms for Sampling From Log-Concave Densities.” arXiv:1903.12322 [cs, stat].

Jolicoeur-Martineau, Piché-Taillefer, Mitliagkas, et al. 2022. “Adversarial Score Matching and Improved Sampling for Image Generation.”

Liu, Zhuo, Cheng, et al. 2019. “Understanding and Accelerating Particle-Based Variational Inference.” In Proceedings of the 36th International Conference on Machine Learning.

Shang, Zhu, Leimkuhler, et al. 2015. “Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling.” In Advances in Neural Information Processing Systems. NIPS’15.

Song, and Ermon. 2020a. “Generative Modeling by Estimating Gradients of the Data Distribution.” In Advances in Neural Information Processing Systems.

———. 2020b. “Improved Techniques for Training Score-Based Generative Models.” In Advances in Neural Information Processing Systems.

Welling, and Teh. 2011. “Bayesian Learning via Stochastic Gradient Langevin Dynamics.” In Proceedings of the 28th International Conference on Machine Learning. ICML’11.

Xifara, Sherlock, Livingstone, et al. 2014. “Langevin Diffusions and the Metropolis-Adjusted Langevin Algorithm.” Statistics & Probability Letters.

Zhang, Zhang, Carin, et al. 2020. “Stochastic Particle-Optimization Sampling and the Non-Asymptotic Convergence Theory.” In International Conference on Artificial Intelligence and Statistics.