Year
2024
Abstract
The PAC-Bayesian theory provides tools to understand the accuracy of Bayes-inspired algorithms that learn probability distributions on parameters. This theory was initially developed by McAllester about 20 years ago and has since been applied successfully to a wide range of machine learning algorithms and problems. Recently, it led to tight generalization bounds for deep neural networks, a feat that standard “worst-case” generalization bounds, such as Vapnik-Chervonenkis bounds, could not achieve. In this talk, I will provide a brief introduction to PAC-Bayes bounds and explain the core ideas of the theory. I will also give an overview of recent research directions. In particular, I will highlight the application of PAC-Bayes bounds to derive minimax-optimal rates of convergence in classification and in regression, and the connection to mutual-information bounds.
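For reference, a classical McAllester-type PAC-Bayes bound (as stated in tutorial treatments; exact constants vary across versions) reads as follows. Let R and r denote the true and empirical risks on an i.i.d. sample of size n, let pi be a prior on the parameter space, and let delta be in (0,1); then with probability at least 1 - delta, simultaneously for all posteriors rho,

\[
\mathbb{E}_{\theta \sim \rho}\!\left[ R(\theta) \right]
\;\le\;
\mathbb{E}_{\theta \sim \rho}\!\left[ r(\theta) \right]
+ \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log\!\left( 2\sqrt{n}/\delta \right)}{2n}}.
\]

The KL term quantifies the price of moving the posterior rho away from the prior pi, which is the mechanism behind the data-dependent bounds discussed in the talk.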
ALQUIER, P. (2024). PAC-Bayes bounds: understanding the generalization of Bayesian learning algorithms. In: Stochastics Seminar, Department of Mathematics, NUS, Singapore.