Invited keynote speaker at an academic conference
Année
2025
Abstract
The PAC-Bayesian theory provides tools to understand the accuracy of Bayes-inspired algorithms that learn probability distributions on parameters. This theory was initially developed by McAllester about 20 years ago and has been applied successfully to a variety of machine learning algorithms across a range of problems. Recently, it led to tight generalization bounds for deep neural networks, a task that could not be achieved by standard "worst-case" generalization bounds such as Vapnik-Chervonenkis bounds. In the first part, I will provide a brief introduction to PAC-Bayes bounds, explain the core ideas and the main applications, and give an overview of recent research trends. In the second part, I will discuss more theoretical aspects. In particular, I will highlight the application of PAC-Bayes bounds to derive minimax-optimal rates of convergence in classification and in regression, and the connection to mutual-information bounds.
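To make the topic of the talk concrete, one classical form of McAllester's PAC-Bayes bound (stated here as an illustration; the notation for the risk, the empirical risk, the prior, and the posterior is a standard choice, not taken from the abstract) reads as follows. With probability at least $1-\delta$ over the draw of an i.i.d. sample of size $n$, simultaneously for all posteriors $\rho$,

```latex
% R(\theta): true risk, r_n(\theta): empirical risk on n samples
% \pi: prior distribution (data-independent), \rho: posterior distribution
\mathbb{E}_{\theta \sim \rho}\left[ R(\theta) \right]
\;\le\;
\mathbb{E}_{\theta \sim \rho}\left[ r_n(\theta) \right]
\;+\;
\sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log\frac{2\sqrt{n}}{\delta}}{2n}}
```

The Kullback-Leibler term $\mathrm{KL}(\rho \,\|\, \pi)$ measures how far the learned posterior moves from the prior, which is what allows the bound to remain tight for data-dependent distributions, in contrast to worst-case uniform bounds.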
ALQUIER, P. (2025). 2 lectures on PAC-Bayes bounds. In: Heidelberg-Paris Workshop on Mathematical Statistics. Heidelberg.