Year
2018
Abstract
PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution ρ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution π. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as hostile data). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csiszár's f-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
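For context only, and not a result stated in the paper: under the classical assumptions the abstract refers to (i.i.d. observations and a loss bounded in [0,1]), a standard McAllester-type PAC-Bayesian bound reads, with probability at least 1 − δ over the sample and simultaneously for all aggregation distributions ρ,

\[
\mathbb{E}_{\theta\sim\rho}\, R(\theta) \;\le\; \mathbb{E}_{\theta\sim\rho}\, r_n(\theta) \;+\; \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
\]

where R denotes the out-of-sample risk, r_n the empirical risk over n observations, and δ ∈ (0,1) a confidence level. The paper establishes bounds of a related form when boundedness and independence fail, with the Kullback-Leibler term replaced by an f-divergence.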
ALQUIER, P. and GUEDJ, B. (2018). Simpler PAC-Bayesian bounds for hostile data. Machine Learning, 107(5), pp. 887-902.