Conference proceedings
Year
2021
Abstract
We tackle the problem of online optimization with a general, possibly unbounded, loss function. It is well known that when the loss is bounded, the exponentially weighted aggregation strategy (EWA) leads to a regret of order √T after T steps. In this paper, we study a generalized aggregation strategy, where the weights no longer depend exponentially on the losses. Our strategy is based on Follow The Regularized Leader (FTRL): we minimize the expected losses plus a regularizer, which is here a ϕ-divergence. When the regularizer is the Kullback-Leibler divergence, we recover EWA as a special case. Using alternative divergences enables unbounded losses, at the cost of a worse regret bound in some cases.
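For concreteness, here is a minimal sketch of the FTRL update described in the abstract, written with notation that is assumed rather than taken from the abstract: a prior π on the parameter set Θ, a learning rate η > 0, and losses ℓ_s observed at steps s = 1, …, t − 1:

\[
\hat{\rho}_t \in \operatorname*{arg\,min}_{\rho}
\left\{ \mathbb{E}_{\theta \sim \rho}\!\left[ \sum_{s=1}^{t-1} \ell_s(\theta) \right]
+ \frac{1}{\eta}\, D_{\phi}(\rho \,\|\, \pi) \right\}.
\]

When D_ϕ is the Kullback-Leibler divergence, the minimizer has the familiar closed form of EWA (a standard consequence of the Gibbs variational formula):

\[
\hat{\rho}_t(\mathrm{d}\theta) \;\propto\; \exp\!\left( -\eta \sum_{s=1}^{t-1} \ell_s(\theta) \right) \pi(\mathrm{d}\theta).
\]

Other choices of ϕ lead to weights that are not exponential in the cumulative loss, which is what permits unbounded losses.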
ALQUIER, P. (2021). Non-exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions. In: 38th International Conference on Machine Learning (ICML’21). Proceedings of Machine Learning Research.