Conference proceedings
Year
2017
Abstract
We consider the problem of transfer learning in an online setting. Different tasks are presented sequentially and processed by a within-task algorithm. We propose a lifelong learning strategy which refines the underlying data representation used by the within-task algorithm, thereby transferring information from one task to the next. We show that when the within-task algorithm comes with some regret bound, our strategy inherits this good property. Our bounds are in expectation for a general loss function, and uniform for a convex loss. We discuss applications to dictionary learning and to finite sets of predictors.
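To make the two-level structure described in the abstract concrete, here is a minimal sketch of the sequential setup: tasks arrive one after another, a within-task online learner processes each task using a shared representation, and a meta-level step refines that representation before the next task. The specifics are assumptions for illustration only, not the authors' algorithm: a linear representation (dictionary) D, squared loss, and gradient steps at both levels; the function names, step sizes, and synthetic data are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 3, 50                              # ambient dim, representation dim, examples per task
D_true = rng.normal(size=(d, k)) / np.sqrt(d)    # hidden representation generating the tasks (synthetic)
D = rng.normal(size=(d, k)) / np.sqrt(d)         # learned shared representation

def within_task(D, stream, lr=0.1):
    # Within-task algorithm: online gradient descent on task-specific weights w,
    # with examples encoded through the current shared representation D.
    w = np.zeros(D.shape[1])
    losses = []
    for x, y in stream:
        z = D.T @ x                  # encode the example with the shared representation
        err = z @ w - y
        losses.append(0.5 * err ** 2)
        w -= lr * err * z            # within-task update
    return w, losses

def meta_update(D, stream, w, meta_lr=0.02):
    # Meta-level step: one gradient update of the representation after the task ends.
    # This is an illustrative stand-in for the paper's lifelong learning strategy.
    grad = np.zeros_like(D)
    for x, y in stream:
        err = (D.T @ x) @ w - y
        grad += err * np.outer(x, w)
    return D - meta_lr * grad / len(stream)

for t in range(20):                              # tasks presented sequentially
    w_t = rng.normal(size=k)                     # task-specific parameters (synthetic)
    X = rng.normal(size=(n, d))
    y = X @ D_true @ w_t + 0.1 * rng.normal(size=n)
    stream = list(zip(X, y))
    w_hat, losses = within_task(D, stream)       # run the within-task algorithm
    D = meta_update(D, stream, w_hat)            # transfer: refine the representation for the next task
    print(f"task {t:2d}  mean within-task loss: {np.mean(losses):.3f}")

Under these assumptions, the average within-task loss tends to decrease across tasks as the shared representation adapts, which is the kind of transfer effect the paper's regret bounds quantify.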
ALQUIER, P., MAI, T.T. and PONTIL, M. (2017). Regret Bounds for Lifelong Learning. In: 20th International Conference on Artificial Intelligence and Statistics (AIStat'17). Proceedings of Machine Learning Research.