Many applications of machine learning involve the analysis of large data frames – matrices collecting heterogeneous measurements (binary, numerical, counts, etc.) across samples – with missing values. Low-rank models are popular in this framework for tasks such as visualization, clustering and missing value imputation. Yet, available methods with statistical guarantees and efficient optimization do not allow explicit modeling of main additive effects, such as row, column, or covariate effects. In this paper, we introduce a low-rank interaction and sparse additive effects (LORIS) model which combines matrix regression on a dictionary and a low-rank design to estimate main effects and interactions simultaneously. We provide statistical guarantees in the form of upper bounds on the estimation error of both components. Then, we introduce a mixed coordinate gradient descent (MCGD) method which provably converges sub-linearly to an optimal solution and is computationally efficient for large-scale data sets. We show on simulated and survey data that the method has a clear advantage over current practices.
ROBIN, G., WAI, H.-T., JOSSE, J., KLOPP, O. and MOULINES, É. (2018). Low-Rank Interactions and Sparse Additive Effects Model for Large Data Frames. In: Advances in Neural Information Processing Systems 31 (NIPS 2018).