The task of reconstructing a matrix from a sample of its observed entries is known
as the matrix completion problem. It arises in a wide range of applications, including
recommender systems, collaborative filtering, dimensionality reduction,
image processing, quantum physics and multi-class classification, to name a few.
Most works have focused on recovering an unknown real-valued low-rank matrix
from a random subsample of its entries. Here, we investigate the case where
the observations take a finite number of values, corresponding for example to
ratings in recommender systems or labels in multi-class classification. We also
consider a general sampling scheme (not necessarily uniform) over the matrix
entries. The performance of a nuclear-norm penalized estimator is analyzed theoretically.
More precisely, we derive bounds for the Kullback-Leibler divergence
between the true and estimated distributions. In practice, we propose
an efficient algorithm based on lifted coordinate gradient descent to tackle
potentially high-dimensional settings.
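For concreteness, the estimator can be written as a penalized maximum likelihood problem; the notation below ($\Omega$, $f$, $\gamma$, $\lambda$) is a sketch chosen here for illustration rather than a verbatim excerpt from the paper:
\[
  \hat X \in \operatorname*{arg\,min}_{\|X\|_\infty \le \gamma} \;
  -\frac{1}{|\Omega|} \sum_{(i,j) \in \Omega} \log f\big(Y_{i,j} \mid X_{i,j}\big)
  \; + \; \lambda \, \|X\|_{*},
\]
where $\Omega$ indexes the observed entries, $f(\cdot \mid x)$ is the link distribution (for instance the logit model in the binary case) and $\|X\|_{*}$ is the nuclear norm. The Kullback-Leibler divergence appearing in the bounds is then the entrywise average
\[
  \mathrm{KL}\big(\bar X, \hat X\big)
  = \frac{1}{m_1 m_2} \sum_{i=1}^{m_1} \sum_{j=1}^{m_2}
    \mathrm{KL}\Big( f(\cdot \mid \bar X_{i,j}) \,\big\|\, f(\cdot \mid \hat X_{i,j}) \Big),
\]
with $\bar X$ the true $m_1 \times m_2$ parameter matrix.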
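As a complement, here is a minimal, hypothetical Python sketch of the lifted coordinate idea: each iteration adds the rank-one atom given by the top singular vector pair of the gradient. For simplicity it is written as a Frank-Wolfe iteration on the constrained form $\|X\|_* \le \tau$ (binary logit case) rather than the penalized form analyzed in the paper; all names (lifted_cgd_sketch, tau, ...) are illustrative, not the authors' implementation.

import numpy as np
from scipy.sparse.linalg import svds

def lifted_cgd_sketch(Y, mask, tau=50.0, n_iters=100):
    """Frank-Wolfe-style sketch: minimize the averaged logistic loss
    over the nuclear-norm ball {X : ||X||_* <= tau}. Y has entries in
    {-1, +1}; mask is a boolean array marking the observed positions."""
    m1, m2 = Y.shape
    X = np.zeros((m1, m2))
    n_obs = mask.sum()
    for t in range(n_iters):
        # Gradient of the averaged logistic negative log-likelihood,
        # supported on the observed entries only.
        G = np.zeros((m1, m2))
        G[mask] = -Y[mask] / (1.0 + np.exp(Y[mask] * X[mask])) / n_obs
        # "Lifted coordinate": the best rank-one atom, obtained from
        # the top singular vector pair of the (negated) gradient.
        u, s, vt = svds(G, k=1)
        S = -tau * np.outer(u[:, 0], vt[0, :])
        gamma = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
        X = (1.0 - gamma) * X + gamma * S
    return X

# Toy usage on a synthetic low-rank logit model.
rng = np.random.default_rng(0)
m1, m2, r = 30, 20, 2
X_true = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))
P = 1.0 / (1.0 + np.exp(-X_true))          # logit link
Y = np.where(rng.random((m1, m2)) < P, 1.0, -1.0)
mask = rng.random((m1, m2)) < 0.5          # uniform here; need not be
X_hat = lifted_cgd_sketch(Y, mask)

The appeal of this scheme in high-dimensional settings is that each iteration touches only a rank-one update and one leading singular pair, avoiding any full singular value decomposition of the iterate.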
KLOPP, O., LAFOND, J., MOULINES, E. and SALMON, J. (2014). Probabilistic low-rank matrix completion on finite alphabets. In: NIPS. Montréal: Neural Information Processing Systems.