Robust Principal Component Analysis with Side Information

Kai-Yang Chiang, Cho-Jui Hsieh, Inderjit Dhillon

Abstract:   The robust principal component analysis (robust PCA) problem arises in many machine learning applications, where the goal is to decompose the data matrix into a low-rank part plus a sparse residual. While existing approaches consider only the low-rank-plus-sparse structure, in many applications side information about the row and/or column entities may also be available, and it remains unclear to what extent such information can help robust PCA. In this paper, we therefore study the problem of robust PCA with side information, where both the prior structure and entity features are exploited for recovery. We propose a convex formulation that incorporates side information into robust PCA, and show that under certain conditions the low-rank matrix can be exactly recovered by the proposed method. In particular, our guarantee suggests that a substantial class of low-rank matrices that cannot be recovered by standard robust PCA becomes recoverable with our method, which theoretically justifies the effectiveness of features in robust PCA. In addition, we conduct experiments on synthetic data as well as a real-world noisy image classification task, showing that exploiting side information also improves performance in practice.
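To make the underlying decomposition concrete, the following is a minimal sketch of standard robust PCA (Principal Component Pursuit) solved by ADMM, i.e. the baseline the paper extends. This is generic RPCA, not the paper's side-information variant; the parameter defaults (e.g. `lam = 1/sqrt(max(m, n))`) follow common practice and are assumptions, not the authors' settings.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into a low-rank part L and a sparse part S by solving
    min ||L||_* + lam * ||S||_1  subject to  M = L + S  (convex PCP),
    using an inexact augmented Lagrangian / ADMM scheme."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))            # standard PCP weight
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)  # common step-size heuristic
    Y = np.zeros_like(M)                          # dual variable
    S = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # sparse update: elementwise soft thresholding
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                             # constraint residual
        Y += mu * R                               # dual ascent step
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

In the exact-recovery regime (truly low-rank `L` corrupted by sufficiently sparse `S`), this baseline already recovers both components; the paper's contribution is showing that adding row/column features enlarges the set of matrices for which such recovery is guaranteed.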

  • Robust Principal Component Analysis with Side Information (pdf)
    K. Chiang, C. Hsieh, I. Dhillon.
    In International Conference on Machine Learning (ICML), pp. 2291–2299, June 2016.