Tight convex relaxations for sparse matrix factorization | 05 NOV 2015 | 04:00pm - 04:45pm
In this talk, I will consider statistical learning formulations in which the parameter is a matrix that is the sum of a small number of sparse rank-one (non-orthogonal) factors. Such formulations are relevant, e.g., for social network modelling, subspace clustering, sparse bilinear regression, and multi-factor sparse principal component analysis (PCA).
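To make the model concrete, here is a minimal NumPy sketch (an illustration of mine, not code from the talk) that builds a matrix as a sum of a few sparse rank-one factors; the dimensions, sparsity level, and number of factors are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_rank_one_sum(d, k, r):
    """Build a d x d matrix as the sum of r sparse rank-one factors.

    Each factor is an outer product a_i b_i^T where a_i and b_i have
    only k nonzero entries; the supports of different factors need not
    be orthogonal or even disjoint.
    """
    M = np.zeros((d, d))
    for _ in range(r):
        a = np.zeros(d)
        b = np.zeros(d)
        # choose a random support of size k for each sparse factor
        a[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
        b[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
        M += np.outer(a, b)
    return M

M = sparse_rank_one_sum(d=50, k=5, r=3)
```

The resulting matrix has rank at most r, while the individual factors remain sparse; recovering the factors from M (or from noisy observations of it) is the learning problem the talk addresses.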
Under the assumption that the sparsity level of the factors is fixed and known, I will present a matrix norm which provides a tight, albeit NP-hard to compute, convex relaxation of the learning problem.
I will discuss the sample complexity of learning the matrix in the rank-one case, and show that this computationally more expensive convex relaxation improves the sample complexity by an order of magnitude compared with the convex regularizers typically considered, such as combinations of the \(\ell_1\)-norm and the trace norm.
Time permitting, I will describe an algorithm that relies on a rank-one sparse PCA oracle to solve the convex problems considered, and illustrate that, in practice, when state-of-the-art heuristic algorithms for rank-one sparse PCA are used as surrogates for the oracle, our algorithm outperforms existing methods.
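One well-known heuristic for the rank-one sparse PCA subproblem is truncated power iteration; the sketch below (my own illustration, not necessarily the surrogate used in the talk) finds a unit vector with at most k nonzeros that approximately maximizes \(x^\top S x\):

```python
import numpy as np

def truncated_power_method(S, k, n_iter=100):
    """Heuristic rank-one sparse PCA for a symmetric matrix S:
    power iteration with hard thresholding to the k largest-magnitude
    coordinates after each multiplication."""
    d = S.shape[0]
    x = np.ones(d) / np.sqrt(d)  # dense, uniform starting point
    for _ in range(n_iter):
        y = S @ x
        # keep only the k largest-magnitude coordinates of y
        y[np.argsort(np.abs(y))[:-k]] = 0.0
        norm = np.linalg.norm(y)
        if norm == 0.0:  # degenerate case: stop at the current iterate
            break
        x = y / norm
    return x
```

This is a local heuristic with no general optimality guarantee, which is exactly why it acts as a surrogate for the (intractable) exact oracle rather than a replacement for it.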
(Joint work with Emile Richard and Jean-Philippe Vert.)