Compressed sensing deals with the problem of recovering a vector in a high-dimensional space from a lower-dimensional measurement, under the assumption that the vector is sparse (that is, it has relatively few non-zero coordinates). One of the best-known techniques to achieve such recovery is $\ell_p$-norm minimization, and its properties are related to the geometry of the Banach spaces involved: theorems by Kashin-Temlyakov and Foucart-Pajor-Rauhut-Ullrich relate the stability of sparse vector recovery via $\ell_p$-minimization to the so-called Gelfand numbers of identity maps between finite-dimensional $\ell_p$-spaces.
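Concretely, the standard recovery program takes the following form (here $A$ denotes the $m \times N$ measurement matrix with $m < N$, and $x_0$ the unknown sparse vector; this is the usual formulation, with $p = 1$ corresponding to the convex basis pursuit program):
$$\min_{x \in \mathbb{R}^N} \|x\|_p \quad \text{subject to} \quad Ax = Ax_0.$$
For $0 < p \le 1$ the $\ell_p$-norm promotes sparsity of the minimizer.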
In many practical situations the space of unknown vectors in fact has a matrix structure, a good example being the famous matrix completion problem (also known as the Netflix problem), where the unknown is a matrix and we are given a subset of its entries. In this case sparsity gets replaced by the more natural condition of having low rank, and the last few years have witnessed an explosion of work in this area. In this talk we present matrix analogues of the aforementioned results, relating the stability of low-rank matrix recovery via Schatten $p$-minimization to the Gelfand numbers of identity maps between finite-dimensional Schatten $p$-spaces.
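The matrix program is the direct analogue of the vector one: writing $\sigma(X)$ for the vector of singular values of $X$ and $\mathcal{A}$ for a linear measurement map, Schatten $p$-minimization reads
$$\min_{X} \|X\|_{S_p} \quad \text{subject to} \quad \mathcal{A}(X) = \mathcal{A}(X_0), \qquad \|X\|_{S_p} := \|\sigma(X)\|_p.$$
For $p = 1$ this is nuclear norm minimization, the standard convex surrogate for rank, just as the $\ell_1$-norm is for sparsity.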