Eric Evert

Natural generalizations of convex sets and matrix decompositions with surprising consequences 

Abstract: This talk discusses generalizations of convex sets to dimension free convex sets, or free convex sets for short, and of matrix decompositions to tensor decompositions. Moreover, we investigate the many surprising phenomena that occur in these settings.

Free convex sets extend classical convex sets to include $g$-tuples of symmetric $n \times n$ matrices $\mathcal{X}=(\mathbf{X}_1,\mathbf{X}_2,\dots,\mathbf{X}_g)$ of all sizes $n$. For example, the unit disk in $\mathbb{R}^2$ can be generalized to the dimension free setting by considering all pairs of symmetric matrices $\mathbf{X}_1$ and $\mathbf{X}_2$ such that $\mathbf{I}-\mathbf{X}_1^2-\mathbf{X}_2^2$ is positive semidefinite. The resulting set contains pairs of matrices of all sizes and is closed under a rich class of convex combinations called free convex combinations. An exciting feature of free convex combinations is that they enable combinations of matrix tuples of different sizes. This unifies the geometry of a free convex set across the various sizes of matrices it contains.
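To make this concrete, a free convex combination can be written in the following standard form; the notation follows the matrix convexity literature and is an illustration rather than the talk's own statement. Given $g$-tuples $\mathcal{Y}^1,\dots,\mathcal{Y}^k$, where $\mathcal{Y}^i$ consists of symmetric $n_i \times n_i$ matrices, a free convex combination of them is the $g$-tuple of symmetric $n \times n$ matrices
\[
  \mathbf{X}_j \;=\; \sum_{i=1}^{k} V_i^{T}\,\mathbf{Y}^{i}_{j}\,V_i \qquad (j=1,\dots,g),
  \qquad \text{where } V_i \in \mathbb{R}^{n_i \times n} \text{ and } \sum_{i=1}^{k} V_i^{T} V_i = \mathbf{I}_n.
\]
Since the sizes $n_i$ may differ from one another and from $n$, such a combination mixes matrix tuples of different sizes, in contrast to a classical convex combination.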

Mirroring the classical setting, extreme points are central to understanding free convex sets. A fundamental question is to identify a class of extreme points that is minimal with respect to recovering a free convex set via free convex combinations. We settle this question for projections of free convex sets defined by a matrix polynomial inequality, and we give an upper bound on the \emph{sum of the sizes} of the extreme points required. In addition, we explore connections between extreme points of free convex sets and problems in quantum information, such as incompatibility of quantum observables.
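For orientation, the natural candidate class here is the free extreme points (the Arveson boundary) from the matrix convexity literature; one common formulation, which may differ in detail from the one used in the talk, is the following. A tuple $\mathcal{X}$ in a free convex set $K$ is free extreme if whenever
\[
  \mathcal{X} \;=\; \sum_{i=1}^{k} V_i^{T}\,\mathcal{Y}^{i}\,V_i
  \qquad \text{with } \mathcal{Y}^i \in K, \quad V_i \neq 0, \quad \sum_{i=1}^{k} V_i^{T} V_i = \mathbf{I},
\]
each $\mathcal{Y}^i$ is unitarily equivalent to $\mathcal{X}$ or to a direct sum $\mathcal{X} \oplus \mathcal{Z}^i$ for some tuple $\mathcal{Z}^i$.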

For tensors, we study the decomposition of a tensor into a minimal sum of rank-1 terms. Strikingly, this decomposition is generically unique for low-rank tensors. As a consequence, low-rank tensor decomposition plays a major role in data analysis and signal processing, where it enables unique recovery of underlying factors. However, it is well known that the low-rank approximation problem for tensors is ill-posed in general. This talk takes the opposite point of view and develops mathematical guarantees showing that low-rank tensor decomposition is well-posed unless the tensor is “too far” from the set of low-rank tensors. This mathematically justifies the applied perspective that tensor decompositions are often well-behaved and reliable tools in practice.
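As a concrete instance, for a third-order tensor the decomposition in question is the canonical polyadic decomposition; the notation below is standard and is given only for illustration. A tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ of rank $R$ decomposes as
\[
  \mathcal{T} \;=\; \sum_{r=1}^{R} \mathbf{a}_r \otimes \mathbf{b}_r \otimes \mathbf{c}_r,
\]
where $R$ is minimal among all such expressions. Generic uniqueness means that, for low-rank $\mathcal{T}$, the rank-1 terms $\mathbf{a}_r \otimes \mathbf{b}_r \otimes \mathbf{c}_r$ are determined up to permutation, so the underlying factors can be recovered up to scaling.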