Abstract

We study convex optimization problems that feature low-rank matrix solutions. In such scenarios, non-convex methods offer significant advantages over convex methods due to their lower space complexity as well as faster convergence speed. Moreover, many of these methods feature rigorous approximation guarantees. Non-convex algorithms are simple to analyze and implement, as they perform Euclidean gradient descent on the matrix factors. In contrast, this paper derives a non-Euclidean optimization framework in the non-convex setting that takes nonlinear gradient steps on the factors. We prove convergence rates to the global minimum under appropriate assumptions. We provide numerical evidence from Fourier Ptychography and FastText applications on real data, showing that our approach can significantly enhance solution quality.
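
To make the contrast concrete, below is a minimal Python sketch for a symmetric factorization min_U f(U U^T): it compares a plain Euclidean gradient step on the factor U with a generic mirror-descent-style (non-Euclidean) step. The abstract does not specify the paper's actual update rule, so the entrywise potential psi(u) = |u|^3/3, its mirror map, and all names below are illustrative assumptions, not the authors' method.

    import numpy as np

    def euclidean_step(U, grad_f, lr):
        # Plain Euclidean gradient step on the factor U for f(U U^T).
        G = grad_f(U @ U.T)            # gradient of f at X = U U^T
        return U - lr * (G + G.T) @ U  # chain rule: d f(U U^T)/dU = (G + G^T) U

    def mirror_step(U, grad_f, lr, eps=1e-12):
        # Hypothetical non-Euclidean step (mirror descent with the entrywise
        # potential psi(u) = |u|^3 / 3, chosen purely for illustration):
        # mirror map grad psi(u) = u|u|, inverse map v -> sign(v) sqrt(|v|).
        G = grad_f(U @ U.T)
        D = (G + G.T) @ U              # Euclidean gradient w.r.t. the factor
        V = U * np.abs(U) - lr * D     # take the step in the dual (mirror) space
        return np.sign(V) * np.sqrt(np.abs(V) + eps)  # map back to the primal

    # Toy usage: recover a rank-2 PSD matrix M via f(X) = ||X - M||_F^2 / 2.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 2))
    M = A @ A.T
    grad_f = lambda X: X - M
    U = 0.1 * rng.standard_normal((20, 2))
    for _ in range(2000):
        U = euclidean_step(U, grad_f, lr=5e-3)
    print("relative residual:", np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))

In this sketch the non-Euclidean variant changes only the geometry of the update, not the objective; the abstract's claim is that an appropriately chosen non-Euclidean geometry of this kind can improve solution quality.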
