Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01th83m199d
Title: Non-convex Optimization for Machine Learning: Design, Analysis, and Understanding
Authors: Ma, Tengyu
Advisors: Arora, Sanjeev
Contributors: Computer Science Department
Keywords: machine learning
non-convex optimization
Subjects: Artificial intelligence
Computer science
Issue Date: 2017
Publisher: Princeton, NJ : Princeton University
Abstract: Non-convex optimization is ubiquitous in modern machine learning: recent breakthroughs in deep learning require optimizing non-convex training objective functions, and problems that admit accurate convex relaxations can often be solved more efficiently with non-convex formulations. However, the theoretical understanding of non-convex optimization has remained rather limited. Can we extend the algorithmic frontier by efficiently optimizing a family of interesting non-convex functions? Can we successfully apply non-convex optimization to machine learning problems with provable guarantees? How do we interpret the complicated models in machine learning that demand non-convex optimizers? Towards addressing these questions, in this thesis we theoretically study several machine learning models, including sparse coding, topic models, matrix completion, linear dynamical systems, and word embeddings. We first consider how to find a coarse solution that can serve as a good starting point for local improvement algorithms such as stochastic gradient descent, and propose efficient methods for sparse coding and topic inference with better provable guarantees. Second, we propose a framework for analyzing local improvement algorithms that start from a coarse solution, and apply it successfully to the sparse coding problem. Then, we consider a family of non-convex functions with the property that all local minima are also global minima (along with an additional regularity property). Such functions can be optimized efficiently by local improvement algorithms from a random or arbitrary starting point. The challenge we address here, in turn, is proving that a given objective function belongs to this class; we establish such results for the natural learning objectives of matrix completion and linear dynamical systems. Finally, we take steps towards interpreting the non-linear models that require non-convex training algorithms. We reflect on the principles underlying word embeddings in natural language processing, and give a generative model for text, which we use to explain why different non-convex formulations such as word2vec and GloVe learn similar word embeddings with a surprising property: analogous word pairs have embeddings with similar vector differences.
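The "all local minima are global" phenomenon mentioned in the abstract can be illustrated with a minimal sketch (not the thesis's exact algorithm or setting): plain gradient descent from a random starting point on the non-convex factorized matrix-completion objective f(U) = sum over observed entries (i,j) of ((U U^T)_ij - M_ij)^2. The problem sizes, sampling probability, and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 2

# Ground-truth rank-r symmetric matrix M = U* U*^T
U_star = rng.normal(size=(n, r))
M = U_star @ U_star.T

# Observe each entry independently with probability p
p = 0.5
mask = rng.random((n, n)) < p

# Random initialization, plain gradient descent on the non-convex objective
U = rng.normal(size=(n, r))
lr = 1e-3
for step in range(3000):
    R = mask * (U @ U.T - M)      # residual on observed entries only
    grad = 2 * (R + R.T) @ U      # gradient of f(U)
    U -= lr * grad

rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.3e}")
```

Despite the non-convexity, local improvement from a random initialization typically recovers M in this toy setting, which is the kind of behavior the landscape results in the thesis are meant to explain.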
URI: http://arks.princeton.edu/ark:/88435/dsp01th83m199d
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Computer Science

Files in This Item:
File: Ma_princeton_0181D_12361.pdf
Size: 2.45 MB
Format: Adobe PDF

