Title: On Geometric Optimization, Learning and Control
Authors: Weber, Melanie
Advisors: Fefferman, Charles L.
Contributors: Applied and Computational Mathematics Department
Keywords: Control Theory
Differential Geometry
Learning Theory
Subjects: Applied mathematics
Computer science
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University
Abstract: In this thesis, we consider two research questions in the context of optimization, learning, and control: constrained optimization on manifolds (part I) and optimal control with learning “on the fly” (part II). In part I, we generalize the classical Frank-Wolfe algorithm from Euclidean space to Riemannian manifolds, introducing a new class of algorithms for constrained Riemannian optimization (Riemannian Frank-Wolfe methods, RFW for short). This work is motivated by machine learning applications that involve non-Euclidean data, where exploiting Riemannian geometry can deliver algorithms that are computationally superior to standard nonlinear programming approaches. The proposed algorithm is projection-free, i.e., guaranteed to stay in the feasible region in each iteration. This circumvents the need to compute potentially costly projections required by projected-gradient approaches in the prior literature. At the heart of RFW lies a Riemannian “linear” oracle that determines the update conditioned on geodesically convex constraints. While this oracle is in general a nonconvex semidefinite problem, we derive a closed-form solution on the manifold of symmetric positive definite matrices (PSD). RFW extends naturally to stochastic settings, where we discuss both purely stochastic and finite-sum problems. We show that RFW converges at a nonasymptotic sublinear rate, recovering the best-known guarantees for its Euclidean counterpart. Furthermore, we demonstrate that RFW can be efficiently implemented on the PSD manifold, and we discuss the computation of Riemannian centroids and Wasserstein barycenters, which are crucial subroutines in many machine learning methods. In comparison with state-of-the-art Riemannian optimization methods, RFW shows great promise in numerical experiments for both applications. In part II, we study control problems in which the agent has to learn an optimal control strategy with little data and little time available.
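The Frank-Wolfe scheme from part I can be sketched generically: at each iterate, query the Riemannian gradient, call the linear oracle over the constraint set, and move along a geodesic toward the oracle's output. The sketch below is illustrative only, not the thesis's implementation; all function names are assumptions, and the demo instantiates it on Euclidean space (the simplest Riemannian manifold, where geodesics are straight lines), minimizing a quadratic over the unit ball.

```python
import numpy as np

def rfw(x0, riem_grad, linear_oracle, geodesic, num_iters=100):
    """Generic Frank-Wolfe-style loop (hypothetical sketch).

    riem_grad(x)        -- Riemannian gradient at x
    linear_oracle(x, g) -- minimizer of the "linear" subproblem over the
                           constraint set, given gradient g at x
    geodesic(x, z, t)   -- point at parameter t on the geodesic from x to z
    """
    x = x0
    for k in range(num_iters):
        g = riem_grad(x)
        z = linear_oracle(x, g)        # oracle step toward the constraint set
        step = 2.0 / (k + 2.0)         # classical Frank-Wolfe step size
        x = geodesic(x, z, step)       # convex-combination step along a geodesic
    return x

# Demo: Euclidean case, min ||x - c||^2 over the unit ball (assumed example).
c = np.array([2.0, 0.0])
riem_grad = lambda x: 2 * (x - c)
linear_oracle = lambda x, g: -g / max(np.linalg.norm(g), 1e-12)  # ball vertex
geodesic = lambda x, z, t: (1 - t) * x + t * z                   # straight line
x_star = rfw(np.zeros(2), riem_grad, linear_oracle, geodesic)
# x_star approaches [1, 0], the boundary point of the ball closest to c
```

On the PSD manifold the same template applies, with the geodesic and oracle replaced by their Riemannian counterparts (the thesis derives the oracle's closed form there).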
We introduce several notions of regret and derive strategies for minimizing worst-case regret. We are particularly interested in agnostic problems, where the parameters of the underlying dynamics are a priori unknown. We consider two dynamical models: systems with an unknown drift and systems with unknown feedback. In the first case, we derive an optimal strategy that achieves constant regret. In the second case, we propose a strategy that is guaranteed to have bounded regret.
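The constant-regret phenomenon in the unknown-drift case can be illustrated with a toy certainty-equivalence scheme (a hypothetical sketch, not the thesis's strategy): in the noiseless scalar system x_{t+1} = x_t + u_t + θ with unknown drift θ, one step of data identifies θ exactly, so excess cost over the omniscient controller is paid only once, independent of the horizon.

```python
# Hypothetical sketch (not the thesis's exact strategy): learning an unknown
# constant drift theta "on the fly" in the noiseless scalar system
#     x_{t+1} = x_t + u_t + theta,
# via certainty equivalence: cancel the current drift estimate each step.
theta = 0.7                        # unknown to the controller (assumed value)
x, theta_hat = 0.0, 0.0
costs = []
for t in range(50):
    u = -x - theta_hat             # steer toward zero, cancel estimated drift
    x_next = x + u + theta
    theta_hat = x_next - x - u     # residual identifies theta exactly (no noise)
    costs.append(x_next ** 2)
    x = x_next

# The omniscient controller keeps x = 0 at zero cost; the learner pays only
# the first-step cost theta**2, so its regret is constant in the horizon.
total_regret = sum(costs)
```

With noise or unknown feedback the identification is no longer one-shot, which is where the thesis's bounded-regret analysis comes in.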
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog.
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Applied and Computational Mathematics

Files in This Item:
File: Weber_princeton_0181D_13647.pdf
Size: 3.77 MB
Format: Adobe PDF

Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.