Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01tt44pr01m
Title: The Nonstochastic Control Problem
Authors: Singh, Karan
Advisors: Hazan, Elad
Contributors: Computer Science Department
Keywords: control theory
dynamical systems
nonstochastic control
online learning
regret minimization
reinforcement learning
Subjects: Computer science
Statistics
Robotics
Issue Date: 2022
Publisher: Princeton, NJ : Princeton University
Abstract: Controlling a dynamical system is a fundamental problem with wide-reaching applications in autonomous systems, engineering, and the sciences. This thesis outlines algorithms, and their analyses, for feedback-driven learning that make progress on several fronts: robustness, adaptivity, generality, and sample efficiency. The focal point of the thesis is the introduction of the nonstochastic control framework, which builds an algorithmic, as opposed to the traditionally analytic, foundation for control theory. This segment provides robust algorithmic primitives for basic control subroutines. Concretely, it presents:
A. An efficient algorithm for controlling linear systems in the presence of nonstochastic perturbations with an instance-optimal regret guarantee. Going beyond both robust and stochastic control, such a data-driven notion of optimality offers worst-case guarantees together with the promise of exceptional performance on benign problem instances;
B. The first logarithmic regret result for the classical tracking problem, using optimization-based tools that go beyond dynamic-programming-based approaches;
C. Extensions of the above to the case of unknown system dynamics.
The second part of the thesis addresses the challenge of learning to act in non-stationary environments given a handful of trials, a version of the sim-to-real challenge. It provides a provable algorithm that negotiates this challenge using the algorithmic foundation delineated above, and demonstrates its efficacy over known approaches in non-linear settings. The final segment concerns the generalization of the notion of feedback in interactive learning beyond scalar rewards. It provides an efficient reductions-based approach to tackling concave reward functionals, which increasingly appear in exploration subroutines. This is accomplished via a novel generalization of classical boosting and gradient-boosting techniques to the online convex optimization setting.
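To give a concrete sense of the regret-minimization viewpoint the abstract refers to, the sketch below runs projected online gradient descent, the simplest online convex optimization algorithm, against a fixed non-i.i.d. loss sequence and compares its cumulative loss to the best fixed decision in hindsight. This is purely illustrative and is not the thesis's controller; all names, losses, and constants here are the editor's own choices, not taken from the dissertation.

```python
import numpy as np

def projected_ogd(zs):
    """Online gradient descent over [-1, 1] on losses f_t(x) = (x - z_t)^2."""
    x = 0.0
    losses = []
    for t, z in enumerate(zs, start=1):
        losses.append((x - z) ** 2)       # suffer the loss before updating
        grad = 2.0 * (x - z)              # gradient of f_t at the played point
        x -= grad / np.sqrt(t)            # step size eta_t = 1 / sqrt(t)
        x = float(np.clip(x, -1.0, 1.0))  # project back onto the decision set
    return losses

T = 2000
# a deterministic, non-i.i.d. sign sequence standing in for "nonstochastic" inputs
zs = np.sign(np.sin(0.5 * np.arange(T)))
losses = projected_ogd(zs)

# best fixed decision in hindsight, approximated over a fine grid of [-1, 1]
best = min(float(np.sum((c - zs) ** 2)) for c in np.linspace(-1.0, 1.0, 201))
regret = sum(losses) - best
# the standard OGD analysis bounds regret by roughly
# D^2/(2*eta_T) + (G^2/2) * sum_t eta_t = O(sqrt(T)),
# with diameter D = 2 and gradient bound G = 4 on this decision set
```

Vanishing average regret against any fixed comparator, even when the input sequence is adversarially chosen rather than stochastic, is the style of guarantee the nonstochastic control framework carries over to controlling dynamical systems.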
URI: http://arks.princeton.edu/ark:/88435/dsp01tt44pr01m
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Computer Science

Files in This Item:
File: Singh_princeton_0181D_13942.pdf | Size: 1.18 MB | Format: Adobe PDF


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.