Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01zg64tp921
Title: Non-Stochastic Control with Bandit Feedback
Authors: Hallman, John
Advisors: Hazan, Elad
Department: Mathematics
Class Year: 2020
Abstract: We study the problem of non-stochastic online control in the bandit setting, where an agent iteratively observes the state of a dynamical system and selects control input signals with the objective of minimizing some cost over time. In particular, we consider linear dynamical systems with adversarial perturbations, where the only feedback available to the agent is the scalar cost at each time step, and the cost function itself is unknown. For this problem, with either a known or an unknown system, we give an efficient sublinear regret algorithm. The main algorithmic difficulty is the dependence of the system on past choices of control signals, which means that one cannot directly apply standard convex optimization techniques to this setting. To overcome this issue, we propose an efficient algorithm for the general setting of bandit convex optimization for loss functions with memory, which may be of independent interest.
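To make the bandit-feedback setting concrete: the classical building block of bandit convex optimization is the one-point gradient estimator of Flaxman, Kalai, and McMahan, which recovers a (biased, smoothed) gradient from a single scalar loss evaluation per round. The sketch below is purely illustrative of that standard technique, not the thesis's algorithm (which additionally handles loss functions with memory and system dynamics); all function names and parameter values here are assumptions chosen for the example.

```python
import numpy as np

def one_point_gradient(f, x, delta, rng):
    """Estimate the gradient of f at x from a single scalar query f(x + delta*u).

    This is the standard one-point estimator: (d/delta) * f(x + delta*u) * u,
    where u is drawn uniformly from the unit sphere. Its expectation is the
    gradient of a delta-smoothed version of f.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

def bandit_gradient_descent(f, x0, steps=2000, eta=0.01, delta=0.25,
                            radius=2.0, seed=0):
    """Projected gradient descent using only scalar (bandit) feedback."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    for _ in range(steps):
        g = one_point_gradient(f, x, delta, rng)
        x -= eta * g
        norm = np.linalg.norm(x)
        if norm > radius:  # project back onto the feasible ball
            x *= radius / norm
    return x

# Usage: minimize a simple quadratic with minimum at the all-ones point,
# observing only scalar loss values at the queried points.
x_final = bandit_gradient_descent(lambda z: float(np.sum((z - 1.0) ** 2)),
                                  np.zeros(3))
```

Note the key obstacle the thesis addresses is absent from this sketch: here each query depends only on the current point, whereas in online control the state (and hence the loss) depends on the entire history of past control inputs, which is why a memory-aware bandit method is needed.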
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections:Mathematics, 1934-2020

Files in This Item:
File: HALLMAN-JOHN-THESIS.pdf
Size: 2.55 MB
Format: Adobe PDF
Access: Request a copy


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.