Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp013t945t816
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Hazan, Elad | -
dc.contributor.author | Hallman, John | -
dc.date.accessioned | 2020-09-29T17:04:20Z | -
dc.date.available | 2020-09-29T17:04:20Z | -
dc.date.created | 2020-05-06 | -
dc.date.issued | 2020-09-29 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp013t945t816 | -
dc.description.abstract | We study the problem of non-stochastic online control in the bandit setting, where an agent iteratively observes the state of a dynamical system and selects control input signals with the objective of minimizing some cost over time. In particular, we consider linear dynamical systems with adversarial perturbations, where the only feedback available to the agent is the scalar cost at each time step, and the cost function itself is unknown. For this problem, with either a known or an unknown system, we give an efficient sublinear-regret algorithm. The main algorithmic difficulty is the dependence of the system on past choices of control signals, which means that one cannot directly apply standard convex optimization techniques in this setting. To overcome this issue, we propose an efficient algorithm for the general setting of bandit convex optimization for loss functions with memory, which may be of independent interest. | -
dc.format.mimetype | application/pdf | -
dc.language.iso | en | -
dc.title | Non-Stochastic Control with Bandit Feedback | -
dc.type | Princeton University Senior Theses | -
pu.date.classyear | 2020 | -
pu.department | Mathematics | -
pu.pdf.coverpage | SeniorThesisCoverPage | -
pu.contributor.authorid | 920090365 | -
pu.certificate | Applications of Computing Program | -
pu.certificate | Center for Statistics and Machine Learning | -
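
The abstract describes the bandit feedback model, in which the learner observes only a single scalar cost per round rather than a gradient. A minimal sketch of the classical one-point gradient estimator (in the style of Flaxman, Kalai, and McMahan) that bandit convex optimization methods build on is given below; this is an illustration of the underlying technique, not the thesis's actual algorithm, and all names are chosen for the example.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    # Bandit setting: only the scalar cost f(x + delta * u) can be queried,
    # never a gradient. For u uniform on the unit sphere, the returned vector
    # is an unbiased estimate of the gradient of a delta-smoothed version of f.
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)       # uniform random direction on the sphere
    cost = f(x + delta * u)      # the single scalar observation for this round
    return (d / delta) * cost * u

# Toy usage: noisy gradient descent on a quadratic loss with minimum at 1.
rng = np.random.default_rng(0)
loss = lambda z: float(np.sum((z - 1.0) ** 2))
x = np.zeros(3)
for _ in range(5000):
    g = one_point_gradient_estimate(loss, x, delta=0.1, rng=rng)
    x -= 0.001 * g
```

In the control problem studied in the thesis, the loss at each round additionally depends on past control inputs through the system state (a loss "with memory"), which is precisely why, as the abstract notes, such standard estimators cannot be applied directly.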
Appears in Collections: Mathematics, 1934-2023

Files in This Item:
File | Description | Size | Format
HALLMAN-JOHN-THESIS.pdf | | 2.55 MB | Adobe PDF | Request a copy


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.