Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01sn00b2149
Title: | Scaling Full-Stack Safety for Learning-Enabled Robot Autonomy |
Authors: | Hsu, Kai-Chieh |
Advisors: | Fernández Fisac, Jaime |
Contributors: | Electrical and Computer Engineering Department |
Keywords: | Generative models; Learning-based control; Reinforcement learning; Robust optimal control; Safe autonomy |
Subjects: | Robotics; Artificial intelligence |
Issue Date: | 2024 |
Publisher: | Princeton, NJ : Princeton University |
Abstract: | The rapid advancement of machine learning and computational tools has brought the promise of deploying fully autonomous robots beyond controlled factory floors. Ensuring their safe operation across diverse environments, particularly in uncertain, unstructured, and unforgiving scenarios, is paramount. Traditional safety frameworks have focused primarily on the planning and control module within the autonomy stack. However, this subsystem-level approach can limit overall system performance, often imposing unnecessary information bottlenecks and compounded errors. Instead, the next generation of autonomous robots will need to examine safety in a unified manner, combining ideas spanning perception and localization, learning and adaptation, motion prediction, and planning and control. This thesis aims to lay the foundations for ensuring the safety of learning-enabled autonomous systems in a way that scales to complex and unpredictable deployment conditions, without inducing undue conservativeness, by systematically unifying the full autonomy stack. We first introduce the overarching concept of a safety filter, a control module that dynamically monitors and intervenes in the operation of autonomous systems to prevent catastrophic failures. Second, we develop novel computational tools grounded in robust control and (adversarial) reinforcement learning for scalable safety fallback synthesis, which underpins the efficacy of safety filters. Third, we convert the results from offline synthesis into runtime safety filters, with a focus on safety guarantees for complex and uncertain environments under varying degrees of prior knowledge and smooth task–safety integration. Fourth, we reduce the conservativeness of assuming fully adversarial realizations in human–robot interactive scenarios by infusing (counterfactual) safety reasoning into prediction–planning integration.
We conclude the thesis by discussing future challenges and potential directions for safe human–AI systems, including assuring robust safety for high-order systems, aug |
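The abstract's safety filter, a module that monitors the system and intervenes only when needed, can be illustrated with a minimal sketch. This is not code from the thesis: the 1-D double-integrator dynamics, the hand-derived braking-distance safety value, and all names below are assumptions chosen purely for illustration of the least-restrictive filtering idea (accept the task action unless it would leave the certified safe set; otherwise apply a safety fallback).

```python
import numpy as np

# Toy 1-D double integrator: state x = [position, velocity], control u = acceleration.
DT = 0.1      # time step
U_MAX = 1.0   # control bound
WALL = 5.0    # failure set: position beyond the wall

def step(x, u):
    u = np.clip(u, -U_MAX, U_MAX)
    return np.array([x[0] + DT * x[1], x[1] + DT * u])

def safety_value(x):
    # Safety margin: distance to the wall minus the exact discrete-time
    # braking distance v*(v + DT*U_MAX)/(2*U_MAX). A value >= 0 means the
    # fallback (full braking) can still keep the system out of the failure set.
    pos, vel = x
    v = max(vel, 0.0)
    brake = v * (v + DT * U_MAX) / (2 * U_MAX)
    return (WALL - pos) - brake

def fallback(x):
    # Safety fallback policy: brake as hard as possible.
    return -U_MAX if x[1] > 0 else 0.0

def safety_filter(x, u_task):
    # Least-restrictive filter: pass the task action through unless the
    # resulting next state would leave the certified safe set.
    if safety_value(step(x, u_task)) >= 0.0:
        return u_task
    return fallback(x)

# A task policy that naively accelerates straight toward the wall.
x = np.array([0.0, 0.0])
for _ in range(200):
    u = safety_filter(x, u_task=U_MAX)
    x = step(x, u)
# The filter maintains safety_value(x) >= 0, so the position never passes the wall.
```

The filter intervenes only near the safety boundary, letting the task policy run freely elsewhere; this is the "minimal intervention" property that keeps the scheme from being unduly conservative.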
URI: | http://arks.princeton.edu/ark:/88435/dsp01sn00b2149 |
Type of Material: | Academic dissertations (Ph.D.) |
Language: | en |
Appears in Collections: | Electrical Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Hsu_princeton_0181D_14970.pdf | | 43.25 MB | Adobe PDF | View/Download |
Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.