Title: Integrating Deep Learning with the Theory of Nonlinear, Chaotic, and History-Dependent Dynamics

Abstract: Deep learning approaches for modeling challenging dynamics are becoming increasingly common. However, as deep neural networks (DNNs) are not fully understood even in their own domain, interpreting these models in dynamical settings can be difficult. In this work, we first give a theoretical introduction to three common and challenging phenomena in differentiable dynamical systems: nonlinearity, chaos, and memory effects. After examining these three challenges in theoretical detail, we develop from this theory two deep learning approaches for dynamical system modeling: one closely resembling a recurrent neural network (RNN), and the other closely resembling a Transformer. Through the theoretical development of these models, we formally analyze the role of each component in the overall learning problem, interpret meaningful information about the underlying dynamical system from the model, and design models that are robust to nonlinearity, chaos, and memory effects.

Type of Material: Princeton University Senior Theses

Appears in Collections: Mathematics, 1934-2021
Files in This Item:
PSENKA-MICHAEL-THESIS.pdf (2.14 MB, Adobe PDF)
Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.