Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp019880vv12w
Title: Learned surrogates and stochastic gradients for accelerating numerical modeling, simulation, and design
Authors: Beatson, Alex
Advisors: Adams, Ryan P
Contributors: Computer Science Department
Subjects: Artificial intelligence
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University

Abstract: Numerical methods, such as discretization-based methods for solving ODEs and PDEs, allow us to model and design complex devices, structures, and systems. However, doing so is often very costly, both in computation and in the time of the expert who must specify the governing equations, the discretization, the solver, and all other aspects of the numerical model. This thesis presents work using deep learning and stochastic gradient estimation to speed up numerical modeling procedures. In the first chapter we provide a broad introduction to numerical modeling, discuss the motivation for using machine learning (and other approximate methods) to speed it up, and survey a few of the many methods that have been developed to do so. In chapter 2 we present composable energy surrogates, in which neural surrogates are trained to model the potential energy of sub-components or sub-domains of a PDE and are then composed to solve a larger system by minimizing the sum of potentials across components. This allows surrogate modeling without requiring the full system to be solved with an expensive ground-truth finite element solver to generate training data; instead, training data are generated cheaply by performing finite element analysis on individual components. We show that these surrogates can accelerate simulation of parametric meta-materials and produce accurate macroscopic behavior when composed.
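The composition idea from chapter 2 can be illustrated with a toy sketch. This is not the thesis code: the component names and quadratic energies below are made-up stand-ins for neural surrogates trained on per-component finite element data, coupled here through a single shared interface value for simplicity.

```python
import numpy as np

# Two toy "components" coupled at a shared interface value u. Each
# surrogate stands in for a neural net trained to predict a component's
# potential energy; quadratics are used so the example runs as-is.
def surrogate_energy_a(u):
    # Component A's (hypothetical) energy, minimized at u = 1.0.
    return 0.5 * (u - 1.0) ** 2

def surrogate_energy_b(u):
    # Component B's (hypothetical) energy, minimized at u = 3.0.
    return (u - 3.0) ** 2

def total_energy(u):
    # Composition: the full system's potential is the sum of the
    # per-component surrogate potentials.
    return surrogate_energy_a(u) + surrogate_energy_b(u)

def solve(u0=0.0, lr=0.1, steps=200):
    # Solve the composed system by gradient descent on the summed
    # energy, using a central finite-difference gradient.
    u, h = u0, 1e-6
    for _ in range(steps):
        g = (total_energy(u + h) - total_energy(u - h)) / (2 * h)
        u -= lr * g
    return u
```

The solution balances the two components' preferences (here the minimizer of the summed quadratics, u = 7/3); in the thesis setting the same sum-of-potentials structure lets cheaply trained component surrogates be reused across larger assemblies.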
In chapter 3 we discuss randomized telescoping gradient estimators, which provide unbiased gradient estimates for objectives that are the limit of a sequence of increasingly accurate, increasingly costly approximations, as we often encounter in numerical modeling. These estimators represent the limit as a telescoping sum and sample linear combinations of terms to provide cheap unbiased estimates. We discuss conditions permitting finite variance and finite expected computation, optimality of certain estimators within this class, and applications to problems in numerical modeling and machine learning. In chapter 4 we discuss meta-learned implicit PDE solvers, which allow a new API for surrogate modeling. These models condition on a functional representation of a PDE and its domain, directly taking as input the PDE constraint and a method that returns samples in the domain and on the boundary. This avoids fixing a parametric representation for the class of PDEs for which we wish to fit a surrogate, and allows fitting surrogate models for PDEs with arbitrarily varying geometry and governing equations. In aggregate, the work in this thesis aims to take machine learning in numerical modeling beyond simple regression-based surrogate modeling, and instead to tailor machine learning methods to exploit and dovetail with the computational and physical structure of numerical models. The result is methods that are more computation- and data-efficient and have less restrictive APIs, which might better empower scientists and engineers.

URI: http://arks.princeton.edu/ark:/88435/dsp019880vv12w
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Computer Science
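The telescoping-sum idea from chapter 3 can be sketched in a minimal single-sample form. This is not the thesis implementation: the approximation sequence L(n) below is hypothetical (a toy sequence converging to 2.0, standing in for, say, a solver run at refinement level n), and only the simplest single-term estimator is shown.

```python
import numpy as np

def L(n):
    # Hypothetical sequence of increasingly accurate approximations,
    # converging to the limit 2.0 as n grows.
    return 2.0 * (1.0 - 0.5 ** (n + 1))

def telescoping_estimate(rng, p_stop=0.6):
    # Write lim L(n) = sum_n Delta_n, with Delta_n = L(n) - L(n-1)
    # and Delta_0 = L(0). Sample a single index N with
    # P(N = n) = q(n) = p_stop * (1 - p_stop)**n, and return
    # Delta_N / q(N), which is an unbiased estimate of the limit.
    n = int(rng.geometric(p_stop)) - 1   # N in {0, 1, 2, ...}
    q_n = p_stop * (1.0 - p_stop) ** n
    delta = L(n) - (L(n - 1) if n > 0 else 0.0)
    return delta / q_n
```

Each call evaluates the approximation at only one level, usually a cheap low index, yet the average of many such estimates converges to the limit of the sequence; the variance and expected-cost trade-off is governed by how q(n) is matched to the decay of the Delta_n terms.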

Files in This Item:
File: Beatson_princeton_0181D_13773.pdf
Size: 17.73 MB
Format: Adobe PDF

Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.