Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp019p290d641
Title: From Learning to Optimal Learning: Understanding the impact of overparameterization on features of neural networks to optimal learning of expensive, noisy functions using low-dimensional belief models
Authors: Duzgun, Ahmet Cagri
Advisors: Powell, Warren
Contributors: Operations Research and Financial Engineering Department
Keywords: Bayesian Optimization
Deep Learning
Sequential Decision Making
Subjects: Operations research
Issue Date: 2023
Publisher: Princeton, NJ : Princeton University
Abstract: This thesis investigates two distinct but important topics in machine learning and optimization. The first topic compares the features of overparameterized neural networks to those of smaller networks. Chapter 2 presents a methodology for comparing the expressivity of the features of overparameterized networks to those of smaller networks. Using this methodology, it finds that smaller networks cannot fully capture the features of overparameterized networks, and that these features are responsible for the superior performance of overparameterized networks. The chapter also demonstrates through a toy problem that certain features can only be learned by overparameterized networks. The second topic of the thesis focuses on the optimization of costly black-box functions with limited evaluations. In Chapter 3, a new policy called the KGLQ policy is proposed, which approximates the true function locally with a quadratic function and incorporates structural bias by modeling it as heteroscedastic noise distinct from the measurement noise. This approach addresses issues that arise when a value-of-information policy is used with parametric models. The KGLQ policy performs competitively against existing policies for small budgets, as demonstrated on several test problems evaluated in the chapter. Chapter 4 introduces a global belief model that leverages the concept behind KGLQ. A hierarchical belief model is developed to produce an approximation by taking into account various levels of estimation from the global belief model. Using this hierarchical model, the HKGLQ policy is developed and shown to be asymptotically convergent. Experiments on test problems provide insights into the performance of KGLQ under larger budgets compared to an asymptotically convergent policy.
URI: http://arks.princeton.edu/ark:/88435/dsp019p290d641
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Operations Research and Financial Engineering

Files in This Item:
File: Duzgun_princeton_0181D_14697.pdf
Size: 2.94 MB
Format: Adobe PDF

