Title: Variational Autoencoders: Finding a Foundation for ML Advances in Density Estimation
Abstract: This thesis surveys the existing theoretical framework for Variational Autoencoders (VAEs) and possible extensions of that framework, exploring the limits of generative modeling and dimensionality reduction. We begin with the statistical techniques involved in Variational Inference, paying particular attention to gradient estimation. By analyzing the variance of the gradients approximated in VAEs, we hope not only to explain the promising results shown by VAEs, but also to pave the way for more efficient algorithms and applications that make full use of their potential efficiency. After considering theoretical variance estimates, we ask whether gradient descent over all parameters of a VAE leads to a decoder whose posterior can be represented exactly by the encoder, which is the assumption used to obtain those variance estimates. We also pay close attention to the limits of Variational Autoencoders as a way of representing probability distributions in higher-dimensional space, using analysis and differential geometry to explore the relationship between the ability of VAEs to model data and the manifold hypothesis.
Type of Material: Princeton University Senior Theses
Appears in Collections: Mathematics, 1934-2020
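The abstract's central technical theme, the variance of gradient estimators in variational inference, can be illustrated with a minimal sketch. The example below is not from the thesis; it compares the classic score-function (REINFORCE) estimator against the reparameterization (pathwise) estimator on a toy objective, grad over mu of E_{z ~ N(mu, 1)}[z^2], whose exact gradient is 2*mu. Both estimators are unbiased, but the reparameterization estimator typically has far lower variance, which is the kind of effect the thesis analyzes.

```python
# Illustrative sketch only: comparing the variance of two Monte Carlo
# gradient estimators commonly used in variational inference.
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.0, 200_000

# Sample via reparameterization: z = mu + eps, with eps ~ N(0, 1).
eps = rng.standard_normal(n)
z = mu + eps

# Score-function (REINFORCE) estimator:
#   f(z) * d/dmu log N(z; mu, 1) = z^2 * (z - mu)
score_grads = z**2 * (z - mu)

# Reparameterization (pathwise) estimator:
#   d/dmu f(mu + eps) = 2 * (mu + eps) = 2 * z
reparam_grads = 2 * z

# Both means approach the true gradient 2*mu = 2.0, but for this toy
# problem the score estimator's variance (about 30) dwarfs the
# reparameterization estimator's (about 4).
print(score_grads.mean(), score_grads.var())
print(reparam_grads.mean(), reparam_grads.var())
```

For this Gaussian example the gap can be checked analytically: the score estimator expands to eps + 2*eps^2 + eps^3, giving variance 30, while the pathwise estimator 2*(mu + eps) has variance exactly 4.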
Files in This Item:
LAWTON-BEN-THESIS.pdf | 322.6 kB | Adobe PDF | Request a copy
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.