Full metadata record

dc.contributor.advisor: E, Weinan
dc.contributor.advisor: Abbe, Emmanuel
dc.contributor.author: Lawton, Ben
dc.description.abstract: This thesis surveys the existing theoretical framework for Variational Autoencoders (VAEs) and possible extensions of that framework, exploring the limits of generative modeling and dimensionality reduction. We begin with the statistical techniques involved in Variational Inference, paying particular attention to gradient estimation. By analyzing the variance of the gradients approximated in VAEs, we hope not only to explain the promising results shown by VAEs, but also to pave the way for more efficient algorithms and applications of VAEs that make full use of their potential efficiency. After considering theoretical variance estimates, we ask whether gradient descent over all parameters of a VAE leads to a decoder whose posterior can be represented exactly by the encoder, which is the assumption used to obtain those variance estimates. We also examine the limits of Variational Autoencoders as a way of representing probability distributions in higher-dimensional space, using analysis and differential geometry to explore the relationship between the ability of VAEs to model data and the manifold hypothesis.
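The gradient-variance question the abstract raises can be illustrated with a minimal sketch (not taken from the thesis; all names and the toy objective are illustrative). For a Gaussian with learnable mean, the score-function (REINFORCE) estimator and the reparameterization-trick estimator used by VAEs are both unbiased for the same gradient, but their variances differ sharply:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 100_000

# Toy objective: gradient w.r.t. mu of E_{z ~ N(mu, 1)}[z^2]; closed form is 2*mu = 3.0.
eps = rng.standard_normal(n)
z = mu + eps  # reparameterized samples from N(mu, 1)

# Score-function (REINFORCE) estimator: f(z) * d/dmu log N(z; mu, 1) = z^2 * (z - mu)
score_est = z**2 * (z - mu)

# Reparameterization estimator: z = mu + eps, so d/dmu f(z) = 2*z
reparam_est = 2 * z

print(score_est.mean(), reparam_est.mean())  # both estimate 2*mu = 3.0
print(score_est.var(), reparam_est.var())    # reparameterization variance is far smaller
```

Both sample means approach 2*mu, but the reparameterized estimator's variance is an order of magnitude lower here, which is the kind of effect the abstract's variance analysis concerns.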
dc.title: Variational Autoencoders: Finding a Foundation for ML Advances in Density Estimation
dc.type: Princeton University Senior Theses
Appears in Collections: Mathematics, 1934-2020

Files in This Item:
LAWTON-BEN-THESIS.pdf (322.6 kB, Adobe PDF)

Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.