Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp011r66j387r
DC Field | Value | Language
dc.contributor.author | Lawton, Ben | -
dc.date.accessioned | 2018-08-17T18:19:15Z | -
dc.date.available | 2018-08-17T18:19:15Z | -
dc.date.created | 2018-05 | -
dc.date.issued | 2018-08-17 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp011r66j387r | -
dc.description.abstract | This thesis surveys both the existing theoretical framework for Variational Autoencoders and possible expansions of this framework to explore the limits of generative modeling and dimensionality reduction. We begin at a specific level, with the statistical techniques involved in Variational Inference, paying particular attention to gradient estimation. By analyzing the variance of the gradients that are approximated in VAEs, we hope not only to explain the promising results shown by VAEs, but also to pave the way for more efficient algorithms and applications of VAEs that make full use of their potential efficiency. After considering theoretical variance estimates, we ask whether gradient descent over all parameters of a VAE leads to a decoder whose posterior can be represented exactly by the encoder, which is the assumption used to obtain the variance estimates. We also pay close attention to the limits of Variational Autoencoders as a way of representing probability distributions in high-dimensional space, using analysis and differential geometry to explore the relationship between the ability of VAEs to model data and the manifold hypothesis. | en_US
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en_US
dc.title | Variational Autoencoders: Finding a Foundation for ML Advances in Density Estimation | en_US
dc.type | Princeton University Senior Theses | -
pu.date.classyear | 2018 | en_US
pu.department | Mathematics | en_US
pu.pdf.coverpage | SeniorThesisCoverPage | -
pu.contributor.authorid | 960956478 | -
Appears in Collections: Mathematics, 1934-2020

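The abstract's focus on the variance of approximated gradients can be illustrated with a minimal sketch, not taken from the thesis itself: for a toy objective, the score-function (REINFORCE) estimator and the reparameterization estimator used in VAEs are both unbiased, but their variances differ dramatically. All names and the choice of objective below are illustrative assumptions.

```python
# Illustrative sketch (not from the thesis): variance of two stochastic
# gradient estimators for d/d_mu E_{z ~ N(mu, 1)}[z^2], whose true value
# is 2 * mu.
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 10_000
eps = rng.standard_normal(n)
z = mu + eps  # reparameterized sample: z ~ N(mu, 1)

# Score-function (REINFORCE) estimator: f(z) * d/d_mu log N(z; mu, 1)
score_grads = (z ** 2) * (z - mu)

# Reparameterization estimator: d/d_mu f(mu + eps) = 2 * (mu + eps)
reparam_grads = 2 * (mu + eps)

print("score-function:   mean", np.mean(score_grads), "var", np.var(score_grads))
print("reparameterized:  mean", np.mean(reparam_grads), "var", np.var(reparam_grads))
# Both means approximate 2 * mu = 3, but the reparameterized estimator's
# variance is much smaller, which is one reason VAEs train efficiently.
```

This kind of comparison is the simplest instance of the variance analysis the abstract describes: the reparameterization trick turns the expectation into a deterministic function of the parameters plus noise, so the gradient passes through the sample rather than through the log-density.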