Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01pr76f608w
Full metadata record
dc.contributor.advisor: Blei, David M
dc.contributor.author: Ranganath, Rajesh
dc.contributor.other: Computer Science Department
dc.date.accessioned: 2017-12-12T19:17:23Z
dc.date.available: 2018-02-03T09:06:07Z
dc.date.issued: 2017
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01pr76f608w
dc.description.abstract: Probabilistic generative models are robust to noise, uncover unseen patterns, and make predictions about the future. These models have been used successfully to solve problems in neuroscience, astrophysics, genetics, and medicine. The main computational challenge is computing the hidden structure given the data: posterior inference. For most models of interest, computing the posterior distribution requires approximations like variational inference. Variational inference transforms posterior inference into optimization. Classically, this optimization problem was feasible to deploy in only a small fraction of models. This thesis develops black box variational inference, a variational inference algorithm that is easy to deploy on a broad class of models and has already found use in models for neuroscience and health care. It makes new kinds of models possible, ones that were too unruly for previous inference methods. One set of models we develop is deep exponential families. Deep exponential families uncover new kinds of hidden patterns while being predictive of future data. Many existing models are deep exponential families. Black box variational inference makes it possible to study a broad range of deep exponential families quickly, with minimal added effort for each new type of deep exponential family. The ideas behind black box variational inference also facilitate new kinds of variational methods. First, we develop hierarchical variational models. Hierarchical variational models improve the approximation quality of variational inference by building higher-fidelity approximations from coarser ones. We show that they help with inference in deep exponential families. Second, we introduce operator variational inference. Operator variational inference delves into the possible distance measures that can be used for the variational optimization problem. We show that this formulation categorizes various variational inference methods and enables variational approximations without tractable densities. By developing black box variational inference, we have opened doors to new models, better posterior approximations, and new varieties of variational inference algorithms.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject: Bayesian Statistics
dc.subject: Machine Learning
dc.subject: Variational Inference
dc.subject.classification: Computer science
dc.subject.classification: Statistics
dc.title: Black Box Variational Inference: Scalable, Generic Bayesian Computation and its Applications
dc.type: Academic dissertations (Ph.D.)
pu.projectgrantnumber: 690-2143
pu.embargo.terms: 2018-02-03
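
The abstract above describes how black box variational inference turns posterior inference into a generic stochastic optimization. As a concrete illustration only, the sketch below fits a Gaussian variational approximation to a hypothetical one-dimensional conjugate model (standard normal prior, unit-variance Gaussian likelihood, a single observation) using the score-function gradient estimator that underlies black box variational inference. The model, variable names, and hyperparameters are illustrative assumptions, not taken from the thesis; the exact posterior of this toy model is Normal(x/2, 1/2), so the result can be checked.

# Minimal sketch (assumptions noted above) of the score-function
# ("black box") gradient estimator for a toy model:
#   prior       p(z)   = Normal(0, 1)
#   likelihood  p(x|z) = Normal(z, 1)
#   variational q(z)   = Normal(mu, sigma^2), lambda = (mu, log sigma)
import numpy as np

rng = np.random.default_rng(0)
x = 2.0  # single observed data point (illustrative)

def log_normal(z, mean, std):
    # Log density of Normal(mean, std^2) evaluated at z.
    return -0.5 * np.log(2.0 * np.pi) - np.log(std) - 0.5 * ((z - mean) / std) ** 2

def log_joint(z):
    # log p(x, z) = log p(z) + log p(x | z) for the toy model above.
    return log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)

mu, log_sigma = 0.0, 0.0   # variational parameters lambda
lr, num_samples = 0.05, 200

for step in range(2000):
    sigma = np.exp(log_sigma)
    z = rng.normal(mu, sigma, size=num_samples)      # z_s ~ q(z; lambda)
    f = log_joint(z) - log_normal(z, mu, sigma)      # log p(x, z_s) - log q(z_s; lambda)
    f = f - f.mean()                                 # crude baseline for variance reduction
    score_mu = (z - mu) / sigma**2                   # d/d mu of log q(z; lambda)
    score_log_sigma = ((z - mu) / sigma) ** 2 - 1.0  # d/d log-sigma of log q(z; lambda)
    mu += lr * np.mean(score_mu * f)                 # stochastic gradient ascent on the ELBO
    log_sigma += lr * np.mean(score_log_sigma * f)

print(f"mu = {mu:.3f} (exact 1.0), sigma = {np.exp(log_sigma):.3f} (exact {0.5 ** 0.5:.3f})")

A simple baseline (subtracting the mean of the instantaneous ELBO term) is included because the raw score-function estimator is typically too noisy to converge reliably; the published black box variational inference work pairs the estimator with stronger variance-reduction tools such as Rao-Blackwellization and control variates.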
Appears in Collections: Computer Science

Files in This Item:
File: Ranganath_princeton_0181D_12362.pdf
Size: 3.77 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.