Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01x633f1146
Full metadata record
DC Field: Value [Language]
dc.contributor: Blei, David
dc.contributor.advisor: Norman, Ken
dc.contributor.author: Saparov, Abulhair
dc.date.accessioned: 2013-07-26T19:31:33Z
dc.date.available: 2013-07-26T19:31:33Z
dc.date.created: 2013-05-06
dc.date.issued: 2013-07-26
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01x633f1146
dc.description.abstract: Bayesian probabilistic modeling is becoming an increasingly promising tool in neuroscience. Many brain imaging methods, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), do not have very high spatial resolution, so brain dynamics at small scales must be studied by other means. Probabilistic models enable the principled exploration of phenomena that are difficult to measure, such as the processes governing the storage and retrieval of words. Topographic latent source analysis (TLSA) is a novel, fully Bayesian probabilistic model that describes brain activity as a covariate-dependent linear sum of latent sources of activity. Gibbs sampling was originally used to fit TLSA to brain imaging data, but, as a sampling algorithm, its performance limited the model's scalability. Variational inference is a newer approach to Bayesian inference that transforms the problem of inference into one of optimization, providing greatly improved performance. As a generalization of expectation maximization (EM), variational inference is guaranteed to converge. Unfortunately, mean-field variational inference can only be applied to models that satisfy conditional conjugacy, and TLSA does not fall into this class. More recent work extended variational inference to a much broader class of models by using Laplace approximations: the variational distributions of the non-conjugate hidden variables are assumed to be normal. In my work, I apply Laplace variational inference to TLSA, deriving the variational updates. I discuss the issue of convergence in detail, along with approaches for avoiding convergence problems. Finally, I derive a stochastic variational inference variant of the algorithm, which dramatically reduces its running time. [en_US]
dc.format.extent: 50 pages [en_US]
dc.language.iso: en_US [en_US]
dc.title: Scalable Inference with Approximate Variational Inference on a Latent Source Model of the Brain [en_US]
dc.type: Princeton University Senior Theses
pu.date.classyear: 2013 [en_US]
pu.department: Computer Science [en_US]
pu.pdf.coverpage: SeniorThesisCoverPage
dc.rights.accessRights: Walk-in Access. This thesis can only be viewed on computer terminals at the Mudd Manuscript Library (http://mudd.princeton.edu).
pu.mudd.walkin: yes
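
The abstract above describes TLSA's structure (a covariate-dependent linear sum of latent sources) and the Laplace trick (approximating non-conjugate variational factors with Gaussians) only in words. As a minimal mathematical sketch of what that structure typically looks like, with all notation assumed for exposition rather than taken from the thesis itself (\(y_n\) for the imaging signal at spatial location \(r\), \(x_n\) for the covariates, \(f_k\) for the spatial basis of latent source \(k\)):

\[ y_n(r) = \sum_{k=1}^{K} w_k(x_n)\, f_k(r) + \epsilon_n(r), \qquad \epsilon_n(r) \sim \mathcal{N}(0, \sigma^2), \]

and the Laplace step replaces the intractable variational factor of a non-conjugate hidden variable \(\theta\) with a Gaussian centered at the mode of the expected log joint,

\[ q(\theta) \approx \mathcal{N}\big(\hat{\theta},\; -\nabla^2 g(\hat{\theta})^{-1}\big), \qquad g(\theta) = \mathbb{E}_{q(\text{rest})}\big[\log p(y, \theta, \text{rest})\big], \qquad \hat{\theta} = \arg\max_{\theta} g(\theta), \]

where the remaining conjugate factors keep their usual closed-form mean-field updates.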
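The stochastic variant mentioned at the end of the abstract follows the standard stochastic variational inference recipe; again as a sketch under assumed notation rather than the thesis's own derivation, each full-data update of a global variational parameter \(\lambda\) is replaced by a noisy step computed from one sampled data point (reweighted as if it occurred \(N\) times), averaged in with a decaying step size \(\rho_t\):

\[ \lambda^{(t)} = (1 - \rho_t)\, \lambda^{(t-1)} + \rho_t\, \hat{\lambda}_t, \qquad \sum_{t} \rho_t = \infty, \quad \sum_{t} \rho_t^2 < \infty, \]

where \(\hat{\lambda}_t\) is the update that would be optimal if the sampled point were the entire data set. The step-size conditions are what preserve convergence to a local optimum of the variational objective, which is why subsampling can cut running time dramatically without giving up correctness.
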
Appears in Collections: Computer Science, 1987-2023

Files in This Item:
File: Abulhair Saparov.pdf (1.21 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.