Title: Multi-view Representation Learning with Applications to Functional Neuroimaging Data
Authors: Chen, Po-Hsuan
Advisors: Ramadge, Peter J
Contributors: Electrical Engineering Department
Keywords: Artificial Intelligence
Computer Science
Machine Learning
Multi-view Learning
Subjects: Engineering
Computer science
Issue Date: 2017
Publisher: Princeton, NJ : Princeton University
Abstract: One of the greatest challenges of the 21st century is understanding how the human brain works. Although the brain can be studied at many levels, a key step is knowing how brain activity patterns map onto cognition, emotion, and memory. This can be studied using functional magnetic resonance imaging (fMRI), a non-invasive brain imaging technique with unprecedented spatiotemporal resolution. fMRI data are gathered while subjects perform a wide range of cognitive tasks. Analysis of fMRI data using multivariate statistics and machine learning has led to tremendous success in understanding how patterns of neural activity reflect mental representations. This thesis aims to continue this success by advancing machine learning methods motivated by applications to neuroscience problems. We develop a multi-view learning framework that estimates shared features from multi-view data. We analyze and demonstrate two primary ways in which a multi-view learning framework provides new approaches to exploring neuroimaging data. First, a multi-view learning model forms a larger dataset by aggregating data from multiple views; a key potential advantage of this is an increase in statistical sensitivity. Second, a multi-view learning model learns a shared feature space and transformations between each view's observation space and that shared space. These transformations bridge any two views, opening up new possibilities for analyzing the data. For example, by treating each subject as a view, we can transform one subject's fMRI data into the space of another subject's brain. By treating semantic vectors of stimulus text descriptions and fMRI responses as different views, we open up the opportunity to generate text from fMRI responses, or fMRI responses from text. Lastly, we explore various forms of multi-view learning models, including manifold learning, probabilistic modeling, and deep neural networks.
Different ways of applying multi-view models to neuroimaging data are demonstrated and analyzed. We also discuss our contributions to the open-source software community.
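The shared-space idea described in the abstract can be illustrated with a small alternating least-squares procedure: each view (e.g. one subject's voxels-by-time fMRI matrix) gets an orthonormal map into a common k-dimensional space, and the maps can then bridge any two views. This is a minimal hypothetical sketch, not the thesis's actual method; the function names, the Procrustes-style update, and the orthonormality constraint are assumptions made for illustration.

```python
import numpy as np

def fit_shared_space(views, k, n_iter=10, seed=0):
    """Estimate a shared feature space from multi-view data.

    views: list of arrays, each (n_features_i, n_samples), e.g. one
    subject per view (voxels x time). Returns per-view orthonormal
    maps W_i of shape (n_features_i, k) and a shared response matrix
    S of shape (k, n_samples), via alternating least squares.
    """
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization of each view's map.
    Ws = [np.linalg.qr(rng.standard_normal((X.shape[0], k)))[0]
          for X in views]
    for _ in range(n_iter):
        # Shared response: average of each view projected into k dims.
        S = np.mean([W.T @ X for W, X in zip(Ws, views)], axis=0)
        # Per-view orthogonal-Procrustes update: W_i = U V^T from the
        # SVD of X_i S^T, the closest orthonormal map to the data.
        new_Ws = []
        for X in views:
            U, _, Vt = np.linalg.svd(X @ S.T, full_matrices=False)
            new_Ws.append(U @ Vt)
        Ws = new_Ws
    S = np.mean([W.T @ X for W, X in zip(Ws, views)], axis=0)
    return Ws, S

def map_between_views(X_i, W_i, W_j):
    """Bridge two views: project view i's data into the shared space,
    then into view j's observation space (e.g. subject i's fMRI data
    expressed in subject j's brain space)."""
    return W_j @ (W_i.T @ X_i)
```

Because each W_i has orthonormal columns, W_i.T inverts the map on the shared subspace, which is what makes the view-to-view transformation W_j @ W_i.T well defined.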
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog:
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections:Electrical Engineering

Files in This Item:
File: Chen_princeton_0181D_12239.pdf (13.87 MB, Adobe PDF)

Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.