Full metadata record
dc.contributor.advisor: Niv, Yael
dc.contributor.author: Song, Mingyu
dc.contributor.other: Neuroscience Department
dc.description.abstract: Learning in real life is never as simple as forming stimulus-response mappings. It involves identifying the current context (i.e., the information relevant to the task at hand), figuring out the transitions between contexts, and learning complex relationships and rules. In this dissertation, I study how animals and humans learn to discover such structures in decision tasks. I begin by demonstrating the importance of studying structure, or representation, learning. In Chapter 2, I show that rats do not form the optimal task representation in an odor-guided decision task, even after extensive training. This suggests that we cannot assume a task representation without testing it. It also raises the following questions: How is a task representation learned? What factors may affect such learning? In the rest of this dissertation, I use two tasks to study these questions in animals and humans. In Chapter 3, I propose a latent-cause inference model to explain fear extinction in rats. This model characterizes how animals infer the underlying causes that generate observations (e.g., shocks), and how those causes may change over time. It explains why gradually reducing shock frequency is more effective at extinguishing fear than the standard extinction procedure, by demonstrating how different procedures lead to the learning of distinct underlying task structures. In Chapter 4, I study how humans actively learn multi-dimensional rules from probabilistic feedback. I show that people use both value-based and rule-based learning systems, and trade them off based on the instructed task complexity. This study sheds light on how humans make strategic use of cognitive resources when learning complex task structures. In Chapter 5, I propose a novel approach to studying representation learning with recurrent neural networks (RNNs). I demonstrate that RNNs can be useful for developing better cognitive models and for identifying cognitive differences across individuals. In the Conclusion, I summarize the findings from these studies and discuss common principles that underlie animal and human representation learning.
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog.
dc.title: Learning to Discover Structure in Animal and Human Decision Tasks
dc.type: Academic dissertations (Ph.D.)
Appears in Collections: Neuroscience

Files in This Item:
Song_princeton_0181D_13974.pdf (2.43 MB, Adobe PDF)

Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.