Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp010z709066w
Full metadata record
DC Field                    Value
dc.contributor.advisor      Pillow, Jonathan W
dc.contributor.author       Ashwood, Zoe
dc.contributor.other        Computer Science Department
dc.date.accessioned         2022-10-10T19:51:49Z
dc.date.available           2022-10-10T19:51:49Z
dc.date.created             2022-01-01
dc.date.issued              2022
dc.identifier.uri           http://arks.princeton.edu/ark:/88435/dsp010z709066w
dc.description.abstract     A central goal in neuroscience is to identify the strategies used by animals and humans as they make decisions, and to characterize the learning rules that shape these policies in the first place. In this thesis, I discuss three projects aimed at tackling this goal. In the first, we introduce a novel framework, the GLM-HMM, for characterizing mouse and human choice policies during binary decision-making tasks. The GLM-HMM is a hidden Markov model with Bernoulli Generalized Linear Model observations. By fitting the GLM-HMM to hundreds of thousands of decisions, our framework revealed that, contrary to common wisdom, mice and humans use multiple strategies over the course of a single session to perform perceptual decision-making tasks. In the second project, we sought to uncover the learning rules used by mice and rats as they learned to perform this type of task. Our model tracked trial-to-trial changes in the animals' choice policies and separated these changes into components explainable by a reinforcement learning rule and components that remained unexplained. While the conventional REINFORCE learning rule explained, on average, just 30% of the policy updates made by mice learning a common task, adding baseline parameters allowed the learning rule to explain 92% of the animals' policy updates under our model. Finally, I discuss our approach to applying inverse reinforcement learning (IRL) to the trajectories of mice exploring a maze environment. While IRL has been widely applied in robotics and healthcare settings to infer an agent's unknown reward function, it has yet to be applied extensively in neuroscience. One potential reason is that existing IRL frameworks assume that an agent's reward function is fixed over time. To overcome this limitation, we introduce 'DIRL', an IRL framework that allows for time-varying intrinsic rewards. Our method returns interpretable reward functions for two separate cohorts of mice and provides a novel characterization of exploratory behavior. Overall, we anticipate that 'DIRL' will have broad applicability in neuroscience, and that it could also facilitate the design of biologically inspired reward functions for training artificial agents to perform analogous tasks.
dc.format.mimetype          application/pdf
dc.language.iso             en
dc.publisher                Princeton, NJ : Princeton University
dc.relation.isformatof      The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject                  Computational neuroscience
dc.subject                  Decision-making
dc.subject                  Reinforcement learning
dc.subject.classification   Computer science
dc.subject.classification   Neurosciences
dc.title                    Probabilistic Models for Characterizing Animal Learning and Decision-Making
dc.type                     Academic dissertations (Ph.D.)
pu.date.classyear           2022
pu.department               Computer Science
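
The abstract above describes the GLM-HMM as a hidden Markov model whose per-state observations come from Bernoulli generalized linear models over the animal's choices. As a rough, illustrative sketch only (not taken from the thesis or its code), the following Python snippet simulates choices from such a model; the number of states, the per-state GLM weights, and the transition probabilities are all invented for the example.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_trials = 3, 500

# Per-state GLM weights over [stimulus, bias] -- assumed, illustrative values.
W = np.array([[5.0,  0.2],    # "engaged" state: strong stimulus weighting
              [0.5,  2.0],    # biased state: weak stimulus weighting, bias toward choice 1
              [0.1, -2.0]])   # biased state: weak stimulus weighting, bias toward choice 0

# Sticky transition matrix, so a state persists over many consecutive trials.
A = np.full((n_states, n_states), 0.02)
np.fill_diagonal(A, 0.96)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Latent state follows a Markov chain; each choice is Bernoulli with
# probability given by that state's GLM applied to the trial's inputs.
X = np.column_stack([rng.normal(size=n_trials), np.ones(n_trials)])  # [stimulus, 1]
z = np.zeros(n_trials, dtype=int)
y = np.zeros(n_trials, dtype=int)
for t in range(n_trials):
    if t > 0:
        z[t] = rng.choice(n_states, p=A[z[t - 1]])
    y[t] = rng.binomial(1, sigmoid(X[t] @ W[z[t]]))

Fitting such a model to real choice data, as done in the thesis, additionally requires an inference procedure (for example, expectation-maximization) to recover the weights and transition matrix; that step is omitted from this sketch.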
Appears in Collections: Computer Science

Files in This Item:
File                                  Description   Size       Format
Ashwood_princeton_0181D_14253.pdf                   40.38 MB   Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.