Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp011n79h765p
Title: A novel probabilistic machine learning method for uncovering internal states in mice learning a decision-making task
Authors: Cuturela, Lenca
Advisors: Pillow, Jonathan
Department: Mathematics
Class Year: 2024
Abstract: Perceptual decision-making tasks have been instrumental in examining how sensory cues inform the choice of action. Recent work has shown that animals performing visual decision-making tasks alternate frequently between different internal states or behavioral strategies (Ashwood et al., 2022; Bolkan et al., 2022). However, it remains unknown how these states emerge over the course of learning. Does an animal alternate between multiple strategies from the very onset of training or do these states emerge only after extensive exposure to the task? Here, we address this problem by introducing a novel dynamic latent state model, which we call “dynamic GLM-HMM”. This model extends previous work that identified internal states in animals using a Hidden Markov Model (HMM) with state-dependent Bernoulli generalized linear model (GLM) observations (Ashwood et al., 2022; Bolkan et al., 2022). Since the standard GLM-HMM parameters are static, the existing framework was inapplicable to a wide range of non-stationary phenomena. To overcome this limitation, we added a dynamic prior that allows both the HMM transition probabilities and the state-dependent GLM weights to evolve over sessions. This extension is critical for capturing non-stationary phenomena like learning, which is characterized as an increase over time in the weights corresponding to the stimulus (Roy et al., 2021). After validating our approach on simulated data, we applied it to animal training data from the visual decision-making task from the International Brain Lab (IBL) (Laboratory et al., 2021). We found that mice switch between three states that can last for tens to hundreds of trials: an “engaged” state, in which task performance is high, and two “biased” states, in which performance is lower. Remarkably, we show that these states are present and identifiable even in the early training periods. 
During learning, animals improve their accuracy on the task through a combination of two changes: the stimulus weights grow larger, especially in the engaged state, and the animals spend increasingly more time in the engaged state. Thus, we offer a novel method for characterizing the temporal evolution of multiple strategies over the course of learning.
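The generative model described in the abstract can be illustrated with a minimal simulation sketch. This is not the thesis's implementation; all dimensions, the step size `sigma` of the session-to-session Gaussian random walk on the weights, and the transition matrix values are assumed for illustration. Each trial, a latent state selects a Bernoulli GLM that maps inputs (stimulus and bias) to a choice probability; after each session, the state-dependent GLM weights take a random-walk step, which is the "dynamic prior" extension over the static GLM-HMM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 3 latent states, 2 GLM inputs (stimulus, bias)
n_states, n_inputs, n_sessions, n_trials = 3, 2, 5, 100

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Session-1 state-dependent GLM weights; in the dynamic model these
# drift across sessions via a Gaussian random walk (sigma is assumed).
w = rng.normal(0.0, 1.0, size=(n_states, n_inputs))
sigma = 0.1

# Row-stochastic HMM transition matrix with a "sticky" diagonal,
# so states persist for many trials, as reported in the abstract.
A = np.full((n_states, n_states), 0.05)
np.fill_diagonal(A, 0.9)
A /= A.sum(axis=1, keepdims=True)

choices = []
for session in range(n_sessions):
    z = rng.integers(n_states)                 # initial latent state
    for t in range(n_trials):
        x = np.array([rng.normal(), 1.0])      # [stimulus, bias] inputs
        p_right = sigmoid(w[z] @ x)            # Bernoulli GLM observation
        choices.append(rng.random() < p_right)
        z = rng.choice(n_states, p=A[z])       # Markov state transition
    # Dynamic prior: weights take one random-walk step per session
    w = w + rng.normal(0.0, sigma, size=w.shape)
```

Learning would correspond to the stimulus weight growing over sessions (here it only diffuses); fitting the model means inferring `w` per session and `A` from the observed choices rather than simulating them.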
URI: http://arks.princeton.edu/ark:/88435/dsp011n79h765p
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Mathematics, 1934-2024

Files in This Item:
File: CUTURELA-LENCA-THESIS.pdf    Size: 3.74 MB    Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.