Title: Using Interactive Neural-Symbolic Models to Test Hypotheses of Human Event Cognition
Authors: Lu, Qihong
Advisors: Norman, Kenneth
Hasson, Uri
Contributors: Psychology Department
Keywords: episodic memory
event cognition
naturalistic experiment
neural network
reinforcement learning
Subjects: Cognitive psychology
Computer science
Issue Date: 2023
Publisher: Princeton, NJ : Princeton University
Abstract: The last decade has seen a surge of naturalistic experiments in cognitive neuroscience, which require highly interactive processing across multiple cognitive functions. This thesis leverages recent advances in deep learning to present two interactive neural-symbolic models that aim to shed light on the underlying cognitive mechanisms during event cognition. In the first project, we aim to explain the recent finding that humans are selective in when they encode and retrieve episodic memories. We trained a memory-augmented neural network to use its episodic memory to support prediction of upcoming states in an environment where past situations sometimes reoccur. We found that the network learned to retrieve selectively as a function of several factors, including its uncertainty about the upcoming state. Additionally, we found that selectively encoding episodic memories at the end of an event (but not mid-event) led to better subsequent prediction performance. In all of these cases, the benefits of selective retrieval and encoding can be explained in terms of reducing the risk of retrieving irrelevant memories. Overall, these modeling results provide a resource-rational account of why episodic retrieval and encoding should be selective, and they lead to several testable predictions. In the second project, we present the latent cause network (LCNet), a neural network model of latent cause inference (LCI), which aims to explain how humans spontaneously perceive a continuous stream of experience as discrete events. LCNet interacts with a Bayesian LCI mechanism that activates a unique context vector for each inferred latent cause (LC). LCNet can also recall episodic memories of previously inferred LCs, so that it does not need to perform LCI continuously. These mechanisms make LCNet more neurally plausible and efficient than existing models.
Across three simulations, we found that LCNet can extract the shared structure across LCs while avoiding catastrophic interference, capture human data on curriculum effects on schema learning, and infer the underlying event structure and produce human-like event segmentation when processing naturalistic videos of daily activities. Our work provides a neurally plausible computational model that can operate in both laboratory and naturalistic settings, opening up the possibility of providing a unified model of event cognition.
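The Bayesian latent cause inference step described in the abstract is, in spirit, a nonparametric clustering procedure in which observations are assigned to existing causes or to a newly created one. The following is a minimal illustrative sketch of that idea using a Chinese Restaurant Process-style prior with a Gaussian likelihood; it is not the thesis's actual model, and all function names and parameters here are assumptions made for illustration:

```python
import numpy as np

def crp_latent_cause_inference(observations, alpha=1.0, noise=0.5):
    """Assign each observation to a latent cause (LC) by combining a
    CRP-style prior (popular LCs are more likely; alpha governs the
    chance of spawning a new LC) with a Gaussian likelihood around
    each LC's running mean. Each inferred LC gets its own integer id,
    which could index a unique context vector."""
    causes = []       # running mean feature vector for each inferred LC
    counts = []       # number of observations assigned to each LC
    assignments = []  # MAP latent-cause id for each observation
    for x in observations:
        # Prior over (existing LCs..., new LC), proportional to counts and alpha
        prior = np.array(counts + [alpha], dtype=float)
        prior /= prior.sum()
        # Likelihood of x under each existing LC; a fixed constant for a new LC
        lik = np.array(
            [np.exp(-np.sum((x - mu) ** 2) / (2 * noise ** 2)) for mu in causes]
            + [np.exp(-0.5)]
        )
        z = int(np.argmax(prior * lik))  # MAP assignment
        if z == len(causes):
            # New latent cause inferred: initialize it at this observation
            causes.append(np.array(x, dtype=float))
            counts.append(1)
        else:
            # Existing latent cause: update its running mean incrementally
            counts[z] += 1
            causes[z] += (x - causes[z]) / counts[z]
        assignments.append(z)
    return assignments, causes
```

For example, feeding in two well-separated groups of observations yields two inferred latent causes, with each observation assigned to the nearer one.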
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Psychology

Files in This Item:
File: Lu_princeton_0181D_14491.pdf (11.84 MB, Adobe PDF)

Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.