Title: Towards Meta-meta-learning
Authors: Sethapun, Waree
Advisors: Griffiths, Tom
Department: Computer Science
Class Year: 2024
Abstract: The ability to generalize prior knowledge and experience to a novel problem is a hallmark of human intelligence. This "learning how to learn", known as meta-learning in machine learning, could give machine learning models generalization capabilities similar to humans' while using far less data. Results from cognitive science suggest that humans model the world and perform inference via Hierarchical Bayesian models (HBMs), which would explain human meta-learning. We aim to model human meta-learning in machine learning models by distilling the inductive biases of an HBM into neural networks. We do this by training neural networks with a popular meta-learning algorithm, Model-Agnostic Meta-Learning (MAML), on data sampled from an HBM. We find that some neural networks obtain priors similar to an HBM's and generalize on a task an HBM can solve; however, the neural networks do not behave like an HBM overall. We therefore conclude that meta-learning as we have implemented it is insufficient for modeling human meta-learning, and that meta-meta-learning may be needed, for which we provide an algorithm.
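The abstract's setup — training a network with MAML on tasks sampled from a hierarchical generative model — can be illustrated with a minimal sketch. This is not the thesis's implementation: the hierarchical model here is a toy scalar regression (each task's slope drawn from a shared Gaussian prior), the network is a single weight, and the outer update uses the first-order MAML approximation. All names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical model (an assumption, not the thesis's HBM):
# task slope theta ~ N(GLOBAL_MEAN, GLOBAL_STD); observations y = theta*x + noise.
GLOBAL_MEAN, GLOBAL_STD, NOISE_STD = 2.0, 0.5, 0.1

def sample_task():
    """Draw one task from the hierarchy; returns a sampler for that task's data."""
    theta = rng.normal(GLOBAL_MEAN, GLOBAL_STD)
    def draw(n=10):
        x = rng.uniform(-1.0, 1.0, n)
        return x, theta * x + rng.normal(0.0, NOISE_STD, n)
    return draw

def loss_grad(w, x, y):
    # Gradient of mean squared error for the scalar model y_hat = w * x.
    return 2.0 * np.mean((w * x - y) * x)

def maml(meta_iters=2000, inner_lr=0.1, outer_lr=0.01):
    w = 0.0  # meta-learned initialization
    for _ in range(meta_iters):
        draw = sample_task()
        xs, ys = draw()            # support set: used for task adaptation
        xq, yq = draw()            # query set: used for the meta-update
        w_inner = w - inner_lr * loss_grad(w, xs, ys)       # inner-loop step
        w -= outer_lr * loss_grad(w_inner, xq, yq)          # first-order outer step
    return w

if __name__ == "__main__":
    print(maml())  # ends up near the prior mean of the task distribution
```

Because one gradient step from the meta-initialization must work well across tasks whose slopes are drawn from N(2.0, 0.5), the learned initialization drifts toward the prior mean — a small-scale analogue of the network absorbing the HBM's prior.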
URI: http://arks.princeton.edu/ark:/88435/dsp01s7526g78j
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections:Computer Science, 1987-2024

Files in This Item:
File: SETHAPUN-WAREE-THESIS.pdf
Size: 1.07 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.