Title: | Learning under constraints: how human memory computations help deep neural networks generalize from sparse training inputs
Authors: | Johns, Maxwell |
Advisors: | Cohen, Jonathan |
Department: | Neuroscience |
Class Year: | 2022 |
Abstract: | In this paper I investigate the biological mechanisms by which humans extract similarities across contexts and generalize information learned in specific settings to new domains and novel contexts never before experienced. I then explore ways in which recent machine learning methods use principles of human memory computation to improve performance on tasks such as image classification and natural language processing. Finally, I predict how a recent model capable of meta-learning with attentional mechanisms and memory-augmented networks would perform on a task that requires generalizing from training on images of real animals to cartoon representations of the same animal classes, and I examine where the network architecture falls short of human image recognition capabilities, charting a map for future progress.
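As a rough illustration of the real-to-cartoon generalization test the abstract describes, the sketch below trains a small convolutional classifier on photographs of animals and then evaluates it, zero-shot, on cartoon images of the same classes. This is a minimal sketch assuming PyTorch and torchvision; the directory names (`data/real_animals`, `data/cartoon_animals`) and the tiny CNN are hypothetical stand-ins, not the thesis's actual model, data, or memory-augmented architecture.

```python
# Hypothetical sketch of the real-to-cartoon generalization test.
# Paths, class layout, and the small CNN are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# Assumed layout: one subfolder per animal class, with identical class
# folder names in both domains so that the integer labels align.
real = datasets.ImageFolder("data/real_animals", transform=tfm)       # training domain
cartoon = datasets.ImageFolder("data/cartoon_animals", transform=tfm)  # held-out domain

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(real.classes)),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train only on real photographs.
for epoch in range(5):
    for x, y in DataLoader(real, batch_size=32, shuffle=True):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Evaluate zero-shot on cartoons of the same classes; the gap between
# real-domain and cartoon-domain accuracy is the generalization measure.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in DataLoader(cartoon, batch_size=32):
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
print(f"cartoon-domain accuracy: {correct / total:.3f}")
```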
URI: | http://arks.princeton.edu/ark:/88435/dsp0144558h472 |
Type of Material: | Princeton University Senior Theses |
Language: | en |
Appears in Collections: | Neuroscience, 2017-2024 |
Files in This Item:

| File | Description | Size | Format |
| --- | --- | --- | --- |
| JOHNS-MAXWELL-THESIS.pdf | | 2.73 MB | Adobe PDF |