Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp017p88ck75v
Title: Language Experience Changes Subsequent Learning for Neural Networks
Authors: Dalawella, Kavindya
Advisors: Lew-Williams, Casey
Department: Computer Science
Class Year: 2022
Abstract: Both infants and adults have demonstrated an ability to learn novel languages solely from statistical information in example streams of speech. One explanation is that humans learn language by tracking the transitional probabilities (TPs) between units such as syllables and words. TPs can be computed in the forward or backward direction, and which direction is more informative depends on the structure of the language. Adult participants have shown biases toward one direction of TPs over the other in linguistic tasks, presumably as a result of their first-language experience. It remains unclear, however, which aspects of speech experience drive this bias to develop. As an alternative to human participants, this project tests whether a similar bias can be induced in neural network models. Two models with identical architectures were trained on a prediction task and a cloze-test-inspired task for two languages: a first language that varied between the models, and a second language that was the same for both. The first languages were designed to favor either forward or backward TP-based learning, while the second language favored neither. The models displayed different learning patterns depending on their first-language exposure, but not a clear bias for one type of TP over the other.
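
For readers unfamiliar with the statistic at the heart of the abstract, the sketch below (not part of the thesis) illustrates how forward and backward transitional probabilities could be estimated from a syllable stream. The syllable stream, function name, and counting scheme are hypothetical illustrations, not the thesis's actual method.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward and backward TPs between adjacent syllables.

    Forward TP(x -> y)  = count(x, y) / count(x)   # how predictive x is of y
    Backward TP(x -> y) = count(x, y) / count(y)   # how predictive y is of x
    Counts here are simple occurrence counts over the whole stream.
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unit_counts = Counter(syllables)
    forward = {(x, y): c / unit_counts[x] for (x, y), c in pair_counts.items()}
    backward = {(x, y): c / unit_counts[y] for (x, y), c in pair_counts.items()}
    return forward, backward

# Hypothetical stream: the word "go-la-bu" repeated three times.
# Within-word TPs come out high; TPs across word boundaries come out lower.
stream = ["go", "la", "bu", "go", "la", "bu", "go", "la", "bu"]
fwd, bwd = transitional_probabilities(stream)
print(fwd[("go", "la")])  # 1.0: "la" always follows "go"
print(bwd[("bu", "go")])  # ~0.67: a cross-boundary backward TP
```

In a language favoring forward TPs, the forward values would segment words more reliably than the backward ones, and vice versa; this is the asymmetry the thesis's artificial first languages are built around.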
URI: http://arks.princeton.edu/ark:/88435/dsp017p88ck75v
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Computer Science, 1987-2024

Files in This Item:
File: DALAWELLA-KAVINDYA-THESIS.pdf
Size: 374.1 kB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.