Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01ms35tc92q
Title: What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning
Authors: Pan, Jane
Advisors: Chen, Danqi
Department: Computer Science
Class Year: 2023
Publisher: Princeton, NJ : Princeton University
Abstract: Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well-understood. Some works suggest that LLMs only recall already learned concepts from pre-training, while others hint that ICL performs implicit learning over demonstrations. We characterize two ways through which ICL leverages demonstrations: task recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations – even without ground-truth labels – and apply their pre-trained priors; task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA, and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with only TR, and TR does not further improve with larger models or more demonstrations; (2) LLMs acquire TL as the model size scales; and (3) the correlation between model sizes and TL performance strengthens with the number of demonstrations. Our findings unravel two different forces behind ICL and we advocate for discriminating them in future ICL research due to their distinct nature.
URI: http://arks.princeton.edu/ark:/88435/dsp01ms35tc92q
Type of Material: Academic dissertations (M.S.E.)
Language: en
Appears in Collections: Computer Science, 2023
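
The abstract above describes the controlled comparison at the heart of the thesis: task recognition is probed by withholding label information from the demonstrations (for example, by randomizing their labels), while task learning is probed by remapping labels to tokens whose pairing with the inputs is unlikely to have appeared in pre-training. Below is a minimal, hypothetical sketch of how such prompts could be constructed for a sentiment task; the dataset, prompt template, and label words are placeholders and do not reproduce the thesis's actual experimental configuration.

import random

# Illustrative demonstrations only; not the datasets used in the thesis.
DEMOS = [
    ("the film is a delight from start to finish", "positive"),
    ("a tedious, joyless slog", "negative"),
    ("an instant classic", "positive"),
    ("the plot never comes together", "negative"),
]
LABEL_SPACE = ["positive", "negative"]
# Arbitrary symbols standing in for input-label mappings unseen in pre-training.
ABSTRACT_LABELS = {"positive": "foo", "negative": "bar"}

def build_prompt(demos, query, setting, seed=0):
    """Format demonstrations plus a query under one of three label settings:
    'gold'     - true labels (both TR and TL signals available)
    'random'   - labels drawn uniformly at random (isolates task recognition)
    'abstract' - labels remapped to arbitrary tokens (isolates task learning)
    """
    rng = random.Random(seed)
    lines = []
    for text, gold in demos:
        if setting == "gold":
            label = gold
        elif setting == "random":
            label = rng.choice(LABEL_SPACE)
        elif setting == "abstract":
            label = ABSTRACT_LABELS[gold]
        else:
            raise ValueError(f"unknown setting: {setting}")
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    for setting in ("gold", "random", "abstract"):
        print(f"--- {setting} ---")
        print(build_prompt(DEMOS, "a moving and beautifully shot story", setting))
        print()

Comparing a model's accuracy across the gold, random, and abstract settings as model size and the number of demonstrations vary is the kind of comparison the abstract refers to when separating TR from TL.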

Files in This Item:
File: Pan_princeton_0181G_14573.pdf
Size: 644.02 kB
Format: Adobe PDF


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.