Title: Grounding Entities and Dynamics through Gameplay
Abstract: In recent years, Deep Reinforcement Learning (DRL) has emerged as a state-of-the-art approach to tasks ranging from games and robotics to natural language processing. However, DRL agents often need millions of samples to learn a task, and even for closely related tasks they must be trained from scratch. Humans, on the other hand, are able to leverage previously gained knowledge to learn new tasks efficiently. To improve the efficiency and robustness of DRL algorithms, there has been growing interest in exploiting domain knowledge found in text. One challenge of this approach is language grounding. We introduce two tasks, Defuse the Bomb and Item Drop, and an attention-based text model that can simultaneously ground entities and dynamics. We find that our model learns faster and generalizes better than baselines that do not use text. We also show that the grounding of game dynamics and entities occurs in different parts of our model, and we leverage this fact to build chimera models that perform well on previously unseen entity-task combinations.
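The abstract's core mechanism, attending over text to ground it against the agent's state, can be illustrated with a minimal sketch. This is not the thesis's model: the embedding dimension, the random token embeddings standing in for a game manual, and the single scaled dot-product attention step are all illustrative assumptions.

```python
import numpy as np

def attend(query, keys, values):
    """Scaled dot-product attention of one query over a token sequence."""
    scores = keys @ query / np.sqrt(query.shape[0])   # (num_tokens,)
    weights = np.exp(scores - scores.max())           # stable softmax
    weights /= weights.sum()
    return weights @ values, weights                  # context vector, attention weights

rng = np.random.default_rng(0)
d = 8                                      # hypothetical embedding size
state_query = rng.normal(size=d)           # stand-in for an encoded game state
manual_tokens = rng.normal(size=(5, d))    # stand-in embeddings of 5 manual words

context, weights = attend(state_query, manual_tokens, manual_tokens)
# `weights` is a distribution over manual tokens: a crude picture of which
# words the state "grounds" to; `context` feeds the policy downstream.
```

In a setup like the one the abstract describes, the attention weights would concentrate on the manual tokens describing the entity currently relevant to the agent, while other parts of the network capture the dynamics those tokens imply.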
Type of Material: Princeton University Senior Theses
Appears in Collections: Computer Science, 1988-2020
Files in This Item:
WANG-AUSTIN-THESIS.pdf (2.59 MB, Adobe PDF)
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.