Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01zk51vm03s
Title: Learning Through Social Interactions and Learning to Socially Interact in Multi-Agent Learning
Authors: Madhushani, Udari
Advisors: Leonard, Naomi
Contributors: Mechanical and Aerospace Engineering Department
Keywords: Artificial Intelligence
Efficient Communication
Machine Learning
Multi-agent Learning
Reinforcement Learning
Social Dilemmas
Subjects: Computer science
Mechanical engineering
Computer engineering
Issue Date: 2023
Publisher: Princeton, NJ : Princeton University
Abstract: The rapid integration of AI agents into society underscores the need for a deeper understanding of how these agents can benefit from social interactions and develop collective intelligence. Studies of cultural evolution have emphasized the importance of cultural transmission of knowledge and intelligence across generations, highlighting that social interactions play a crucial role in a group's ability to solve complex problems and make optimal decisions. Humans are remarkable at learning through social interactions: we possess an innate ability to seamlessly perceive social interactions, to acquire and transmit knowledge through them, and to pass cognitive capabilities and knowledge across generations. A natural question is how we can embed these capabilities in AI agents. As a step towards answering this question, this dissertation investigates two main research questions: (1) how AI agents can learn to communicate effectively with other agents, and (2) how AI agents can enhance their ability to generalize or adapt to novel partners/opponents through social interactions. The first section of this dissertation focuses on developing methodologies that facilitate effective communication among AI agents under various communication constraints. We specifically examine communication in sequential decision-making tasks in uncertain environments, where the primary challenge lies in balancing exploration and exploitation to achieve optimal performance. To tackle this challenge, we propose methodologies that enable efficient communication and decision-making among agents, taking into account the intricacies of the problem domain, such as communication costs, different communication network structures, and agent-specific probabilistic communication constraints. Further, we investigate the role of agent heterogeneity in individual and group performance and develop methods that leverage heterogeneity to improve performance. The second section addresses generalization in multi-agent AI. We investigate how agents can adapt their policies to collaborate with novel agents they have not previously encountered, in tasks that require coordination and cooperation among agents to achieve optimal outcomes. We introduce new techniques that enable agents to learn and adapt their strategies to novel partners/opponents, fostering improved cooperation and coordination among AI agents. We investigate how heterogeneous social preferences of agents lead to behavioral diversity, and how learning a best response to diverse policies can lead to better generalization. In exploring these research areas, this dissertation aims to enrich our understanding of how AI agents can effectively collaborate in complex social scenarios, thereby contributing to the advancement of artificial intelligence.
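For intuition, the following is a minimal sketch of the kind of setting described in the first section: multi-armed bandit agents balancing exploration and exploitation while probabilistically sharing reward observations over a communication graph. The UCB index, the complete graph, the fixed sharing probability, and all parameter values are illustrative assumptions for this sketch, not the dissertation's actual algorithms.

    import numpy as np

    # Illustrative sketch (assumed setup, not the dissertation's method):
    # UCB-style bandit agents that share each reward observation with every
    # neighbour on a communication graph with a fixed probability.
    rng = np.random.default_rng(0)

    n_agents, n_arms, horizon = 4, 5, 2000
    arm_means = rng.uniform(0.2, 0.8, size=n_arms)               # unknown to the agents
    adjacency = np.ones((n_agents, n_agents)) - np.eye(n_agents)  # assumed complete graph
    share_prob = 0.3                                              # assumed per-link sharing probability

    counts = np.zeros((n_agents, n_arms))   # observations available to each agent
    sums = np.zeros((n_agents, n_arms))     # cumulative rewards available to each agent

    for t in range(1, horizon + 1):
        for i in range(n_agents):
            # UCB rule: sample any unseen arm first, then maximize the
            # empirical mean plus an exploration bonus.
            if np.any(counts[i] == 0):
                arm = int(np.argmin(counts[i]))
            else:
                ucb = sums[i] / counts[i] + np.sqrt(2 * np.log(t) / counts[i])
                arm = int(np.argmax(ucb))
            reward = float(rng.random() < arm_means[arm])  # Bernoulli reward

            # The acting agent always records its own observation.
            counts[i, arm] += 1
            sums[i, arm] += reward

            # Each neighbour receives the observation only with probability
            # share_prob, modelling a probabilistic communication constraint.
            for j in range(n_agents):
                if adjacency[i, j] and rng.random() < share_prob:
                    counts[j, arm] += 1
                    sums[j, arm] += reward

    print("estimated means (agent 0):", np.round(sums[0] / counts[0], 2))
    print("true means:               ", np.round(arm_means, 2))

Raising share_prob in this sketch gives each agent more observations per round and typically tightens its estimates faster, which is one simple way to see the trade-off between communication cost and learning performance that the abstract refers to.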
URI: http://arks.princeton.edu/ark:/88435/dsp01zk51vm03s
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Mechanical and Aerospace Engineering

Files in This Item:
File: Madhushani_princeton_0181D_14630.pdf
Size: 3.66 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.