Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01q524jr918
Title: Understanding and Measuring Privacy Risks in Machine Learning
Authors: Song, Liwei
Advisors: Mittal, Prateek
Contributors: Electrical Engineering Department
Keywords: adversarial examples and defenses
backdoor attacks and defenses
machine learning
membership inference attacks
privacy risks
trustworthy machine learning
Subjects: Computer engineering
Electrical engineering
Computer science
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University
Abstract: Machine learning models have achieved great success and have been deployed prominently in many real-world applications such as face recognition, voice assistants, and recommendation systems. Central to all these machine learning systems is users’ data. However, the sensitive nature of individual users’ data has also raised privacy concerns. A recent thread of research has shown that a malicious adversary can infer private information about users’ data by querying target machine learning models. In this dissertation, we aim to thoroughly understand and measure privacy risks in machine learning, with a specific focus on membership inference attacks, in which the adversary guesses whether an input sample was used to train the model or not. We first provide a systematic evaluation of membership inference privacy risks by designing benchmark attack algorithms to estimate aggregate privacy risks and proposing a fine-grained privacy analysis to estimate each individual sample’s privacy risk. Next, we explore the impact on privacy risks of robust training algorithms, which are designed to increase model robustness against input perturbations. We find that these robust training algorithms indeed make machine learning models more vulnerable to membership inference attacks, highlighting the importance of jointly thinking about privacy and robustness in machine learning. Then, we focus on machine unlearning, which has recently been proposed to protect users’ data privacy by requiring the machine learning service provider to remove a user’s data from trained models upon the user’s deletion request. Based on backdooring machine learning models by editing some of the training samples, we propose the first verification mechanism that enables individual users to verify whether the service provider follows the deletion request or not. Finally, we present how to leverage privacy attacks to quantify the practical privacy risks of machine learning models trained with theoretical upper-bound guarantees on privacy leakage. In summary, this dissertation helps the research community to better understand privacy risks in machine learning and provides directions toward designing more private and trustworthy machine learning models.
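
For readers unfamiliar with membership inference, the following is a minimal sketch of a confidence-threshold baseline attack: the adversary flags a sample as a training member when the target model's top predicted probability is very high. The model, dataset, and threshold below are illustrative assumptions for demonstration only, not the dissertation's benchmark suite.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train a small target model; members (train set) tend to receive
# higher-confidence predictions than non-members (test set).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
target.fit(X_train, y_train)

def infer_membership(model, X, threshold=0.99):
    """Guess 'member' when the model's top softmax confidence exceeds the threshold."""
    return model.predict_proba(X).max(axis=1) >= threshold

# Compare how often members vs. non-members are flagged; a large gap
# indicates membership inference leakage.
member_rate = infer_membership(target, X_train).mean()
nonmember_rate = infer_membership(target, X_test).mean()
print(f"flagged as members: train={member_rate:.2f}, test={nonmember_rate:.2f}")

Stronger attacks studied in this line of work (e.g., using per-class thresholds or prediction-entropy statistics) follow the same pattern but replace the confidence score with a more discriminative signal.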
URI: http://arks.princeton.edu/ark:/88435/dsp01q524jr918
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering

Files in This Item:
File: Song_princeton_0181D_13905.pdf (20.6 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.