Title: An Adversarial Fair Autoencoder for Debiased Representations of Data
Abstract: As modern techniques have enabled the application of machine learning across many industries, we have seen substantial gains in predictive accuracy. However, since machine learning models are typically trained on historical data, they risk producing new biased outcomes or perpetuating existing prejudices, especially when much of that data reflects discriminatory behavior. Potential issues range from discrimination in criminal justice to credit scoring to hiring. While much of the prior literature has focused on methods for producing fair decisions, we build an autoencoder that produces fair representations for any task. Specifically, we create a tool that recreates a debiased version of a dataset, which can then be used for multiple downstream tasks. We apply this tool to various datasets, including Princeton University course evaluations, to evaluate how removing a sensitive attribute affects different parts of a representation.
Type of Material: Princeton University Senior Theses
Appears in Collections: Computer Science, 1988-2020
Files in This Item:
HUANG-CHRISTINA-THESIS.pdf, 1.22 MB, Adobe PDF (request a copy)
Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
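The adversarial debiasing idea the abstract describes can be illustrated with a minimal sketch: an encoder learns a latent representation that a decoder can reconstruct the data from, while an adversary tries to predict the sensitive attribute from that representation; the encoder is penalized when the adversary succeeds. Everything below is an illustrative assumption, not the thesis's actual model or data: the linear encoder/decoder, the logistic adversary, the synthetic dataset, and all hyperparameters are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: feature 0 leaks the sensitive attribute s
# (not one of the thesis's datasets).
n, d, k = 512, 6, 3
s = rng.integers(0, 2, size=n).astype(float)        # sensitive attribute
X = rng.normal(size=(n, d))
X[:, 0] = 2.0 * s - 1.0 + 0.1 * rng.normal(size=n)  # leak s into column 0

# Parameters: linear encoder E, linear decoder D, logistic adversary (a, b).
E = rng.normal(scale=0.1, size=(k, d))
D = rng.normal(scale=0.1, size=(d, k))
a = np.zeros(k)
b = 0.0
lr, lam = 0.05, 1.0  # learning rate, adversarial penalty weight (assumed)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def rec_loss(X, E, D):
    # Mean squared reconstruction error of the autoencoder.
    return float(np.mean((X @ E.T @ D.T - X) ** 2))

initial = rec_loss(X, E, D)
for _ in range(300):
    Z = X @ E.T             # latent codes, shape (n, k)
    Xh = Z @ D.T            # reconstructions
    R = Xh - X              # residuals
    p = sigmoid(Z @ a + b)  # adversary's estimate of P(s = 1 | z)

    # Gradients of the reconstruction loss w.r.t. decoder and latent codes.
    gXh = 2.0 * R / (n * d)
    gD = gXh.T @ Z
    gZ_rec = gXh @ D

    # Gradients of the adversary's cross-entropy loss.
    gu = (p - s) / n
    ga = Z.T @ gu
    gb = float(gu.sum())
    gZ_adv = np.outer(gu, a)

    # Encoder descends the reconstruction loss and ascends the adversary's
    # loss (gradient-reversal style); the adversary descends its own loss.
    gE = (gZ_rec - lam * gZ_adv).T @ X
    E -= lr * gE
    D -= lr * gD
    a -= lr * ga
    b -= lr * gb

final = rec_loss(X, E, D)
```

The competing objectives are the key design choice: reconstruction pressure keeps the representation useful for downstream tasks, while the adversarial term pushes information about the sensitive attribute out of the latent codes.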