Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01b5644v80h
Full metadata record
dc.contributor.advisor: Russakovsky, Olga
dc.contributor.author: Ramaswamy, Vikram V
dc.contributor.other: Computer Science Department
dc.date.accessioned: 2023-07-06T20:24:01Z
dc.date.available: 2023-07-06T20:24:01Z
dc.date.created: 2023-01-01
dc.date.issued: 2023
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01b5644v80h
dc.description.abstract: Over the past decade, the rapid increase in the capabilities of computer vision models has led to their deployment in a variety of real-world applications, from self-driving cars to medical diagnosis. However, there is increasing concern about the fairness and transparency of these models. In this thesis, we tackle the issue of bias within these models along two different axes. First, we consider the datasets these models are trained on, and use two different methods to create more balanced training data. We first create a balanced synthetic dataset by sampling strategically from the latent space of a generative network. We then explore creating a dataset through a method other than scraping the internet: we solicit images from workers around the world, producing a dataset that is balanced across geographical regions. Both techniques are shown to help train models with less bias. Second, we consider methods to improve the interpretability of these models, which can in turn reveal potential biases within them. We investigate a class of interpretability methods, called concept-based methods, that explain a model's outputs in terms of human-understandable semantic concepts. We demonstrate the need for more careful development of both the datasets used to learn the explanations and the concepts used within them. We then construct a new method that lets users select a trade-off between the understandability and the faithfulness of the explanation. Finally, we discuss how methods that completely explain a model could be developed, and provide heuristics for doing so.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.subject: Concept-based explanations
dc.subject: Fairness in ML systems
dc.subject: Interpretability of ML systems
dc.subject.classification: Computer science
dc.title: Tackling Bias within Computer Vision Models
dc.type: Academic dissertations (Ph.D.)
pu.date.classyear: 2023
pu.department: Computer Science
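
Two of the techniques named in the abstract lend themselves to short illustrations. First, balancing a synthetic dataset by sampling from a generative network's latent space: the sketch below uses simple rejection sampling over a binary attribute. It is a minimal, hedged sketch rather than the thesis's actual method, and generator and attribute_classifier are hypothetical stand-ins for a pretrained generative network and a pretrained attribute predictor.

    import torch

    def sample_balanced(generator, attribute_classifier, n_per_group,
                        latent_dim=512, batch=64):
        """Collect n_per_group synthetic images for each value of a binary
        attribute by rejection-sampling the generator's latent space."""
        groups = {0: [], 1: []}
        with torch.no_grad():
            while min(len(g) for g in groups.values()) < n_per_group:
                z = torch.randn(batch, latent_dim)   # random latent codes
                images = generator(z)                # synthetic images
                # Assumes the classifier returns per-image probabilities in [0, 1]
                preds = (attribute_classifier(images) > 0.5).long().view(-1)
                for image, p in zip(images, preds.tolist()):
                    if len(groups[p]) < n_per_group:
                        groups[p].append(image)
        data = torch.stack(groups[0] + groups[1])    # equal counts per group
        labels = torch.tensor([0] * n_per_group + [1] * n_per_group)
        return data, labels

Second, the abstract describes a method that lets users trade understandability against faithfulness in a concept-based explanation. One generic way such a knob can work, shown below purely as an assumption and not as the thesis's method, is a sparsity penalty on a linear surrogate fit over concept scores: a larger penalty selects fewer concepts (more understandable) at the cost of a worse fit to the model's outputs (less faithful).

    import numpy as np
    from sklearn.linear_model import Lasso

    def concept_explanation(concept_scores, model_outputs, sparsity):
        """concept_scores: (n_samples, n_concepts); model_outputs: (n_samples,).
        Larger `sparsity` keeps fewer concepts: simpler but less faithful."""
        fit = Lasso(alpha=sparsity).fit(concept_scores, model_outputs)
        used = np.flatnonzero(fit.coef_)             # concepts with nonzero weight
        # R^2 of the surrogate as a crude faithfulness score
        faithfulness = fit.score(concept_scores, model_outputs)
        weights = {int(c): float(fit.coef_[c]) for c in used}
        return weights, faithfulness

Sweeping the sparsity value and reporting both the number of selected concepts and the faithfulness score traces out the trade-off curve a user would choose from.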
Appears in Collections: Computer Science

Files in This Item:
File: Ramaswamy_princeton_0181D_14559.pdf (44.57 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.