Title: Contextual Bias and Interpretability in Image Classification
Authors: Zhang, Sharon
Advisors: Russakovsky, Olga
Department: Mathematics
Certificate Program: Applications of Computing Program
Class Year: 2021
Abstract: Visual classification is a foundational computer vision problem that appears in image search, healthcare, online content tagging, and more. In their paper "Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias," Singh et al. point out the dangers of contextual bias in the visual recognition datasets commonly used to train classification models. They propose two methods, the CAM-based and feature-split models, to better recognize an object or attribute in the absence of its typical context, and show that these methods maintain competitive within-context accuracy. To verify their performance, we attempt to reproduce all 12 tables in the original paper, including those in the appendix. We also conduct additional experiments to better understand the proposed methods, including increasing the regularization in the CAM-based model and removing the weighted loss in the feature-split model. This work was done as part of the 2020 Machine Learning Reproducibility Challenge with the Princeton Visual AI Lab team.

Inspired by the challenges of identifying and understanding contextual bias, we also investigate various interpretability methods for answering the following question: what does a classifier learn to see? Identifying the information that a classifier leverages to make a particular decision is an important sanity check toward deploying trustworthy and accurate models. Many existing methods can show *where* a model is looking, but not necessarily *what* it is looking at. Using the expressive power of generative adversarial networks (GANs), we explore one possible approach to this question. With facial attribute classifiers as our black-box models, we visualize the facial features that contribute to positive and negative predictions by these classifiers. These visualizations also help us better understand existing biases within certain face datasets.
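Note: The class activation map (CAM) named in the abstract is the building block of the "CAM-based" method. The following is a minimal sketch of computing a CAM for a pooled CNN classifier; the model, layer names, and dummy input are illustrative assumptions and are not taken from the thesis or the reproduced paper.

```python
# Minimal CAM sketch (Zhou et al., 2016): weight the final spatial feature
# maps by the classifier weights of a target class, then sum over channels.
# Assumes a ResNet-50 (global average pooling + fc head) for illustration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

features = {}
def hook(module, inputs, output):
    features["maps"] = output.detach()  # (1, C, H, W) activations

model.layer4.register_forward_hook(hook)

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    logits = model(image)
target_class = logits.argmax(dim=1).item()

# CAM for the predicted class: sum_k w_{c,k} * feature_map_k.
weights = model.fc.weight[target_class]                      # (C,)
cam = (weights[:, None, None] * features["maps"][0]).sum(0)  # (H, W)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
print(cam.shape)  # coarse spatial map of where the classifier is looking
```

In the setting the abstract describes, a map like this highlights *where* the model attends, which is also why the second part of the thesis turns to GAN-based visualizations to probe *what* the model is looking at.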
URI: http://arks.princeton.edu/ark:/88435/dsp01xd07gw782
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Mathematics, 1934-2023

Files in This Item:
File: ZHANG-SHARON-THESIS.pdf
Size: 2.71 MB
Format: Adobe PDF
Access: Request a copy
