Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01n296x233f
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: Russakovsky, Olga
dc.contributor.author: Meister, Nicole
dc.date.accessioned: 2022-08-12T14:48:59Z
dc.date.available: 2022-08-12T14:48:59Z
dc.date.created: 2022-04-18
dc.date.issued: 2022-08-12
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01n296x233f
dc.description.abstract: As machine learning and computer vision are increasingly applied to high-impact, high-risk domains, numerous new methods have aimed at making AI models more human-interpretable and fair. This two-part thesis explores these two areas within computer vision: interpretability and fairness. The first part proposes HIVE (Human Interpretability of Visual Explanations), a novel human evaluation framework for evaluating diverse interpretability methods; to the best of our knowledge, this is the first work of its kind. We conduct an IRB-approved human study evaluating ProtoPNet, a computer vision model that outputs explanations for its decisions. Based on the results of this study, we find that participants struggle to distinguish between correct and incorrect explanations, and that the ProtoPNet model's concept of similarity does not align with human judgements of similarity. Furthermore, the results suggest that explanations (regardless of whether they are actually correct) engender human trust, yet are not distinct enough for users to distinguish between correct and incorrect predictions. Our work underscores the need for falsifiable explanations and for an evaluation framework that assesses the desiderata of interpretability methods fairly. We hope our framework for human evaluation helps shift the field's objective from focusing solely on method development to also prioritizing the development of high-quality evaluation metrics. The second part of this thesis centers on understanding how gender bias manifests in computer vision models by investigating what gender cues exist within large-scale visual datasets, where a gender cue is defined as any information in the image that is relevant (i.e., learnable by a modern image classification model) and actionable (i.e., has an interpretable human corollary). Through our analyses, we find that gender cues are ubiquitous in COCO and OpenImages, occurring everywhere from low-level information (e.g., the mean value of the color channels) to the higher-level composition of the image (e.g., the pose and location of people). [en_US]
dc.format.mimetype: application/pdf
dc.language.iso: en [en_US]
dc.title: Towards Interpretable and Fair Computer Vision [en_US]
dc.type: Princeton University Senior Theses
pu.date.classyear: 2022 [en_US]
pu.department: Electrical and Computer Engineering [en_US]
pu.pdf.coverpage: SeniorThesisCoverPage
pu.contributor.authorid: 920209595
pu.certificate: Robotics & Intelligent Systems Program [en_US]
pu.mudd.walkin: No [en_US]
Appears in Collections: Electrical and Computer Engineering, 1932-2023
Robotics and Intelligent Systems Program

Files in This Item:
File: MEISTER-NICOLE-THESIS.pdf (6.81 MB, Adobe PDF; request a copy)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.