Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01r781wk41d
Full metadata record
DC Field                     Value
dc.contributor.advisor       Jha, Niraj K
dc.contributor.author        Huang, Linhui
dc.contributor.other         Computer Science Department
dc.date.accessioned          2024-08-08T18:13:13Z
dc.date.created              2024-01-01
dc.date.issued               2024
dc.identifier.uri            http://arks.princeton.edu/ark:/88435/dsp01r781wk41d
dc.description.abstract      Deep neural networks exhibit remarkable performance, yet their black-box nature limits their utility in fields such as healthcare, where interpretability is crucial. Existing explainability approaches often sacrifice accuracy and lack quantifiable measures of prediction uncertainty. In this study, we introduce Conformal Prediction for Interpretable Neural Networks (CONFINE), a versatile framework that generates prediction sets with statistically robust uncertainty estimates instead of point predictions, to enhance model transparency and reliability. CONFINE not only provides example-based explanations and confidence estimates for individual predictions but also boosts accuracy by up to 3.6%. We define a new metric, correct efficiency, to evaluate the fraction of prediction sets that contain exactly the correct label, and show that CONFINE achieves a correct efficiency up to 3.3% higher than the original accuracy, matching or exceeding prior methods. CONFINE's marginal and class-conditional coverages attest to its validity across tasks ranging from image classification to language understanding. Adaptable to any pre-trained classifier, CONFINE marks a significant advance towards transparent and trustworthy deep learning applications in critical domains.
dc.format.mimetype           application/pdf
dc.language.iso              en
dc.publisher                 Princeton, NJ : Princeton University
dc.subject                   Conformal Prediction
dc.subject                   Explainable AI
dc.subject                   Interpretability
dc.subject                   Machine Learning
dc.subject                   Neural Networks
dc.subject                   Uncertainty Estimation
dc.subject.classification    Computer science
dc.title                     CONFINE: Conformal Prediction for Interpretable Neural Networks
dc.type                      Academic dissertations (M.S.E.)
pu.embargo.lift              2025-06-06
pu.embargo.terms             2025-06-06
pu.date.classyear            2024
pu.department                Computer Science
Appears in Collections: Computer Science, 2023
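The abstract describes prediction sets built by conformal prediction and a "correct efficiency" metric (the fraction of prediction sets containing exactly the correct label). As an illustrative sketch only, not CONFINE's own nonconformity score or procedure (those are defined in the thesis itself), a standard split conformal method and the correct-efficiency computation might look like this; all function names and the softmax-based score are assumptions:

```python
import numpy as np

def conformal_prediction_sets(cal_scores, test_scores, cal_labels, alpha=0.1):
    """Split conformal prediction: build prediction sets from softmax scores.

    cal_scores:  (n_cal, n_classes) softmax outputs on a held-out calibration set
    test_scores: (n_test, n_classes) softmax outputs on test points
    cal_labels:  (n_cal,) true calibration labels
    alpha:       target miscoverage rate (0.1 gives 90% marginal coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the softmax probability of the true class.
    # (A common choice; CONFINE's actual score may differ.)
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample correction (n + 1 in the numerator).
    q = np.quantile(nonconf, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # A class enters the prediction set if its nonconformity is within the threshold.
    return (1.0 - test_scores) <= q  # boolean mask of shape (n_test, n_classes)

def correct_efficiency(pred_sets, test_labels):
    """Fraction of prediction sets that contain exactly the correct label."""
    sizes = pred_sets.sum(axis=1)
    hits = pred_sets[np.arange(len(test_labels)), test_labels]
    return np.mean((sizes == 1) & hits)
```

By this definition, correct efficiency is at most the marginal coverage: a set counted as correct-efficient is a singleton that covers the true label, so the metric rewards sets that are both valid and maximally informative.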

Files in This Item:
This content is embargoed until 2025-06-06. For questions about theses and dissertations, please contact the Mudd Manuscript Library. For questions about research datasets, as well as other inquiries, please contact the DataSpace curators.


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.