Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01wh246w37v
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Heide, Felix | -
dc.contributor.author | Zhang, Yuxuan | -
dc.contributor.other | Computer Science Department | -
dc.date.accessioned | 2023-03-07T20:59:07Z | -
dc.date.available | 2023-03-07T20:59:07Z | -
dc.date.created | 2022-01-01 |
dc.date.issued | 2023 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01wh246w37v | -
dc.description.abstract | Existing neural networks for computer vision tasks are vulnerable to adversarial attacks: adding imperceptible perturbations to the input images can fool these models into making a false prediction on an image that was correctly predicted without the perturbation. Various defenses have proposed image-to-image mappings, either including these perturbations in the training process or removing them in a preprocessing step. In doing so, existing methods often ignore that the natural RGB images in today’s datasets are not captured directly but, in fact, recovered from RAW color filter array captures that are subject to various degradations during capture. In this work, we exploit this RAW data distribution as an empirical prior for adversarial defense. Specifically, we propose a model-agnostic adversarial defense method that maps the input RGB images to Bayer RAW space and back to output RGB using a learned camera image signal processing (ISP) pipeline to eliminate potential adversarial patterns. The proposed method acts as an off-the-shelf preprocessing module and, unlike model-specific adversarial training methods, does not require adversarial images to train. As a result, the method generalizes to unseen tasks without additional retraining. Experiments on large-scale datasets (e.g., ImageNet, COCO) for different vision tasks (e.g., classification, semantic segmentation, object detection) validate that the method significantly outperforms existing methods across task domains. |
dc.format.mimetype | application/pdf |
dc.language.iso | en | en_US
dc.publisher | Princeton, NJ : Princeton University | en_US
dc.subject | Adversarial Defense |
dc.subject | Computer Vision |
dc.subject | Machine Learning |
dc.subject | Neural Networks |
dc.subject.classification | Computer science |
dc.title | Defending Against Adversarial Attacks with Camera Image Pipelines | en_US
pu.date.classyear | 2023 |
pu.department | Computer Science |
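
The abstract describes the defense as a single preprocessing step: re-render each RGB input through the Bayer RAW domain and back before it reaches the downstream model. Below is a minimal PyTorch sketch of that data flow, under stated assumptions: the thesis recovers RGB with a learned camera ISP, for which a fixed bilinear demosaic stands in here, and the function names (mosaic_rggb, demosaic_bilinear, defend) and the RGGB layout are illustrative choices, not taken from the thesis code.

    # Sketch of the RGB -> Bayer RAW -> RGB preprocessing defense.
    # A fixed bilinear demosaic stands in for the learned camera ISP,
    # so this shows the data flow only, not the trained pipeline.
    import torch
    import torch.nn.functional as F

    def mosaic_rggb(rgb: torch.Tensor) -> torch.Tensor:
        """Project an RGB batch (B, 3, H, W) onto an RGGB Bayer mosaic (B, 1, H, W)."""
        b, _, h, w = rgb.shape
        raw = torch.zeros(b, 1, h, w, dtype=rgb.dtype, device=rgb.device)
        raw[:, 0, 0::2, 0::2] = rgb[:, 0, 0::2, 0::2]  # R sites
        raw[:, 0, 0::2, 1::2] = rgb[:, 1, 0::2, 1::2]  # G sites (even rows)
        raw[:, 0, 1::2, 0::2] = rgb[:, 1, 1::2, 0::2]  # G sites (odd rows)
        raw[:, 0, 1::2, 1::2] = rgb[:, 2, 1::2, 1::2]  # B sites
        return raw

    def demosaic_bilinear(raw: torch.Tensor) -> torch.Tensor:
        """Fill in missing color samples by normalized (bilinear) convolution."""
        b, _, h, w = raw.shape
        # Per-channel masks marking where each color was actually sampled.
        mask = torch.zeros(b, 3, h, w, dtype=raw.dtype, device=raw.device)
        mask[:, 0, 0::2, 0::2] = 1.0  # R sampled
        mask[:, 1, 0::2, 1::2] = 1.0  # G sampled (even rows)
        mask[:, 1, 1::2, 0::2] = 1.0  # G sampled (odd rows)
        mask[:, 2, 1::2, 1::2] = 1.0  # B sampled
        sparse = mask * raw  # scatter the single RAW plane into 3 sparse planes
        kernel = torch.tensor([[1.0, 2.0, 1.0],
                               [2.0, 4.0, 2.0],
                               [1.0, 2.0, 1.0]],
                              dtype=raw.dtype, device=raw.device)
        kernel = kernel.view(1, 1, 3, 3).repeat(3, 1, 1, 1)  # one kernel per plane
        num = F.conv2d(sparse, kernel, padding=1, groups=3)
        den = F.conv2d(mask, kernel, padding=1, groups=3).clamp(min=1e-6)
        return num / den  # weighted average of the sampled neighbors

    def defend(rgb: torch.Tensor) -> torch.Tensor:
        """Re-render an image through the RAW domain before the downstream model."""
        return demosaic_bilinear(mosaic_rggb(rgb)).clamp(0.0, 1.0)

In use, such a module would drop in front of any pretrained model without retraining, e.g. logits = classifier(defend(images)), which matches the abstract's claim of an off-the-shelf, model-agnostic preprocessing step.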
Appears in Collections: Computer Science, 2023

Files in This Item:
File | Description | Size | Format
Zhang_princeton_0181G_14388.pdf | | 17.47 MB | Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.