Title: Defending Against Adversarial Attacks with Camera Image Pipelines
Authors: Zhang, Yuxuan
Advisors: Heide, Felix
Department: Computer Science
Class Year: 2023
Publisher: Princeton, NJ : Princeton University
Abstract: Existing neural networks for computer vision tasks are vulnerable to adversarial attacks: adding imperceptible perturbations to an input image can fool these models into making a false prediction on an image that would otherwise be classified correctly. Various defense methods have proposed image-to-image mappings, either including these perturbations in the training process or removing them in a preprocessing step. In doing so, existing methods often ignore that the natural RGB images in today’s datasets are not captured directly but, in fact, recovered from RAW color filter array captures that are subject to various degradations during capture. In this work, we exploit this RAW data distribution as an empirical prior for adversarial defense. Specifically, we propose a model-agnostic adversarial defense method, which maps the input RGB images to Bayer RAW space and back to output RGB using a learned camera image signal processing (ISP) pipeline to eliminate potential adversarial patterns. The proposed method acts as an off-the-shelf preprocessing module and, unlike model-specific adversarial training methods, does not require adversarial images to train. As a result, the method generalizes to unseen tasks without additional retraining. Experiments on large-scale datasets (e.g., ImageNet, COCO) for different vision tasks (e.g., classification, semantic segmentation, object detection) validate that the method significantly outperforms existing methods across task domains.
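The round trip the abstract describes (RGB → Bayer RAW → RGB) can be illustrated with a toy sketch. Note the thesis learns both mappings with a camera ISP pipeline; here, purely for illustration, the learned inverse mapping is replaced by a fixed RGGB Bayer mosaic and the learned ISP by a naive per-block demosaic. All function names are hypothetical and not from the thesis.

```python
import numpy as np

def rgb_to_bayer(rgb):
    """Sample an RGGB Bayer mosaic from an (H, W, 3) RGB image (H, W even).

    Stand-in for the thesis's learned RGB-to-RAW mapping.
    """
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at top-left of each 2x2 block
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at top-right
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at bottom-left
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at bottom-right
    return raw

def naive_demosaic(raw):
    """Reconstruct RGB from the RGGB mosaic by averaging each 2x2 block.

    Stand-in for the thesis's learned ISP (RAW-to-RGB) pipeline.
    """
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    small = np.stack([r, g, b], axis=-1)          # (H/2, W/2, 3)
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)  # back to (H, W, 3)

def defend(rgb):
    """Model-agnostic preprocessing: project through RAW space and back.

    The round trip discards high-frequency content that cannot survive
    the mosaic/demosaic cycle, which is where adversarial patterns live.
    """
    return naive_demosaic(rgb_to_bayer(rgb))
```

Usage: `defend(image)` is applied to the input before any downstream classifier or detector, so no retraining of the task model is required, which is the sense in which the method is model-agnostic.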
Language: en
Appears in Collections:Computer Science, 2023

Files in This Item:
Zhang_princeton_0181G_14388.pdf (17.47 MB, Adobe PDF)

Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.