Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp013197xq182
Title: | Towards Robust Models in Deep Learning: Regularizing Neural Networks and Generative Models |
Authors: | Bao, Ruying |
Advisors: | E, Weinan |
Contributors: | Applied and Computational Mathematics Department |
Keywords: | Adversarial Defense Methods; Deep Learning; Entropy-based objective functions; Generative Model; Reconstructive Defense Methods; Regularizing Neural Networks |
Subjects: | Applied mathematics; Computer science |
Issue Date: | 2021 |
Publisher: | Princeton, NJ : Princeton University |
Abstract: | Deep neural networks are widely used for signal processing across a broad range of areas, including computer vision, natural language processing, and autonomous driving, owing to their strong performance. However, neural networks are easily fooled by adversarial attacks and are highly sensitive to certain data-related scenarios, such as imbalanced classes and outliers. In this thesis, we focus on enhancing the robustness of deep neural networks under different data distributions. In the first part, we consider datasets whose distributions are naturally biased, whether through data collection or the nature of the data itself. We define novel information-entropy-based classification loss functions (entropy weight and entropy noise) that distinguish the difficulty of each sample's prediction, either by weighting the cross-entropy loss or by introducing stochastic noise on top of it. To evaluate each loss function, we test them on noisy and imbalanced datasets crafted from MNIST. To illustrate their effectiveness in realistic scenarios, we show improvements on computer vision and natural language understanding tasks over the corresponding state-of-the-art (SOTA) models; models trained with entropy-based loss functions surpass the SOTA baselines. Deep neural networks have also been shown to be vulnerable to adversarial attacks, in which small perturbations intentionally added to the original inputs fool the classifier. In the second part, we propose Path-Norm regularization to improve the robustness of neural networks against adversarial attacks in various Lp norms. With Path-Norm regularization, models achieve performance comparable to SOTA defense methods and outperform them when attacks and training samples come from different Lp spaces.
We also introduce Featurized Bidirectional Generative Adversarial Networks (FBGAN), which extract the semantic features of an input and filter out non-semantic perturbations. FBGAN is pre-trained on clean data in an unsupervised manner, adversarially learning a bidirectional mapping between the high-dimensional data space and a low-dimensional semantic space. Through this bidirectional mapping, adversarial data can be reconstructed as denoised data, which can then be fed into any pre-trained classifier. We empirically demonstrate the quality of the reconstructed images and the effectiveness of the defense. |
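The entropy-weighting idea in the first part — scaling each sample's cross-entropy term by how uncertain the model's prediction is — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy formulation for intuition only; `entropy_weighted_loss` and its exact weighting scheme are placeholders, not the thesis's definitions.

```python
import numpy as np

def entropy_weighted_loss(probs, labels, eps=1e-12):
    """Illustrative sketch: weight each sample's cross-entropy term by the
    entropy of its predicted distribution, so uncertain (harder) predictions
    contribute more to the loss. The thesis's exact formulation may differ.

    probs:  (N, C) predicted class probabilities
    labels: (N,)   integer class labels
    """
    n = probs.shape[0]
    # Per-sample cross entropy: -log p(correct class)
    ce = -np.log(probs[np.arange(n), labels] + eps)
    # Per-sample prediction entropy: -sum_c p_c * log p_c
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    # Entropy-weighted mean loss
    return np.mean(ent * ce)
```

A confidently correct prediction (e.g. `[0.9, 0.1]`) receives a small weight, while an uncertain one (e.g. `[0.5, 0.5]`) is up-weighted, which matches the stated goal of distinguishing sample difficulty on imbalanced or noisy data.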
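The reconstruction defense described above (map the input into the low-dimensional semantic space, reconstruct it, then classify the denoised result) can be sketched generically. Here `encode`, `decode`, `classify`, and `defend_and_classify` are hypothetical placeholders; in FBGAN the bidirectional mapping is learned adversarially on clean data, whereas the toy mapping below merely rounds away small perturbations to show the pipeline's shape.

```python
import numpy as np

def defend_and_classify(x, encode, decode, classify):
    """Sketch of a reconstructive defense: project the (possibly adversarial)
    input into a semantic space, reconstruct it, and classify the result."""
    z = encode(x)            # data space -> low-dimensional semantic features
    x_denoised = decode(z)   # semantic features -> reconstructed data
    return classify(x_denoised)

# Toy stand-ins: rounding plays the role of the semantic bottleneck that
# discards small, non-semantic perturbations.
encode = np.round
decode = lambda z: z
classify = lambda v: int(v > 0.5)
```

With these stand-ins, an adversarially perturbed input such as `1.3` is reconstructed to `1.0` before classification, so any pre-trained classifier sees the denoised data rather than the perturbation.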
URI: | http://arks.princeton.edu/ark:/88435/dsp013197xq182 |
Alternate format: | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu |
Type of Material: | Academic dissertations (Ph.D.) |
Language: | en |
Appears in Collections: | Applied and Computational Mathematics |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Bao_princeton_0181D_13702.pdf | | 12.98 MB | Adobe PDF | View/Download
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.