Implicit adversarial data augmentation and robustness with Noise-based Learning

Authors

Abstract

We introduce a Noise-based Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks. We find that learning random noise, introduced with the input and trained with the same loss function used during posterior maximization, improves a model's adversarial resistance. We show that the learnt noise performs implicit adversarial data augmentation, boosting a model's adversary generalization capability. We evaluate our approach's efficacy and provide a simplistic visualization tool for understanding adversarial data, using Principal Component Analysis. We conduct comprehensive experiments on prevailing benchmarks such as MNIST, CIFAR10, CIFAR100, and Tiny ImageNet, and show that NoL performs remarkably well against a wide range of attacks. Furthermore, combining NoL with state-of-the-art defense mechanisms, such as adversarial training, consistently outperforms prior techniques in both white-box and black-box attacks.
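The core idea of jointly learning an input-noise term with the network weights under a single loss can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear softmax classifier, the shared additive noise template, and the choice to update the noise with the same gradient-descent rule as the weights are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two Gaussian blobs in 2-D (assumption; not the paper's benchmarks).
n, d = 200, 2
X = np.vstack([rng.normal(-1.5, 1.0, (n, d)), rng.normal(1.5, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = rng.normal(0, 0.1, (d, 2))
b = np.zeros(2)
noise = rng.normal(0, 0.1, d)   # learnable noise template, shared across inputs

lr = 0.1
for step in range(300):
    Xp = X + noise              # noise is "introduced with the input"
    p = softmax(Xp @ W + b)
    onehot = np.eye(2)[y]
    g = (p - onehot) / len(X)   # dL/dlogits for mean cross-entropy
    gW = Xp.T @ g
    gb = g.sum(axis=0)
    gN = (g @ W.T).sum(axis=0)  # gradient w.r.t. the shared noise template
    W -= lr * gW
    b -= lr * gb
    noise -= lr * gN            # same loss, same descent rule as the weights (assumption)

acc = (softmax((X + noise) @ W + b).argmax(axis=1) == y).mean()
print(f"train accuracy with learnt noise: {acc:.2f}")
```

The single point the sketch makes is structural: the noise term receives gradients from the very same cross-entropy loss that trains the weights, which is the "same loss function used during posterior maximization" described in the abstract.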


Similar Articles

Simultaneous Learning and Covering with Adversarial Noise

We study simultaneous learning and covering problems: submodular set cover problems that depend on the solution to an active (query) learning problem. The goal is to jointly minimize the cost of both learning and covering. We extend recent work in this setting to allow for a limited amount of adversarial noise. Certain noisy query learning problems are a special case of our problem. Crucial to ...


Data Augmentation Generative Adversarial Networks

Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively. However, standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of...
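The "standard data augmentation" referenced above (random flips and crops, as popularized by Krizhevsky et al., 2012) can be sketched in a few lines of NumPy; the image size and pad width here are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(img, pad=2):
    """Random horizontal flip, then a random crop after zero-padding."""
    h, w = img.shape
    if rng.random() < 0.5:
        img = img[:, ::-1]                     # horizontal flip
    padded = np.pad(img, pad)                  # zero-pad all four sides
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]  # crop back to original size

img = rng.random((8, 8))
out = augment(img)
print(out.shape)
```

Such transforms only produce small geometric variants of existing samples, which is exactly the "limited plausible alternative data" limitation the abstract contrasts with generative augmentation.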


CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training

We propose an adversarial training procedure for learning a causal implicit generative model for a given causal graph. We show that adversarial training can be used to learn a generative model with true observational and interventional distributions if the generator architecture is consistent with the given causal graph. We consider the application of generating faces based on given binary labe...


Robustness of classifiers: from adversarial to random noise

Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes. We pr...
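The gap between worst-case and random noise that this line of work studies is easy to see for a linear classifier: a perturbation aligned against the weight vector moves the decision score far more than an isotropic random perturbation of the same norm, which shrinks by roughly a factor of sqrt(d). A hedged NumPy sketch (toy linear model, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(2)

d = 100
w = rng.normal(size=d)
w /= np.linalg.norm(w)   # unit-norm linear classifier f(x) = w @ x
x = w * 2.0              # a point classified positive with margin 2
eps = 1.0                # L2 perturbation budget

# Worst-case (adversarial) perturbation: step directly against the weights.
score_adv = w @ (x - eps * w)

# Random perturbation of the same norm: barely moves the score.
r = rng.normal(size=d)
r = eps * r / np.linalg.norm(r)
score_rand = w @ (x + r)

print(f"clean {w @ x:.2f}, adversarial {score_adv:.2f}, random {score_rand:.2f}")
```

With budget 1 the adversarial step eats half the margin, while the random step of identical norm typically shifts the score by only about eps/sqrt(d).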


Deep Adversarial Robustness

Deep learning has recently contributed to learning state-of-the-art representations in service of various image recognition tasks. Deep learning uses cascades of many layers of nonlinear processing units for feature extraction and transformation. Recently, researchers have shown that deep learning architectures are particularly vulnerable to adversarial examples, inputs to machine learning mode...



Journal

Journal title: Neural Networks

Year: 2021

ISSN: 1879-2782, 0893-6080

DOI: https://doi.org/10.1016/j.neunet.2021.04.008