Robust sensible adversarial learning of deep neural networks for image classification

Authors

Abstract

The idea of robustness is central and critical to modern statistical analysis. However, despite the recent advances of deep neural networks (DNNs), many studies have shown that DNNs are vulnerable to adversarial attacks. Making imperceptible changes to an image can cause DNN models to make the wrong classification with high confidence, such as classifying a benign mole as a malignant tumor or a stop sign as a speed limit sign. A trade-off between standard accuracy and adversarial robustness is common for such models. In this paper, we introduce sensible adversarial learning and demonstrate the synergistic effect between the pursuits of standard natural accuracy and robustness. Specifically, we define a sensible adversary, which is useful for learning a robust model while keeping high natural accuracy. We theoretically establish that the Bayes classifier is the most robust multiclass classifier under the 0-1 loss with sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation. We apply our method to a large-scale handwritten digit dataset, called MNIST, and an object recognition dataset of colored images, called CIFAR10. We have performed an extensive comparative study to compare our method with other competitive methods. Our experiments empirically demonstrate that our method is not sensitive to its hyperparameter and does not collapse even with a small model capacity while promoting robustness against various attacks. Our software is available as a Python package at https://github.com/JungeumKim/SENSE.
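The abstract refers to adversarial attacks in which imperceptible input changes flip a model's prediction. As a minimal sketch of that idea only (the classical fast-gradient-sign perturbation on a toy logistic model, not the paper's sensible-adversary construction; the weights and epsilon below are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """Fast-gradient-sign perturbation for a logistic model p = sigmoid(w.x + b).

    The gradient of the logistic loss with respect to the input x is (p - y) * w;
    FGSM moves x by epsilon in the sign direction of that gradient to increase the loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy example: a correctly classified point near the decision boundary
# is misclassified after a small perturbation.
w = np.array([1.0, -1.0])          # hypothetical model weights
b = 0.0
x = np.array([0.1, 0.0])           # w @ x + b = 0.1 > 0, so predicted class 1
y = 1.0                            # true label
x_adv = fgsm_perturb(x, w, b, y, epsilon=0.2)

print(sigmoid(w @ x + b) > 0.5)     # True: clean input is classified correctly
print(sigmoid(w @ x_adv + b) > 0.5) # False: the perturbed input flips the prediction
```

In a DNN the same step is applied to the loss gradient with respect to the image pixels, which is why a visually indistinguishable image can produce a confident misclassification.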


Similar articles

Porosity classification from thin sections using image analysis and neural networks including shallow and deep learning in Jahrum formation

The porosity within a reservoir rock is a basic parameter for reservoir characterization. This paper introduces two intelligent models for identification of porosity types using image analysis. To this end, thirteen geometrical parameters of the pores in each image were first extracted using image analysis techniques. The extracted features and their corresponding pore types ...

Full text

Adversarial Perturbations Against Deep Neural Networks for Malware Classification

Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs. These inputs are derived from regular inputs by minor yet carefully selected perturbations that deceive machine learning models into desired misclassifications. Existing work in this emerging field was largely specific to the domain of image classifica...

Full text

Adversarial Multi-Task Learning of Deep Neural Networks for Robust Speech Recognition

A method of learning deep neural networks (DNNs) for noise robust speech recognition is proposed. It is widely known that representations (activations) of well-trained DNNs are highly invariant to noise, especially in higher layers, and such invariance leads to the noise robustness of DNNs. However, little is known about how to enhance such invariance of representations, which is a key for impr...

Full text

Cystoscopy Image Classification Using Deep Convolutional Neural Networks

In the past three decades, the use of smart methods in medical diagnostic systems has attracted the attention of many researchers. However, no smart activity has been provided in the field of medical image processing for diagnosis of bladder cancer through cystoscopy images despite the high prevalence in the world. In this paper, two well-known convolutional neural networks (CNNs) ...

Full text

Robust Deep Reinforcement Learning with Adversarial Attacks

This paper proposes adversarial attacks for Reinforcement Learning (RL) and then improves the robustness of Deep Reinforcement Learning (DRL) algorithms to parameter uncertainties with the help of these attacks. We show that even a naively engineered attack successfully degrades the performance of a DRL algorithm. We further improve the attack using gradient information of an engineered loss func...

Full text


Journal

Journal title: The Annals of Applied Statistics

Year: 2023

ISSN: 1941-7330, 1932-6157

DOI: https://doi.org/10.1214/22-aoas1637