Vax-a-Net: Training-Time Defence Against Adversarial Patch Attacks
Authors
Abstract
We present Vax-a-Net; a technique for immunizing convolutional neural networks (CNNs) against adversarial patch attacks (APAs). APAs insert visually overt, local regions (patches) into an image to induce misclassification. We introduce a conditional Generative Adversarial Network (GAN) architecture that simultaneously learns to synthesise patches for use in APAs, whilst exploiting those attacks to adapt a pre-trained target CNN to reduce its susceptibility to them. This approach enables resilience to be conferred on pre-trained models, which would be impractical with conventional adversarial training due to the slow convergence of APA methods. We demonstrate the transferability of this protection to defend against existing APAs, and show its efficacy across several contemporary CNN architectures.
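The core operation of an adversarial patch attack, inserting a visually overt local region into an image, can be illustrated with a minimal sketch. This is an illustrative example only, not the paper's GAN-based patch synthesis; the `apply_patch` helper and the numpy image representation are assumptions for demonstration:

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overwrite a rectangular region of `image` with `patch`.

    Adversarial patch attacks (APAs) place a conspicuous local patch
    into the scene; here the patch simply replaces the pixels at the
    chosen location, leaving the rest of the image untouched.
    """
    h, w = patch.shape[:2]
    out = image.copy()
    out[top:top + h, left:left + w] = patch
    return out

# Toy example: an 8x8 grayscale image and a 3x3 all-ones patch.
image = np.zeros((8, 8))
patch = np.ones((3, 3))
patched = apply_patch(image, patch, top=2, left=4)
```

In an actual attack the patch contents would be optimised (in Vax-a-Net, synthesised by the conditional GAN) to maximise the target CNN's misclassification rate rather than being a fixed block of values.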
Similar resources
A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks where input examples are intentionally perturbed to fool DNNs. In this work, we revisit the DNN training process that includes adversarial examples into the training dataset so as to improve DNN’s resilience to adversarial attacks, namely, adversarial training. Our experiments show that d...
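The adversarial-training loop described above, crafting perturbed examples with the current model and folding them into the training set, can be sketched on a toy logistic-regression model. This is a hedged illustration, not the multi-strength method of the cited work; the FGSM-style perturbation, the synthetic data, and all hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary classification data.
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM-style perturbation: step each input along the sign of the
    gradient of the logistic loss w.r.t. the input, making the example
    harder for the current model to classify."""
    p = sigmoid(x @ w)
    grad_x = np.outer(p - y, w)  # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

# Adversarial training: each epoch, craft perturbed examples with the
# current weights and update on both clean and perturbed batches.
w = np.zeros(5)
lr, eps = 0.1, 0.1
for _ in range(200):
    X_adv = fgsm(X, y, w, eps)
    for data in (X, X_adv):
        p = sigmoid(data @ w)
        w -= lr * data.T @ (p - y) / len(y)

clean_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

The design point the excerpt raises, and which the multi-strength method addresses, is that the perturbation strength `eps` used during training strongly affects which attacks the trained model resists.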
Cooperative Defence Against DDoS Attacks
Distributed denial of service (DDoS) attacks on the Internet have become an immediate problem. As DDoS streams do not have common characteristics, currently available intrusion detection systems (IDS) cannot detect them accurately. As a result, defending against DDoS attacks using currently available IDS will dramatically affect legitimate traffic. In this paper, we propose a distributed approach to defe...
Ensemble Adversarial Training: Attacks and Defenses
Machine learning models are vulnerable to adversarial examples, inputs maliciously perturbed to mislead the model. These inputs transfer between models, thus enabling black-box attacks against deployed models. Adversarial training increases robustness to attacks by injecting adversarial examples into training data. Surprisingly, we find that although adversarially trained models exhibit strong ...
Divide, Denoise, and Defend against Adversarial Attacks
Deep neural networks, although shown to be a successful class of machine learning algorithms, are known to be extremely unstable to adversarial perturbations. Improving the robustness of neural networks against these attacks is important, especially for security-critical applications. To defend against such attacks, we propose dividing the input image into multiple patches, denoising each patch...
Defending Non-Bayesian Learning against Adversarial Attacks
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world, and try to collaboratively learn the true state. We focus on the impact of the adversarial agents on the performance of consensus-based non-Bayesian learning, where non-faulty agents combine local le...
Journal
Journal title: Lecture Notes in Computer Science
سال: 2021
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-030-69538-5_15