Enhancing transferability of adversarial examples via rotation‐invariant attacks



Related articles

Understanding and Enhancing the Transferability of Adversarial Examples

State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently, an adversary can leverage this to attack deployed systems without any ...
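
As a minimal, illustrative sketch of how such a perturbation can be formed (the one-step fast gradient sign method is assumed here; it is not the specific attack proposed in the paper above, and the epsilon value is only an example):

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    # Perturb x by epsilon in the direction that increases the
    # classification loss of `model` on the true labels `y`.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the result a valid image in [0, 1].
    return x_adv.clamp(0, 1).detach()

The same adversarial image can then be fed to a different, unseen classifier to measure how well the perturbation transfers.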


Blocking Transferability of Adversarial Examples in Black-Box Learning Systems

Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars. To further broaden the use of ML models, cloud-based services offered by Microsoft, Amazon, Google, and others have developed ML-as-a-service tools as black-box systems. However, ML classifiers are vulnerable to adversarial examples...


Improving Transferability of Adversarial Examples with Input Diversity

Though convolutional neural networks have achieved state-of-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding human-imperceptible perturbations to the original images. Adversarial examples can thus be used as a useful tool to evaluate and select the most robust models in safety-critical applications. However, most of ...
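
The "input diversity" in this title broadly refers to applying a random transformation to the input before each gradient step of an iterative attack, so that the perturbation does not overfit the source model. A sketch under that assumption (random resize-and-pad; the function name, sizes, and probability below are illustrative, not the paper's interface):

import torch
import torch.nn.functional as F

def diverse_input(x, low=224, high=256, p=0.5):
    # With probability p, randomly resize the batch and pad it back
    # to high x high before the attack computes its gradient.
    if torch.rand(1).item() > p:
        return x
    size = torch.randint(low, high, (1,)).item()
    resized = F.interpolate(x, size=(size, size), mode="bilinear",
                            align_corners=False)
    pad = high - size
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    # Pad order for F.pad on images: (left, right, top, bottom).
    return F.pad(resized, (left, pad - left, top, pad - top), value=0)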


Detecting Adversarial Examples via Neural Fingerprinting

Deep neural networks are vulnerable to adversarial examples, which dramatically alter model output using small input changes. We propose NeuralFingerprinting, a simple, yet effective method to detect adversarial examples by verifying whether model behavior is consistent with a set of secret fingerprints, inspired by the use of biometric and cryptographic signatures. The benefits of our method a...
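
One way such a consistency check could look, assuming each secret fingerprint is a pair of an input perturbation and the output change it is expected to induce (the data structure, distance measure, and threshold below are illustrative assumptions, not the authors' exact formulation):

import torch

def fingerprint_check(model, x, fingerprints, threshold=0.1):
    # Flag x as suspicious if the model's responses to the secret
    # fingerprint perturbations deviate from the expected responses.
    # `fingerprints` is a list of (delta_x, expected_delta_y) pairs.
    with torch.no_grad():
        base = torch.softmax(model(x), dim=-1)
        error = 0.0
        for delta_x, expected_delta_y in fingerprints:
            shifted = torch.softmax(model(x + delta_x), dim=-1)
            error += torch.norm((shifted - base) - expected_delta_y)
        error /= len(fingerprints)
    return error.item() > threshold  # True -> likely adversarial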


Delving into Transferable Adversarial Examples and Black-box Attacks

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small-scale datasets. In this work, we are the first to conduct an extensive study of the transferabilit...



Journal

Journal title: IET Computer Vision

Year: 2021

ISSN: 1751-9632, 1751-9640

DOI: 10.1049/cvi2.12054