Demystifying the Transferability of Adversarial Attacks in Computer Networks

Authors

Abstract

Convolutional Neural Network (CNN) models are among the most frequently used deep learning networks, and they are extensively adopted in both academia and industry. Recent studies demonstrated that adversarial attacks against such models can maintain their effectiveness even when used on models other than the one targeted by the attacker. This major property is known as transferability, and it makes CNNs ill-suited for security applications. In this paper, we provide the first comprehensive study which assesses the robustness of CNN-based models for computer networks against adversarial transferability. Furthermore, we investigate whether the transferability issue holds in computer networks. In our experiments, we consider five different attacks: the Iterative Fast Gradient Sign Method (I-FGSM), the Jacobian-based Saliency Map Attack (JSMA), the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack, the Projected Gradient Descent (PGD) attack, and the DeepFool attack. Then, we perform these attacks against three well-known datasets: the Network-based Detection of IoT botnet attacks (N-BaIoT) dataset, the Domain Generating Algorithms (DGA) dataset, and the RIPE Atlas dataset. Our experimental results show that transferability clearly happens in specific use cases of the I-FGSM, JSMA, and L-BFGS attacks; in such scenarios, the attack success rate on the target network ranges from 63.00% to 100%. Finally, we suggest two shielding strategies to hinder attack transferability: considering the Most Powerful Attacks (MPAs), and the mismatch LSTM architecture.
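The abstract does not give implementation details of the evaluated attacks. As a minimal illustration of one of them, the sketch below implements I-FGSM with NumPy against a toy logistic classifier; the model, weights, and step sizes are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def i_fgsm(x, y, grad_fn, eps=0.1, alpha=0.01, steps=10):
    """Iterative FGSM: repeated signed-gradient ascent steps of size alpha,
    with the total perturbation clipped to the L-inf ball of radius eps."""
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)               # gradient of the loss w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)  # step in the direction that raises the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

# Toy victim: logistic model p(y=1|x) = sigmoid(w . x). For cross-entropy loss,
# the gradient w.r.t. the input is (sigmoid(w . x) - y) * w.
w = np.array([1.0, -2.0, 0.5])  # hypothetical fixed weights

def grad_fn(x, y):
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    return (p - y) * w

x = np.array([0.2, 0.1, -0.3])
x_adv = i_fgsm(x, y=1, grad_fn=grad_fn, eps=0.05)
```

After the attack, the model's confidence in the true label y=1 drops while the perturbation stays within the eps-ball; in a transferability study such as this paper's, `x_adv` would then be replayed against a different (target) model.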


Similar Articles

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task....


Stability in Heterogeneous Multimedia Networks under Adversarial Attacks

A distinguishing feature of today's large-scale platforms for multimedia distribution and communication, such as the Internet, is their heterogeneity, predominantly manifested by the fact that a variety of communication protocols are simultaneously running over different hosts. A fundamental question that naturally arises for such common settings of heterogeneous multimedia systems concerns the...


Understanding and Enhancing the Transferability of Adversarial Examples

State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack deployed systems without any ...


Biologically inspired protection of deep networks from adversarial attacks

Inspired by biophysical principles underlying nonlinear dendritic computation in neural circuits, we develop a scheme to train deep neural networks to make them robust to adversarial attacks. Our scheme generates highly nonlinear, saturated neural networks that achieve state of the art performance on gradient based adversarial examples on MNIST, despite never being exposed to adversarially chos...


Blocking Transferability of Adversarial Examples in Black-Box Learning Systems

Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars. To further broaden the use of ML models, cloud-based services offered by Microsoft, Amazon, Google, and others have developed ML-as-a-service tools as black-box systems. However, ML classifiers are vulnerable to adversarial examples...



Journal

Journal title: IEEE Transactions on Network and Service Management

Year: 2022

ISSN: 2373-7379, 1932-4537

DOI: https://doi.org/10.1109/tnsm.2022.3164354