Adversarial Attack and Defense on Graph Data: A Survey

Authors

Abstract

Deep neural networks (DNNs) have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Though there are several works studying attack and defense strategies on domains such as images and natural language processing, it is still difficult to directly transfer the learned knowledge to graph data due to its representation structure. Given the importance of graph analysis, an increasing number of studies over the past few years have attempted to analyze the robustness of machine learning models on graph data. Nevertheless, existing research considering adversarial behaviors often focuses on specific types of attacks with certain assumptions. In addition, each work proposes its own mathematical formulation, which makes comparison among different methods difficult. Therefore, this review is intended to provide an overall landscape of more than 100 papers on adversarial attack and defense strategies for graph data, and to establish a unified formulation encompassing most graph adversarial learning models. Moreover, we compare different attacks and defenses on graph data along with their contributions and limitations, as well as summarize the evaluation metrics, datasets, and future trends. We hope this survey can help fill the gap in the literature and facilitate further development of this promising new field.
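The unified formulation the abstract refers to is, in most surveyed works, a budget-constrained loss maximization over graph perturbations: the attacker may flip at most a fixed number of edges to degrade a model's prediction on a target node. The sketch below is a rough, hypothetical illustration of that idea; the linearized surrogate model, the greedy strategy, and all names and numbers are our own assumptions, not the survey's method.

```python
# Hypothetical sketch: budget-constrained adversarial edge flips against a
# simple linearized 2-layer GCN-style surrogate. Illustrative only.
import numpy as np

def surrogate_scores(A, X, W):
    """Class scores of a linearized 2-layer GCN surrogate: A_norm^2 X W."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization
    return A_norm @ A_norm @ X @ W

def greedy_edge_flip_attack(A, X, W, target, y_true, budget):
    """Greedily flip the undirected edge that most lowers the target
    node's true-class margin, up to `budget` flips."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best, best_margin = None, None
        for i in range(n):
            for j in range(i + 1, n):
                A[i, j] = A[j, i] = 1 - A[i, j]   # tentatively flip (i, j)
                s = surrogate_scores(A, X, W)[target]
                margin = s[y_true] - max(s[k] for k in range(len(s)) if k != y_true)
                if best_margin is None or margin < best_margin:
                    best, best_margin = (i, j), margin
                A[i, j] = A[j, i] = 1 - A[i, j]   # undo the flip
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]           # commit the best flip
    return A
```

A defense, in the same formulation, would instead minimize the worst-case loss over this perturbation set; the survey's contribution is casting the many published variants into one such template.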


Similar Articles

Amplification and DRDoS Attack Defense - A Survey and New Perspectives

The severity of amplification attacks has grown in recent years. Since 2013 there have been at least two attacks which involved over 300Gbps of attack traffic. This paper offers an analysis of these and many other amplification attacks. We compare a wide selection of different proposals for detecting and preventing amplification attacks, as well as proposals for tracing the attackers. Since sou...

Adversarial Examples Generation and Defense Based on Generative Adversarial Network

We propose a novel generative adversarial network to generate and defend against adversarial examples for deep neural networks (DNNs). The adversarial stability of a network D is improved by training alternately with an additional network G. Our experiment is carried out on MNIST, and the adversarial examples are generated in an efficient way compared with widely used gradient-based methods. After tra...

Minimizing Attack Graph Data Structures

An attack graph is a data structure representing how an attacker can chain together multiple attacks to expand their influence within a network (often in an attempt to reach some set of goal states). Restricting attack graph size is vital for the execution of high degree polynomial analysis algorithms. However, we find that the most widely-cited and recently-used ‘condition/exploit’ attack grap...
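The "chaining" idea in this abstract can be pictured as a small directed graph in which nodes are network conditions and edges are exploits; reachability from the attacker's initial foothold enumerates everything they can compromise. The toy below is our own illustrative sketch, with made-up condition names, not the paper's condition/exploit representation.

```python
# Toy attack graph: each condition maps to the conditions an exploit can
# grant from it. All names are hypothetical.
from collections import deque

exploits = {
    "foothold_on_web_server": ["user_on_web_server"],
    "user_on_web_server": ["root_on_web_server", "user_on_db_server"],
    "user_on_db_server": ["root_on_db_server"],
}

def reachable_conditions(start):
    """BFS over the attack graph: every condition the attacker can reach."""
    seen, queue = {start}, deque([start])
    while queue:
        cond = queue.popleft()
        for nxt in exploits.get(cond, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Analyses like this traversal scale with graph size, which is why the paper's concern with restricting attack graph size matters for high-degree polynomial algorithms.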

Improving Attack Graph Visualization through Data Reduction and Attack Grouping

Various tools exist to analyze enterprise network systems and to produce attack graphs detailing how attackers might penetrate into the system. These attack graphs, however, are often complex and difficult to comprehend fully, and a human user may find it problematic to reach appropriate configuration decisions. This paper presents methodologies that can 1) automatically identify port...

A genetic attack on the defense complex.

An increasing number of "non-model" organisms are becoming accessible to genetic analysis in the field, as evolutionary biologists develop dense molecular genetic maps. Peichel et al.'s recent study(1) provides a microsatellite-based map for threespine stickleback fish (Gasterosteus aculeatus), and the first evidence for QTL affecting feeding morphology and defensive armor. This species has und...


Journal

Journal title: IEEE Transactions on Knowledge and Data Engineering

Year: 2022

ISSN: 1558-2191, 1041-4347, 2326-3865

DOI: https://doi.org/10.1109/tkde.2022.3201243