Model Agnostic Defence Against Backdoor Attacks in Machine Learning

Authors

Abstract

Machine learning (ML) has automated a multitude of our day-to-day decision-making domains, such as education, employment, and driving automation. The continued success of ML largely depends on our ability to trust the model we are using. Recently, a new class of attacks called backdoor attacks has been developed. These attacks undermine the user's trust in ML models. In this article, we present Neo, a model agnostic framework to detect and mitigate backdoor attacks in image classification ML models. For a given model, our approach analyzes the inputs it receives and determines if the model is backdoored. In addition to this detection feature, we also mitigate these attacks by determining the correct predictions for poisoned images. We implemented Neo and evaluated it against three state-of-the-art backdoored models. In our evaluation, we show that Neo can detect $\approx$ 88% of poisoned images on average and is as fast as 4.4 ms per input image. We also compare Neo with other defence methodologies proposed for backdoor attacks. Our evaluation reveals that, despite being a blackbox approach, Neo is more effective at thwarting backdoor attacks than existing techniques. Finally, we reconstruct the exact poisoned input so that users can effectively test their systems.
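The abstract describes the detection step as blackbox: the defence queries the model on perturbed copies of an input rather than inspecting its weights. Below is a minimal Python sketch of one plausible instantiation of that idea, occluding random patches with the image's dominant colour and flagging any patch whose removal flips the prediction. The `predict` callable, the uint8 HxWxC image layout, and the `patch_size`/`trials` defaults are illustrative assumptions, not Neo's exact algorithm.

```python
import numpy as np

def dominant_color(image):
    """Most frequent (coarsely quantised) pixel value in a uint8 HxWxC image,
    used to 'blank out' candidate trigger patches."""
    pixels = image.reshape(-1, image.shape[-1])
    quantised = (pixels // 32) * 32  # quantise so the mode is meaningful
    values, counts = np.unique(quantised, axis=0, return_counts=True)
    return values[np.argmax(counts)]

def find_trigger(image, predict, patch_size=8, trials=400, rng=None):
    """Blackbox search for a backdoor trigger.

    Occludes random patches with the image's dominant colour; if the model's
    prediction flips, the occluded region likely contains the trigger.
    Returns the (row, col) of a suspect patch, or None if nothing flips.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    original = predict(image)
    blank = dominant_color(image)
    for _ in range(trials):
        r = rng.integers(0, h - patch_size)
        c = rng.integers(0, w - patch_size)
        occluded = image.copy()
        occluded[r:r + patch_size, c:c + patch_size] = blank
        if predict(occluded) != original:
            return r, c  # prediction flipped: suspected trigger location
    return None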


Related Papers

Model-Agnostic Interpretability of Machine Learning

Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. I...


Cooperative Defence Against DDoS Attacks

Distributed denial of service (DDoS) attacks on the Internet have become an immediate problem. As DDoS streams do not have common characteristics, currently available intrusion detection systems (IDS) cannot detect them accurately. As a result, defending against DDoS attacks using currently available IDSs will dramatically affect legitimate traffic. In this paper, we propose a distributed approach to defe...


Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access many security-sensitive applications like payment apps. Such usages of deep learning systems provide the adversaries with sufficient incentives to perform attack...
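The attacks this snippet refers to typically begin with a data-poisoning step: stamp a small trigger on a fraction of the training images and relabel them to an attacker-chosen class, so the trained model learns to associate the trigger with that class. A minimal BadNets-style sketch of that step follows; the (N, H, W, C) float layout, the corner trigger, and the `poison_frac`/`target_label` parameters are assumptions for illustration.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.05,
                   trigger_size=4, rng=None):
    """Stamp a small trigger patch on a fraction of training images and
    relabel them to the attacker's target class (BadNets-style poisoning).

    images: float array of shape (N, H, W, C) in [0, 1]; labels: (N,) ints.
    Returns poisoned copies of both arrays.
    """
    rng = rng or np.random.default_rng()
    images, labels = images.copy(), labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(poison_frac * n), replace=False)
    # A white square in the bottom-right corner acts as the trigger.
    images[idx, -trigger_size:, -trigger_size:, :] = 1.0
    labels[idx] = target_label
    return images, labels
```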


Noise-Tolerant Machine Learning Attacks against Physically Unclonable Functions

Along with the evolution of Physically Unclonable Functions (PUFs), numerous successful attacks against PUFs have been proposed in the literature. Among these are machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, which have attracted great attention. Nevertheless, the issue of dealing with noise has so far not been addressed in this context. Thus, it is not ...


Evasion Attacks against Machine Learning at Test Time

In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to syste...
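To make the gradient-based idea concrete, the sketch below runs the descent on the simplest differentiable classifier, a linear model with discriminant g(x) = w·x + b; the paper itself evaluates richer models such as SVMs and neural networks, so this linear stand-in and its step-size defaults are assumptions for illustration only.

```python
import numpy as np

def evade_linear(x, w, b, steps=50, step_size=0.1):
    """Gradient-descent evasion against a linear classifier.

    Iteratively moves a sample x that is classified as malicious
    (g(x) = w.x + b >= 0) along the negative gradient of g, which for a
    linear model is simply -w, until it crosses the decision boundary.
    """
    x = x.astype(float).copy()
    for _ in range(steps):
        if np.dot(w, x) + b < 0:   # now classified as benign: done
            break
        x -= step_size * w         # gradient of g with respect to x is w
    return x
```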



Journal

Journal title: IEEE Transactions on Reliability

Year: 2022

ISSN: 1558-1721, 0018-9529

DOI: https://doi.org/10.1109/tr.2022.3159784