Self-Progressing Robust Training

Authors

Abstract

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy machine learning systems. Current robust training methods such as adversarial training explicitly use an "attack" (e.g., l_infty-norm bounded perturbation) to generate adversarial examples during model training for improving robustness. In this paper, we take a different perspective and propose a new framework, SPROUT, for self-progressing robust training. During training, SPROUT progressively adjusts the training label distribution via our proposed parametrized label smoothing technique, making training free of attack generation and more scalable. We also motivate SPROUT using a general formulation based on vicinity risk minimization, which includes many robust training methods as special cases. Compared with state-of-the-art adversarial training methods (PGD-l_infty and TRADES) under l_infty-norm bounded attacks and various invariance tests, SPROUT consistently attains superior performance and is more scalable to large neural networks. Our results shed new light on scalable, effective, and attack-independent robust training methods.
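The parametrized label smoothing step is the core idea and is easy to sketch. Below is a minimal PyTorch illustration, assuming a Dirichlet-parametrized smoothing distribution blended into the one-hot labels; the function name, the mixing weight alpha, and the update details are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def parametrized_label_smoothing(y_onehot, beta, alpha=0.1):
    """Blend one-hot labels with a sample from a trainable Dirichlet(beta)
    distribution (illustrative sketch, not the paper's exact rule)."""
    # Sample one smoothing distribution per example in the batch.
    dist = torch.distributions.Dirichlet(beta.clamp_min(1e-3))
    smooth = dist.rsample((y_onehot.size(0),))        # (batch, num_classes)
    return (1.0 - alpha) * y_onehot + alpha * smooth

# Usage: one cross-entropy step against the progressively adjusted labels.
num_classes = 10
beta = torch.nn.Parameter(torch.ones(num_classes))    # updated alongside model weights
logits = torch.randn(4, num_classes, requires_grad=True)
y = F.one_hot(torch.randint(0, num_classes, (4,)), num_classes).float()
targets = parametrized_label_smoothing(y, beta)
loss = torch.sum(-targets * F.log_softmax(logits, dim=-1), dim=-1).mean()
loss.backward()                                        # gradients flow to beta too
```

Because the Dirichlet sample is reparametrized, the smoothing parameters can be trained jointly with the network, which is what makes the label distribution "self-progressing" rather than fixed.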


Similar Articles

Using Game Theory Techniques in Self-Organizing Maps Training

The self-organizing map is the most widely used neural network for clustering and vector quantization. Since its introduction, the method has been applied in a wide range of problems across many domains, and numerous extensions and improvements have been proposed for it. A self-organizing map uses a number of cells to estimate the distribution function of the input patterns in a multi-dimensional space. The possibility of dead cells is considered a fundamental problem in the self-organizing map algorithm...
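As a rough illustration of that mechanism, the sketch below implements a basic self-organizing map training loop in NumPy: each grid cell holds a prototype vector, and the best-matching unit and its grid neighbors are pulled toward each input. The grid size, decay schedules, and all names are illustrative choices, not the cited paper's method.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: each cell's prototype is pulled toward
    inputs, more strongly near the best-matching unit (BMU)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates, used to measure neighborhood distance between cells.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighborhood
        for x in rng.permutation(data):
            # BMU: the cell whose prototype is closest to the input.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood on the grid around the BMU.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma**2))[..., None]
            weights += lr * h * (x - weights)
    return weights

# Usage: quantize 2-D points onto a 5x5 grid of prototype vectors.
prototypes = train_som(np.random.default_rng(1).random((200, 2)))
```

A cell whose prototype never wins (and sits outside every winner's neighborhood) stops being updated, which is exactly the "dead cell" problem the snippet above does nothing to prevent.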

Robust and Reliable Feature Extractor Training by Using Unsupervised Pre-training with Self-Organization Map

Recent research has shown that deep neural networks are very powerful for object recognition tasks. However, training a deep neural network with more than two hidden layers is still not easy because of the regularization problem. To overcome this problem, techniques like dropout and de-noising were developed. The philosophy behind de-noising is to extract more robust feature...
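The de-noising idea can be sketched as a small denoising autoencoder: corrupt the input, then train the network to reconstruct the clean version, so the learned features tolerate noise. This is a generic PyTorch sketch with assumed layer sizes and noise level, not the cited paper's architecture.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Tiny denoising autoencoder: encode a corrupted input, decode it,
    and penalize the distance to the clean input."""
    def __init__(self, in_dim=784, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_clean = torch.rand(32, 784)                        # stand-in for a data batch
x_noisy = x_clean + 0.3 * torch.randn_like(x_clean)  # corrupt the input
loss = nn.functional.mse_loss(model(x_noisy), x_clean)
opt.zero_grad()
loss.backward()
opt.step()
```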


Improved Training for Self-Training

It is well known that for some tasks, labeled data sets may be hard to gather. Self-training, or pseudo-labeling, tackles the problem of having insufficient training data. In the self-training scheme, the classifier is first trained on a limited labeled dataset and afterwards on an additional unlabeled dataset, using its own predictions as labels, provided those predictions ar...
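That scheme is straightforward to sketch: fit on the labeled set, then repeatedly absorb unlabeled points whose predicted label clears a confidence threshold. The snippet below is a minimal scikit-learn illustration; the threshold, round count, and helper name are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(clf, X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    """Basic self-training loop: fit on labeled data, then repeatedly
    promote confident predictions on unlabeled data to pseudo-labels."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Absorb confident points into the training set with their
        # predicted classes as labels; drop them from the pool.
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    return clf

# Usage with toy data: 20 labeled points, 200 unlabeled points.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2)); y_lab = (X_lab[:, 0] > 0).astype(int)
model = self_train(LogisticRegression(), X_lab, y_lab, rng.normal(size=(200, 2)))
```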


Cross-domain robust acoustic training

This paper describes our efforts toward cross-domain acoustic training for Large Vocabulary Continuous Speech Recognition (LVCSR) systems. We used weighted multi-style training, pooling insufficient telephony landline and cellular data with down-sampled wide-band clean data, to develop better hybrid acoustic models. We explored the effect of decision tree size on accuracy by approximately 10...
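Weighted multi-style pooling of this kind can be sketched as drawing a training set whose composition follows per-domain weights, oversampling the scarce domains. The snippet below is a generic NumPy illustration under assumed weights and sizes; the function name and data stand-ins are hypothetical, not the cited paper's recipe.

```python
import numpy as np

def pool_multistyle(datasets, weights, n_samples, seed=0):
    """Draw a pooled training set whose per-domain proportions follow
    the given weights (illustrative weighted multi-style pooling)."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    pooled = []
    for data, w in zip(datasets, weights):
        k = int(round(w * n_samples))
        idx = rng.choice(len(data), size=k, replace=True)  # oversample scarce domains
        pooled.append(data[idx])
    return rng.permutation(np.concatenate(pooled))

# Usage: pool scarce telephony data with abundant down-sampled clean data.
landline = np.random.randn(500, 40)   # stand-ins for acoustic feature sets
cellular = np.random.randn(300, 40)
clean    = np.random.randn(5000, 40)
train = pool_multistyle([landline, cellular, clean], [0.3, 0.2, 0.5], 2000)
```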



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i8.16874