Adversarial training is an effective way to defend deep neural networks (DNNs) against adversarial examples. However, there exist atypical samples that are rare and hard to learn, or that even hurt DNNs' generalization performance on test data. In this paper, we propose a novel algorithm that reweights samples based on self-supervised techniques to mitigate the negative effects of such samples. Specifically, a memory bank is built to record ...
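Since the method description above is truncated, the following is only a minimal sketch, assuming a PyTorch-style setup, of how per-sample reweighting with a memory bank could be wired into a standard PGD-based adversarial training loop. The helpers `pgd_attack` and `reweighted_adv_train_step`, the momentum update of the bank, and the softmax weighting rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of memory-bank-based sample reweighting for adversarial
# training. All hyperparameters and the weighting rule are assumptions.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft adversarial examples with standard PGD (L-infinity ball)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def reweighted_adv_train_step(model, optimizer, x, y, idx, memory_bank,
                              momentum=0.9):
    """One training step: craft adversarial examples, update the per-sample
    memory bank, and minimize a weighted adversarial loss that down-weights
    persistently hard (atypical) samples -- an assumed weighting rule."""
    x_adv = pgd_attack(model, x, y)
    per_sample_loss = F.cross_entropy(model(x_adv), y, reduction="none")

    with torch.no_grad():
        # Memory bank keeps an exponential moving average of each sample's loss.
        memory_bank[idx] = (momentum * memory_bank[idx]
                            + (1 - momentum) * per_sample_loss)
        # Samples with consistently high loss receive smaller weights.
        weights = torch.softmax(-memory_bank[idx], dim=0) * len(idx)

    loss = (weights * per_sample_loss).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch `idx` is the tensor of dataset indices for the current batch and `memory_bank` is a 1-D tensor of length equal to the training set, so each sample's running loss can be looked up and updated across epochs.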