Machine learning systems are vulnerable to adversarial attack. By applying a small, carefully designed perturbation to the input object, a classifier can be tricked into making an incorrect prediction. This phenomenon has drawn wide interest, and many attempts have been made to explain it; however, a complete understanding has yet to emerge. In this paper we adopt a slightly different perspective, still relevant to classi...
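As a concrete illustration of such a perturbation, the sketch below uses the fast gradient sign method (FGSM; Goodfellow et al., 2015), a standard one-step attack. This is an illustrative example rather than the method studied here, and the model, inputs, and `eps` value are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """One-step FGSM: shift each input coordinate by eps in the
    direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The sign of the loss gradient gives the perturbation direction;
    # clamping keeps the result a valid image in [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Usage (hypothetical model and data):
# x_adv = fgsm_perturb(net, images, labels, eps=8 / 255)
```

Even with a small `eps`, perturbations of this kind are often imperceptible to humans yet reliably change the classifier's prediction, which is the phenomenon discussed above.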