Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples (AEs), inputs that are maliciously designed to cause dramatic model output errors. In this work, we reveal that normal examples (NEs) are insensitive to the fluctuations occurring at the highly-curved regions of the decision boundary, while AEs, which are typically crafted over one single domain (mostly the spatial domain), exhibit exorbitant sensitivity to such fluctuations.
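To make the sensitivity gap concrete, the sketch below probes a classifier with small random input fluctuations and scores how much its output distribution moves; under the hypothesis above, AEs near highly-curved boundary regions should score much higher than NEs. This is a minimal illustration, not the detector proposed here: the function `output_sensitivity`, the Gaussian probe, and the parameters `noise_std`, `n_probes`, and the threshold `tau` are all assumptions introduced for exposition.

```python
import torch
import torch.nn.functional as F

def output_sensitivity(model, x, noise_std=0.01, n_probes=32):
    """Estimate how strongly small input fluctuations change the model's
    output distribution for a single input x of shape [C, H, W].

    Returns the mean KL divergence between the prediction on x and the
    predictions on randomly perturbed copies of x. Hypothesis: AEs sitting
    near highly-curved decision-boundary regions yield markedly larger
    values than NEs.
    """
    model.eval()
    with torch.no_grad():
        p_clean = F.softmax(model(x.unsqueeze(0)), dim=1)        # [1, K]
        noise = noise_std * torch.randn(n_probes, *x.shape)      # [n, C, H, W]
        logp_noisy = F.log_softmax(model(x.unsqueeze(0) + noise), dim=1)
        # KL(p_clean || p_noisy), averaged over the random probes
        kl = (p_clean * (p_clean.clamp_min(1e-12).log() - logp_noisy)).sum(dim=1)
        return kl.mean().item()

# Usage sketch: flag inputs whose sensitivity exceeds a threshold tau
# calibrated on held-out normal examples.
# is_adversarial = output_sensitivity(model, x) > tau
```

The design choice here is deliberately simple: averaging over several random probes smooths out directionally lucky perturbations, so the statistic reflects the local geometry of the boundary rather than any single noise draw.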