Despite the great success of deep neural networks (DNNs) in recent years, most still lack mathematical guarantees in terms of stability. For instance, DNNs are vulnerable to small or even imperceptible input perturbations, so-called adversarial examples, that can cause false predictions. This instability can have severe consequences in applications that affect the health and safety of humans, e.g., biomedical i...