Fast Learning with Noise in Deep Neural Nets
Authors
Abstract
Dropout has been proposed as a simple and effective trick [1] to combat overfitting in deep neural nets. The idea is to randomly mask out input and internal units during training. Despite its usefulness, the effect of injecting noise into the internal units of deep learning architectures remains poorly and only sparsely understood. In this paper, we study the effect of dropout on both input and hidden layers of deep neural nets via an explicit formulation of an equivalent marginalization regularizer. We show that training with the regularizer obtained by marginalizing the noise loses little performance compared to dropout, while requiring significantly less training time and showing noticeably less sensitivity to hyperparameter tuning, which are the main practical concerns with dropout.
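As background for the mechanism the abstract refers to, the following is a minimal NumPy sketch of standard (inverted) dropout, i.e., randomly masking out input and hidden units during training. It illustrates only the noise-injection baseline, not the marginalized regularizer proposed in the paper; the layer sizes and drop rates are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(x, p_drop=0.5, train=True):
        # Randomly zero units with probability p_drop during training and
        # rescale the survivors so the expected activation matches test time
        # ("inverted dropout"). At test time the input passes through unchanged.
        if not train or p_drop == 0.0:
            return x
        mask = rng.random(x.shape) >= p_drop      # keep each unit with prob 1 - p_drop
        return x * mask / (1.0 - p_drop)

    # Tiny forward pass: dropout on the input and on one hidden layer.
    x  = rng.normal(size=(4, 8))                  # batch of 4 examples, 8 features
    W1 = rng.normal(scale=0.1, size=(8, 16))
    W2 = rng.normal(scale=0.1, size=(16, 1))

    h = np.maximum(dropout(x, p_drop=0.2) @ W1, 0.0)   # ReLU hidden layer, 20% input dropout
    h = dropout(h, p_drop=0.5)                          # 50% dropout on hidden units
    y = h @ W2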
Similar Resources
Adaptive Filtering Strategy to Remove Noise from ECG Signals Using Wavelet Transform and Deep Learning
Introduction: Electrocardiogram (ECG) is a method for measuring the electrical activity of the heart, performed by placing electrodes on the surface of the body. Physicians use observation tools to detect and diagnose heart diseases; cardiologists do the same with ECG signals. In particular, heart diseases are recognized by examining the graphic representation of heart signals w...
Solving Fuzzy Equations Using Neural Nets with a New Learning Algorithm
Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper offers a novel method for finding a solution of a fuzzy equation that is assumed to have a real solution. To this end, we applied an architecture of fuzzy neural networks in which the corresponding connection weights are real numbers. The ...
On Fast Deep Nets for AGI Vision
Artificial General Intelligence will not be general without computer vision. Biologically inspired adaptive vision models have started to outperform traditional pre-programmed methods: our fast deep / recurrent neural networks recently collected a string of 1st ranks in many important visual pattern recognition benchmarks: IJCNN traffic sign competition, NORB, CIFAR10, MNIST, three ICDAR handwr...