Search results for: discrete time neural networks dnns
Number of results: 2,505,214
Speech separation can be formulated as learning to estimate a time-frequency mask from acoustic features extracted from noisy speech. For supervised speech separation, generalization to unseen noises and unseen speakers is a critical issue. Although deep neural networks (DNNs) have been successful in noise-independent speech separation, DNNs are limited in modeling a large number of speakers. T...
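The mask estimation described above can be illustrated with the ideal ratio mask (IRM), a common training target in T-F masking. The sketch below is a minimal assumption-laden example, not this paper's method: it assumes access to clean-speech and noise magnitude spectrograms (which a DNN would otherwise be trained to infer) and a simple additive mixture.

```python
import numpy as np

# Hedged sketch: compute an ideal ratio mask (IRM) per time-frequency bin and
# apply it to a toy additive mixture. Shapes and data are illustrative.
def ideal_ratio_mask(speech_mag, noise_mag, eps=1e-8):
    """IRM = |S| / (|S| + |N|), computed per T-F bin; values lie in [0, 1]."""
    return speech_mag / (speech_mag + noise_mag + eps)

rng = np.random.default_rng(0)
speech = rng.random((257, 100))   # 257 frequency bins x 100 frames (assumed)
noise = rng.random((257, 100))
mixture = speech + noise          # toy additive mixture of magnitudes

mask = ideal_ratio_mask(speech, noise)
estimate = mask * mixture         # masked mixture approximates the speech
```

In supervised separation, a DNN is trained to predict `mask` from features of `mixture`; here the mask is computed directly only to show the target.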
In this paper, we present a time-contrastive learning (TCL) based unsupervised bottleneck (BN) feature extraction method for speech signals with an application to speaker verification. The method exploits the temporal structure of a speech signal and more specifically, it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting the signal without using...
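The uniform-segmentation step described above can be sketched as follows. This is a hedged illustration of the labeling idea only: frame counts, segment length, and feature dimensions are assumptions, and the DNN that would be trained to discriminate these pseudo-classes is omitted.

```python
import numpy as np

# Hedged sketch: cut a frame sequence into equal-length temporal events and
# label each frame with its segment index; a DNN would then be trained to
# classify frames into these segment pseudo-classes.
def tcl_labels(num_frames, segment_len):
    """Assign each frame the index of its uniform temporal segment."""
    return np.arange(num_frames) // segment_len

rng = np.random.default_rng(2)
features = rng.standard_normal((300, 40))            # 300 frames x 40-dim (assumed)
labels = tcl_labels(len(features), segment_len=30)   # yields 10 pseudo-classes
```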
This chapter is devoted to the analysis of the complex dynamics exhibited by two-dimensional discrete-time delayed Hopfield-type neural networks. Since the pioneering work of (Hopfield, 1982; Tank & Hopfield, 1986), the dynamics of continuous-time Hopfield neural networks have been thoroughly analyzed. In implementing the continuous-time neural networks for practical problems such as image proce...
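A minimal simulation of the kind of system the chapter studies can be sketched as below. The update rule and all parameter values are illustrative assumptions, not the chapter's formulation: a two-neuron discrete-time Hopfield-type network with a one-step delay, x(t+1) = a·x(t) + W·tanh(x(t)) + W_d·tanh(x(t-1)).

```python
import numpy as np

# Hedged sketch of a two-neuron discrete-time delayed Hopfield-type network.
# W couples current activations; Wd couples the one-step-delayed ones.
def simulate(x0, x1, a, W, Wd, steps):
    """Iterate x(t+1) = a*x(t) + W@tanh(x(t)) + Wd@tanh(x(t-1))."""
    xs = [np.asarray(x0, float), np.asarray(x1, float)]
    for t in range(1, steps):
        xs.append(a * xs[t] + W @ np.tanh(xs[t]) + Wd @ np.tanh(xs[t - 1]))
    return np.array(xs)

a = 0.5                                    # leakage (assumed value)
W = np.array([[0.0, 1.2], [-1.2, 0.0]])    # instantaneous weights (assumed)
Wd = np.array([[0.3, 0.0], [0.0, 0.3]])    # delayed weights (assumed)
traj = simulate([0.1, -0.2], [0.1, -0.2], a, W, Wd, steps=50)
```

Varying the weights and delay terms in such a model is what produces the rich bifurcation behavior the chapter analyzes.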
Deep neural networks (DNNs) have achieved exceptional performance in many tasks, particularly supervised classification. However, these achievements rest on large datasets with well-separated classes. Typically, real-world applications involve wild datasets that include similar classes; thus, evaluating similarities between classes and understandin...
We present a systematic weight pruning framework of deep neural networks (DNNs) using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a constrained nonconvex optimization problem, and then adopt the ADMM framework for systematic weight pruning. We show that ADMM is highly suitable for weight pruning due to the computational effici...
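The ADMM decomposition described above can be sketched on a toy problem. This is a hedged illustration under strong assumptions, not the paper's implementation: the "loss" is a small least-squares objective rather than a DNN, and step sizes, iteration counts, and the penalty `rho` are all illustrative.

```python
import numpy as np

# Hedged sketch of ADMM-based weight pruning: minimize 0.5*||X w - y||^2
# subject to w having at most k nonzeros. ADMM alternates a gradient-based
# w-step, a projection z-step onto the k-sparse set, and a dual update.
def project_topk(v, k):
    """Euclidean projection onto the set of k-sparse vectors."""
    z = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest magnitudes
    z[idx] = v[idx]
    return z

def admm_prune(X, y, k, rho=1.0, iters=200, lr=0.005):
    n = X.shape[1]
    w = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        # w-step: gradient descent on loss + (rho/2)||w - z + u||^2
        for _ in range(20):
            grad = X.T @ (X @ w - y) + rho * (w - z + u)
            w -= lr * grad
        z = project_topk(w + u, k)   # z-step: projection (always k-sparse)
        u += w - z                   # dual update
    return z                         # return the pruned, k-sparse weights

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10); w_true[[2, 7]] = [3.0, -2.0]
y = X @ w_true                       # noise-free toy data (assumed)
w_pruned = admm_prune(X, y, k=2)
```

The projection step is what makes ADMM attractive here: the nonconvex cardinality constraint is isolated in a subproblem with a closed-form solution.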
This paper proposes a set of new error criteria and learning approaches, Adaptive Normalized Risk-Averting Training (ANRAT), to attack the non-convex optimization problem in training deep neural networks (DNNs). Theoretically, we demonstrate its effectiveness on global and local convexity lower-bounded by the standard Lp-norm error. By analyzing the gradient on the convexity index λ, we explain...
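The role of the convexity index λ can be illustrated with a normalized risk-averting error of the kind such criteria build on. This is a hedged sketch, not ANRAT itself: the formula below, NRAE_λ = (1/λ)·log(mean(exp(λ·e_i))) over per-sample errors e_i, is one standard risk-averting form, and the sample errors are made up.

```python
import numpy as np

# Hedged sketch of a normalized risk-averting error. As lam -> 0 it tends to
# the plain mean error; larger lam weights the worst-case samples more
# heavily, which is the convexifying effect the criterion exploits.
def nrae(errors, lam):
    e = np.asarray(errors, float)
    m = lam * e
    mmax = m.max()                    # log-sum-exp trick for stability
    return (mmax + np.log(np.mean(np.exp(m - mmax)))) / lam

errors = np.array([0.1, 0.2, 0.9])   # illustrative per-sample errors
small_lam = nrae(errors, 1e-4)       # close to the mean error, 0.4
large_lam = nrae(errors, 50.0)       # pulled toward the worst error, 0.9
```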