Search results for: discrete time neural networks dnns

Number of results: 2,505,214

2013
Christian Szegedy, Alexander Toshev, Dumitru Erhan

Deep Neural Networks (DNNs) have recently shown outstanding performance on image classification tasks [14]. In this paper we go one step further and address the problem of object detection using DNNs, that is, not only classifying but also precisely localizing objects of various classes. We present a simple yet powerful formulation of object detection as a regression problem to object boundi...
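
As a rough illustration of the formulation this abstract describes (not the paper's exact DeepMultiBox model), here is a minimal sketch in which a network regresses normalized box coordinates plus per-box confidences; the backbone, box encoding, and all names are assumptions:

```python
import torch
import torch.nn as nn

class BoxRegressor(nn.Module):
    """Toy detector: regress a fixed set of boxes directly from an image."""
    def __init__(self, feat_dim=128, num_boxes=4):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.num_boxes = num_boxes
        # 4 normalized coordinates per box, plus one confidence per box;
        # training would use a regression loss on matched ground-truth boxes.
        self.box_head = nn.Linear(feat_dim, num_boxes * 4)
        self.conf_head = nn.Linear(feat_dim, num_boxes)

    def forward(self, images):
        f = self.backbone(images)
        boxes = torch.sigmoid(self.box_head(f)).view(-1, self.num_boxes, 4)
        conf = torch.sigmoid(self.conf_head(f))
        return boxes, conf

model = BoxRegressor()
boxes, conf = model(torch.randn(2, 3, 64, 64))
print(boxes.shape, conf.shape)  # (2, 4, 4), (2, 4)
```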

2017
Ziming Zhang, Matthew Brand

By lifting the ReLU function into a higher-dimensional space, we develop a smooth multi-convex formulation for training feed-forward deep neural networks (DNNs). This allows us to develop a block coordinate descent (BCD) training algorithm consisting of a sequence of numerically well-behaved convex optimizations. Using ideas from proximal point methods in convex analysis, we prove that this BCD...
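
A toy illustration of the lifting idea: ReLU(x) is the nonnegative least-squares projection of x, so post-activation values can be treated as free variables U >= 0 with a penalty tying them to the pre-activations. The sketch below cycles convex block updates on a two-layer network; it is my own notation, not the paper's algorithm, and the projected U-step is only a simple proximal-style approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 100))    # inputs  (d_in x n)
Y = rng.normal(size=(3, 100))    # targets (d_out x n)
W1 = rng.normal(size=(8, 5)) * 0.1
W2 = rng.normal(size=(3, 8)) * 0.1
U = np.maximum(W1 @ X, 0.0)      # lifted activations, initialized at ReLU
lam, ridge = 1.0, 1e-6

def loss():
    return np.sum((Y - W2 @ U) ** 2) + lam * np.sum((U - W1 @ X) ** 2)

for _ in range(50):
    # W-blocks: each is an ordinary (ridge-regularized) least squares,
    # hence convex with the other blocks held fixed.
    W1 = U @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(X.shape[0]))
    W2 = Y @ U.T @ np.linalg.inv(U @ U.T + ridge * np.eye(U.shape[0]))
    # U-block: solve the unconstrained normal equations, then project
    # onto U >= 0 (an approximate prox step for the constrained problem).
    A = W2.T @ W2 + lam * np.eye(U.shape[0])
    U = np.maximum(np.linalg.solve(A, W2.T @ Y + lam * (W1 @ X)), 0.0)

print(f"final penalized loss: {loss():.3f}")
```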

2018
Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, Kiyoharu Aizawa

Deep neural networks (DNNs) trained on large-scale datasets have exhibited significant performance in image classification. Many large-scale datasets are collected from websites; however, they tend to contain inaccurate labels, termed noisy labels. Training on such noisily labeled datasets causes performance degradation because DNNs easily overfit to noisy labels. To overcome this probl...
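
The truncation cuts off before the authors' method, so the following is only a hedged sketch of one joint-optimization style remedy for noisy labels: alternate between updating the network on the current labels and blending the network's predictions back into the labels. The warm-up schedule and blending weight are illustrative assumptions, not the paper's recipe:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d, k = 512, 20, 5
X = torch.randn(n, d)
true_y = torch.randint(0, k, (n,))
noisy_y = torch.where(torch.rand(n) < 0.3,          # 30% synthetic label noise
                      torch.randint(0, k, (n,)), true_y)
soft_y = F.one_hot(noisy_y, k).float()              # labels we will refine

model = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, k))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(30):
    opt.zero_grad()
    logits = model(X)
    # Cross-entropy against the *current* soft labels.
    loss = -(soft_y * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss.backward()
    opt.step()
    if epoch >= 10:  # after warm-up, let predictions gradually correct labels
        with torch.no_grad():
            soft_y = 0.9 * soft_y + 0.1 * F.softmax(logits, dim=1)

acc = (model(X).argmax(1) == true_y).float().mean()
print(f"accuracy vs. clean labels: {acc:.2f}")
```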

Journal: CoRR, 2017
Chun Yang, Xu-Cheng Yin, Zejun Li, Jianwei Wu, Chunchao Guo, Hongfa Wang, Lei Xiao

Recognizing text in the wild is a challenging task because of complex backgrounds, varying illumination, and diverse distortions, even with deep neural networks (convolutional neural networks and recurrent neural networks). In the end-to-end training procedure for scene text recognition, the outputs of deep neural networks at different iterations are always demonstrated with diversity an...
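
Since the abstract breaks off before the method, here is only one plausible, snapshot-ensemble-style reading of "diversity across iterations": keep periodic training checkpoints and average their softmax outputs at test time. All names and the checkpoint schedule are assumptions:

```python
import copy
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(256, 10)
y = torch.randint(0, 3, (256,))
model = torch.nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
snapshots = []

for step in range(300):
    opt.zero_grad()
    F.cross_entropy(model(X), y).backward()
    opt.step()
    if step % 100 == 99:              # keep a snapshot every 100 steps
        snapshots.append(copy.deepcopy(model))

# Ensemble the diverse snapshots by averaging their softmax predictions.
with torch.no_grad():
    probs = torch.stack([F.softmax(m(X), dim=1) for m in snapshots]).mean(0)
print("ensembled accuracy:", (probs.argmax(1) == y).float().mean().item())
```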

2016
Naoki Hosaka, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda

This paper proposes a new training method of deep neural networks (DNNs) for statistical voice conversion. DNNs are now being used as conversion models that represent mapping from source features to target features in statistical voice conversion. However, there are two major problems to be solved in conventional DNN-based voice conversion: 1) the inconsistency between the training and synthesi...
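
A minimal sketch of the conventional conversion model the abstract refers to: a feed-forward DNN regressing target acoustic features from time-aligned source features with an MSE loss. The feature dimension and the (omitted) alignment step are assumptions for illustration:

```python
import torch

torch.manual_seed(0)
T, d = 1000, 40                 # frames, acoustic feature dimension
src = torch.randn(T, d)         # aligned source-speaker features
tgt = torch.randn(T, d)         # aligned target-speaker features

# Frame-wise mapping network from source to target features.
net = torch.nn.Sequential(
    torch.nn.Linear(d, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, d),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    torch.nn.functional.mse_loss(net(src), tgt).backward()
    opt.step()
```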

Journal: Transportation Research Part B: Methodological, 2021

Researchers often treat data-driven and theory-driven models as two disparate or even conflicting methods in travel behavior analysis. However, the two are highly complementary, because data-driven methods are more predictive but less interpretable and robust, while theory-driven methods are more interpretable and robust but less predictive. Using their complementary nature, this study designs a theory-based residual neural network (TB-ResNet) framework, which synergizes discrete choice models (DCMs) and dee...
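
A hedged sketch of the synergy described: a linear-in-parameters discrete choice utility plus a DNN residual, mixed into choice probabilities. The mixing weight delta, the architecture, and all names are illustrative assumptions, not the paper's exact specification:

```python
import torch

torch.manual_seed(0)
n, d, k = 500, 8, 3                   # individuals, attributes, alternatives
X = torch.randn(n, k, d)              # attributes of each alternative
beta = torch.zeros(d, requires_grad=True)   # DCM taste parameters
resnet = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, 1))
delta = 0.2                           # assumed weight on the DNN residual

def utilities(X):
    dcm = X @ beta                    # (n, k) theory-driven utility
    res = resnet(X).squeeze(-1)       # (n, k) data-driven residual
    return (1 - delta) * dcm + delta * res

y = torch.randint(0, k, (n,))         # observed choices
opt = torch.optim.Adam([beta] + list(resnet.parameters()), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    # Multinomial-logit likelihood over the combined utilities.
    torch.nn.functional.cross_entropy(utilities(X), y).backward()
    opt.step()
```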

2016
Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric P. Xing

Regulating deep neural networks (DNNs) with human-structured knowledge has been shown to be of great benefit for improved accuracy and interpretability. We develop a general framework that enables learning knowledge and its confidence jointly with the DNNs, so that the vast amount of fuzzy knowledge can be incorporated and automatically optimized with little manual effort. We apply the framework to...
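
A hedged sketch of the general mechanism of knowledge regularization: add a differentiable penalty for violating a soft rule (here, the toy rule "if feature 0 > 0, prefer class 1"). The joint learning of rule confidences is the paper's contribution and is not reproduced here; the fixed weight rule_strength and the toy rule are assumptions:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(400, 10)
y = torch.randint(0, 2, (400,))
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
rule_strength = 0.5   # fixed here; learned jointly in the paper's framework

for _ in range(100):
    opt.zero_grad()
    logits = model(X)
    p = F.softmax(logits, dim=1)
    applies = (X[:, 0] > 0).float()            # where the rule fires
    violation = (applies * (1 - p[:, 1])).mean()  # soft rule violation
    loss = F.cross_entropy(logits, y) + rule_strength * violation
    loss.backward()
    opt.step()
```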

2015
Vijayaditya Peddinti, Daniel Povey, Sanjeev Khudanpur

Recurrent neural network architectures have been shown to efficiently model long-term temporal dependencies between acoustic events. However, the training time of recurrent networks is higher than that of feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long-term temporal dependencies with training...
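
A sketch of the TDNN idea: each layer is a 1-D convolution over time, and growing dilation widens the temporal context while the computation stays feedforward (hence parallel across frames, unlike an RNN). Layer widths and dilations below are illustrative, not the paper's exact configuration:

```python
import torch

tdnn = torch.nn.Sequential(
    torch.nn.Conv1d(40, 128, kernel_size=3, dilation=1), torch.nn.ReLU(),
    torch.nn.Conv1d(128, 128, kernel_size=3, dilation=2), torch.nn.ReLU(),
    torch.nn.Conv1d(128, 128, kernel_size=3, dilation=4), torch.nn.ReLU(),
    torch.nn.Conv1d(128, 48, kernel_size=1),  # per-frame output states
)
feats = torch.randn(1, 40, 200)  # (batch, filterbank bins, frames)
out = tdnn(feats)                # each output frame sees +-7 input frames
print(out.shape)                 # (1, 48, 186)
```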

Journal: Pattern Recognition Letters, 2017
Aren Jansen, Gregory Sell, Vince Lyzinski

Several popular graph embedding techniques for representation learning and dimensionality reduction rely on performing computationally expensive eigendecompositions to derive a nonlinear transformation of the input data space. The resulting eigenvectors encode the embedding coordinates for the training samples only, preventing the transformation of novel data samples without recomputation. In t...
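
A hedged sketch of the out-of-sample strategy the abstract points toward: compute a spectral embedding once on the training data, then fit a DNN to regress the embedding coordinates from raw features, so novel samples can be embedded without recomputing the eigendecomposition. The RBF affinity, network, and dimensions are illustrative assumptions:

```python
import numpy as np
import torch

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))

# Spectral embedding: RBF affinity -> normalized Laplacian -> eigenvectors.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / D2.mean())
d = W.sum(1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(L)
Y = vecs[:, 1:4]                 # 3-dim embedding (skip the trivial vector)

# Fit a DNN to mimic the embedding map x -> y on the training samples.
net = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
Xt = torch.tensor(X, dtype=torch.float32)
Yt = torch.tensor(Y, dtype=torch.float32)
for _ in range(500):
    opt.zero_grad()
    torch.nn.functional.mse_loss(net(Xt), Yt).backward()
    opt.step()

x_new = torch.randn(1, 10)       # a novel sample is embedded directly
print(net(x_new))
```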

[Chart: number of search results per year]