Search results for: encoder neural networks

Number of results: 643221

2017
Junfeng Hou, Shiliang Zhang, Li-Rong Dai

Recently, end-to-end speech recognition has received much attention. One of the popular models for end-to-end speech recognition is the attention-based encoder-decoder model, which usually generates output sequences iteratively by attending to the whole representation of the input sequence. However, predicting outputs only after receiving the whole input sequence is not practical for online or low...
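A minimal sketch of the attention mechanism this abstract refers to, assuming generic dot-product attention with random, untrained vectors rather than the authors' exact model; it illustrates why each decoding step needs all encoder frames, which is what makes the approach hard to use online.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(encoder_states, decoder_state):
    """Weight every encoder frame by its similarity to the current decoder state."""
    scores = encoder_states @ decoder_state   # (T,) one score per input frame
    weights = softmax(scores)                 # attention distribution over all T frames
    context = weights @ encoder_states        # (D,) weighted sum = context vector
    return context, weights

T, D = 50, 8                                  # T input frames, D-dim representations
encoder_states = np.random.randn(T, D)        # the whole input sequence must be available
decoder_state = np.random.randn(D)
context, weights = attend(encoder_states, decoder_state)
print(context.shape, weights.sum())           # (8,) and weights sum to 1; every step touches all T frames
```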

2013
Dong-Hyun Lee

We propose a simple and efficient method of semi-supervised learning for deep neural networks. Basically, the proposed network is trained in a supervised fashion with labeled and unlabeled data simultaneously. For unlabeled data, Pseudo-Labels, obtained by simply picking the class with the maximum predicted probability, are used as if they were true labels. This is in effect equivalent to Entropy R...
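A minimal sketch of the pseudo-labeling loop described above, assuming a scikit-learn logistic regression stands in for the deep network and synthetic data replaces a real dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_informative=8, random_state=0)
X_lab, y_lab = X[:100], y[:100]        # small labeled set
X_unlab = X[100:]                      # unlabeled pool (true labels withheld)

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# Pseudo-labels: simply take the class with the maximum predicted probability.
pseudo = model.predict_proba(X_unlab).argmax(axis=1)

# Retrain on labeled + pseudo-labeled data as if the pseudo-labels were true labels.
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, pseudo]))
```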

2016
Ramzi Ben Ali, Ridha Ejbali, Mourad Zaied

Dental caries, also known as dental cavities, is the most widespread pathology in the world. Until very recently, almost all individuals had experienced this pathology at least once in their lives. Early detection of dental caries can help sharply decrease the dental disease rate. Thanks to the growing accessibility of medical imaging, clinical applications now have better...

2016
Oyebade K. Oyedotun, Kamil Dimililer

The ability of the human visual processing system to accommodate and retain a clear understanding or identification of patterns irrespective of their orientations is quite remarkable. Conversely, pattern invariance, a common problem in intelligent recognition systems, is not one that can be overemphasized; obviously, one's definition of an intelligent system broadens considering the large variabil...

Journal: :CoRR 2016
Iulian Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau

Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and ...
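A minimal sketch of the encode-then-generate pattern behind word-level dialogue response generation, assuming a toy bag-of-embeddings encoder and greedy decoder with random weights; the models discussed in the abstract are full recurrent Seq2Seq networks.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<eos>", "hi", "how", "are", "you", "fine", "thanks"]
E = rng.normal(size=(len(vocab), 16))       # word embeddings
W_out = rng.normal(size=(16, len(vocab)))   # decoder output projection

def encode(tokens):
    """Summarize the input utterance as the mean of its word embeddings."""
    ids = [vocab.index(t) for t in tokens]
    return E[ids].mean(axis=0)

def decode(state, max_len=5):
    """Greedily emit one word at a time until <eos>."""
    out = []
    for _ in range(max_len):
        word = vocab[int(np.argmax(state @ W_out))]
        if word == "<eos>":
            break
        out.append(word)
        state = state + E[vocab.index(word)]   # condition the next step on what was generated
    return out

print(decode(encode(["how", "are", "you"])))   # untrained, so the reply is arbitrary words
```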

Journal: :CoRR 2016
Bugra Tekin, Isinsu Katircioglu, Mathieu Salzmann, Vincent Lepetit, Pascal Fua

Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from image to 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. In this paper, we introduce a Deep Lea...
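A minimal sketch of what "directly regress from image to 3D pose" means, assuming a linear least-squares regressor over precomputed image features in place of a CNN and random data in place of real images: the target is just the flattened J x 3 joint coordinates, with nothing modeling dependencies between joints.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, J = 200, 64, 17                      # samples, feature dim, number of body joints
features = rng.normal(size=(N, D))         # image features (one row per image)
poses = rng.normal(size=(N, J * 3))        # ground-truth 3D joints, flattened to J*3 values

W, *_ = np.linalg.lstsq(features, poses, rcond=None)   # fit the regressor
pred = (features[:1] @ W).reshape(J, 3)                 # predicted 3D pose for one image
print(pred.shape)                                       # (17, 3)
```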

Journal: :Journal of Computational Physics 2023

We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders, and skip connections trained with multifidelity data. Besides requiring significantly fewer trainable parameters than equivalent fully connected networks, encoder, decoder, encoder-decoder, or decoder-encoder architectures can learn mappings between inputs and outputs of arbitrary dimensionality. We demo...
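A minimal structural sketch of an encoder-decoder with a skip connection, assuming an untrained average-pooling "encoder", nearest-neighbour upsampling "decoder", and a simple additive skip; the paper studies trained convolutional versions of this layout.

```python
import numpy as np

def encoder(x):
    """Downsample the signal by a factor of 2 (coarse representation)."""
    return x.reshape(-1, 2).mean(axis=1)

def decoder(z):
    """Upsample back to the original resolution."""
    return np.repeat(z, 2)

def encoder_decoder_with_skip(x):
    z = encoder(x)                 # compress
    y = decoder(z)                 # reconstruct
    return y + x                   # skip connection: reuse fine-scale input information

x = np.sin(np.linspace(0, 2 * np.pi, 64))
print(encoder_decoder_with_skip(x).shape)   # (64,)
```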

Journal: :CoRR 2018
Dushyanta Dhyani

We describe our system for the SemEval-2018 Shared Task on Semantic Relation Extraction and Classification in Scientific Papers, where we focus on the Classification task. Our simple piecewise convolutional neural encoder performs decently in an end-to-end manner. A simple inter-task data augmentation significantly boosts the performance of the model. Our best-performing systems stood 8th out of 20 te...
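A minimal sketch of the piecewise convolutional idea, assuming random convolution filters, a toy embedded sentence, and generic piecewise max pooling rather than the submitted system's exact layers: the convolved sentence is pooled separately over the segments before, between, and after the two entity mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens, dim, filters = 20, 32, 8
sent = rng.normal(size=(tokens, dim))            # embedded sentence
W = rng.normal(size=(filters, 3, dim))           # 8 filters over windows of 3 tokens

conv = np.stack([                                # (tokens - 2, filters) feature map
    np.einsum("fwd,wd->f", W, sent[i:i + 3]) for i in range(tokens - 2)])

e1, e2 = 4, 12                                   # positions of the two entity mentions
pieces = [conv[:e1], conv[e1:e2], conv[e2:]]     # before / between / after
encoding = np.concatenate([p.max(axis=0) for p in pieces])   # piecewise max pooling
print(encoding.shape)                            # (24,) = 3 pieces x 8 filters
```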

2015
Sridhar S

Work has long been in progress to design efficient algorithms for image compression based on various conventional and soft computing methodologies. This paper aims at exploring the application of multi-layer perceptron (MLP) feed-forward neural networks (FFNN), wavelet transforms, and their combined architectures for image compression. Initially, two neural network architectures for imag...
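A minimal sketch of feed-forward neural image compression, assuming a single untrained bottleneck MLP over 8x8 pixel blocks with random weights; the paper trains such networks and also combines them with wavelet transforms. The narrow hidden layer is the compressed code.

```python
import numpy as np

rng = np.random.default_rng(0)
block = rng.random((8, 8)).reshape(-1)     # one 8x8 image block as a 64-vector
W_enc = rng.normal(size=(64, 16)) * 0.1    # 64 -> 16: 4x compression
W_dec = rng.normal(size=(16, 64)) * 0.1    # 16 -> 64: reconstruction

code = np.tanh(block @ W_enc)              # compressed representation (what gets stored or sent)
recon = code @ W_dec                       # reconstructed block
print(code.size, recon.size)               # 16 64
```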

2017
Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, Khalil Sima'an

We present a simple and effective approach to incorporating syntactic structure into neural attention-based encoder-decoder models for machine translation. We rely on graph-convolutional networks (GCNs), a recent class of neural networks developed for modeling graph-structured data. Our GCNs use predicted syntactic dependency trees of source sentences to produce representations of words (i.e. hi...
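A minimal sketch of a single GCN layer over a dependency tree, assuming an unlabeled, symmetric adjacency with self-loops and random weights; the paper's GCNs use labeled, directed dependency edges and edge-wise gating.

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["the", "cat", "sat"]
H = rng.normal(size=(3, 16))               # initial word representations
edges = [(0, 1), (1, 2)]                   # dependency arcs, treated as undirected here

A = np.eye(3)                              # self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A = A / A.sum(axis=1, keepdims=True)       # row-normalize neighbourhoods

W = rng.normal(size=(16, 16)) * 0.1
H_syntax = np.maximum(A @ H @ W, 0.0)      # each word mixes in its syntactic neighbours
print(H_syntax.shape)                      # (3, 16)
```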

Chart: number of search results per year
