Search results for: encoder neural networks
Number of results: 643,221
All existing image steganography methods use manually crafted features to hide binary payloads in cover images. This leads to small payload capacity and image distortion. Here we propose a convolutional neural network based encoder-decoder architecture for embedding images as payloads. To this end, we make the following three major contributions: (i) we propose a deep learning based generic...
In this study, we incorporate skip connections into a deep recurrent neural network for modeling basic dance steps from audio input. Our model consists of two blocks: one encodes the audio input sequence, and the other generates the motion. The encoder uses a configuration called the convolutional, long short-term memory deep neural network (CLDNN), which handles the power features of the audio. Furthermor...
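As a rough illustration of the skip-connection idea mentioned above (not the paper's actual model), a skip connection adds a layer's input directly to its output so information and gradients can bypass the transformation. The toy recurrent cell, sizes, and tanh nonlinearity below are all assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8  # toy hidden size (assumption)

# Illustrative recurrent-cell parameters.
W_x = rng.normal(scale=0.1, size=(hidden, hidden))
W_h = rng.normal(scale=0.1, size=(hidden, hidden))

def rnn_step(x, h):
    """One vanilla RNN step: mix input and previous hidden state."""
    return np.tanh(x @ W_x + h @ W_h)

def rnn_step_with_skip(x, h):
    """Same step with a skip (residual) connection: the input is added
    to the cell output, so the layer only has to learn a correction."""
    return rnn_step(x, h) + x

x = rng.normal(size=hidden)
h = np.zeros(hidden)
out = rnn_step_with_skip(x, h)
print(out.shape)  # (8,)
```

In a deep stack of such layers, these additive shortcuts are what make very deep recurrent models trainable in practice.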
In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems. Our NCRF-AE consists of two parts: an encoder, which is a CRF model enhanced by deep neural networks, and a decoder, which is a generative model that tries to reconstruct the input. Our model has a unified structure with different loss functions for l...
Existing neural conversational models process natural language primarily on a lexico-syntactic level, thereby ignoring one of the most crucial components of human-to-human dialogue: its affective content. We take a step in this direction by proposing three novel ways to incorporate affective/emotional aspects into long short-term memory (LSTM) encoder-decoder neural conversation models: (1) aff...
Although computers are now much faster than before, well-trained humans are still the best pattern recognizers. In this paper we propose a fingerprint recognition method based on humanoid algorithms. Because fingerprint patterns are fuzzy in nature and ridge endings are easily altered by scars, we use only ridge bifurcations as fingerprint minutiae and also design a "...
For Artificial Neural Networks (ANNs) to be effective modelling tools, they must draw upon biological characteristics. One characteristic often overlooked in the design of ANNs is the replication, or redundancy, of processes within the brain. This paper examines the effects of redundancy on the performance of ANNs trained on either a pattern classification task (e.g. parity, encoder) or a funct...
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performan...
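The attention mechanism referenced in this abstract can be sketched in its additive (Bahdanau-style) form: each encoder state is scored against the current decoder state, the scores are normalized with a softmax, and the encoder states are averaged with those weights. All matrix names and sizes below are assumptions for illustration, not the paper's code:

```python
import numpy as np

def additive_attention(decoder_state, encoder_states, W_q, W_k, v):
    """Additive attention: score each encoder state against the decoder
    state, softmax the scores, and return the weighted context vector."""
    scores = np.tanh(encoder_states @ W_k + decoder_state @ W_q) @ v
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states        # convex combination
    return context, weights

rng = np.random.default_rng(1)
T, d = 5, 4                       # source length, hidden size (assumed)
enc = rng.normal(size=(T, d))     # encoder hidden states
dec = rng.normal(size=d)          # current decoder hidden state
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
v = rng.normal(size=d)

context, weights = additive_attention(dec, enc, W_q, W_k, v)
print(context.shape, round(weights.sum(), 6))  # (4,) 1.0
```

The context vector is then typically concatenated with the decoder state to predict the next target word; the vocabulary bottleneck the abstract mentions arises in that final prediction step, where the output softmax spans the whole target vocabulary.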
We present Char2Wav, an end-to-end model for speech synthesis. Char2Wav has two components: a reader and a neural vocoder. The reader is an encoder-decoder model with attention. The encoder is a bidirectional recurrent neural network that accepts text or phonemes as inputs, while the decoder is a recurrent neural network (RNN) with attention that produces vocoder acoustic features. Neural vocode...
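A bidirectional encoder like the one this abstract describes runs one RNN left-to-right and another right-to-left over the input, then concatenates the two hidden states at each position. The following is a minimal NumPy sketch under assumed toy sizes, not the Char2Wav implementation:

```python
import numpy as np

def rnn(inputs, W_x, W_h):
    """Run a vanilla RNN over a sequence; return all hidden states."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(x @ W_x + h @ W_h)
        states.append(h)
    return np.stack(states)

def bidirectional_encode(inputs, fwd, bwd):
    """Run one RNN forward and one backward over the sequence, then
    concatenate the states position-wise, so each output carries
    context from both directions."""
    forward = rnn(inputs, *fwd)
    backward = rnn(inputs[::-1], *bwd)[::-1]  # reverse back to input order
    return np.concatenate([forward, backward], axis=1)

rng = np.random.default_rng(2)
T, d_in, d_h = 6, 3, 5            # sequence length and sizes (assumed)
seq = rng.normal(size=(T, d_in))
fwd = (rng.normal(scale=0.1, size=(d_in, d_h)),
       rng.normal(scale=0.1, size=(d_h, d_h)))
bwd = (rng.normal(scale=0.1, size=(d_in, d_h)),
       rng.normal(scale=0.1, size=(d_h, d_h)))

enc = bidirectional_encode(seq, fwd, bwd)
print(enc.shape)  # (6, 10)
```

Each of the T output vectors has dimension 2*d_h, which is what an attention-equipped decoder would then attend over.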
The extended Backpropagation Through Time (eBPTT) learning algorithm for Segmented-Memory Recurrent Neural Networks (SMRNNs) still lacks the ability to reliably learn long-term dependencies. The alternative learning algorithm, extended Real-Time Recurrent Learning (eRTRL), does not suffer from this problem but is computationally very intensive, making it impractical for the training of large netwo...