Search results for: encoder neural networks
Number of results: 643221
Although traditionally used in the machine translation field, the encoder-decoder framework has been recently applied for the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has proven to outperform the previous state of the art, obtaining more accurate video descriptions. In this work we propose pushing further this mod...
In this paper, we propose a novel neural network model called RNN Encoder–Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability...
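Below is a minimal sketch of the RNN Encoder–Decoder idea described in that abstract, assuming GRU cells, a shared vocabulary, and teacher forcing during training; class and variable names here are illustrative, not the authors' code.

```python
# Minimal RNN encoder-decoder (seq2seq) sketch, assuming GRU cells and teacher forcing.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        # Encode the source sequence into a fixed-length vector (the final hidden state).
        _, context = self.encoder(self.embed(src))
        # Decode the target sequence conditioned on that context (teacher forcing).
        dec_out, _ = self.decoder(self.embed(tgt), context)
        return self.out(dec_out)  # logits over the vocabulary at each step

# Training maximizes the conditional log-likelihood of the target sequence:
model = EncoderDecoder(vocab_size=10000)
src = torch.randint(0, 10000, (4, 12))   # batch of source symbol sequences
tgt = torch.randint(0, 10000, (4, 10))   # batch of target symbol sequences
logits = model(src, tgt[:, :-1])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000), tgt[:, 1:].reshape(-1))
```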
In this paper we discuss a method for semi-supervised training of CNNs. By using auto-encoders to extract features from unlabeled images, we can train CNNs to accurately classify images with only a small set of labeled images. We show our method’s results on a shallow CNN using the CIFAR-10 dataset, and some preliminary results on a VGG-16 network using the STL-10 dataset.
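A hedged sketch of that semi-supervised recipe: a small convolutional auto-encoder is pretrained on unlabeled 32×32 images (CIFAR-10 sized), and its encoder is then reused as the feature extractor for a classifier trained on the small labeled set. Layer sizes and function names are assumptions, not the paper's architecture.

```python
# Semi-supervised sketch: unsupervised auto-encoder pretraining, then supervised fine-tuning.
import torch
import torch.nn as nn

encoder = nn.Sequential(              # feature extractor, trained without labels
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(              # reconstruction head, discarded after pretraining
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
)
classifier = nn.Sequential(encoder, nn.Flatten(), nn.Linear(64 * 8 * 8, 10))

ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def pretrain_step(unlabeled_images):
    # Unsupervised phase: reconstruct the input, no labels required.
    recon = decoder(encoder(unlabeled_images))
    loss = nn.functional.mse_loss(recon, unlabeled_images)
    ae_opt.zero_grad()
    loss.backward()
    ae_opt.step()
    return loss

def finetune_step(images, labels):
    # Supervised phase: train the classifier on the small labeled set,
    # reusing the pretrained encoder as its feature extractor.
    loss = nn.functional.cross_entropy(classifier(images), labels)
    clf_opt.zero_grad()
    loss.backward()
    clf_opt.step()
    return loss
```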
Recently, many promising results have been shown on face recognition related problems. However, age-invariant face recognition and retrieval remain a challenge. Inspired by the observation that age variation is a nonlinear but smooth transform and the ability of auto-encoder networks to learn latent representations from inputs, in this paper, we propose a new neural network model called coupled a...
We propose an approach for forecasting video of complex human activity involving multiple people. Direct pixel-level prediction is too simple to handle the appearance variability in complex activities. Hence, we develop novel intermediate representations. An architecture combining a hierarchical temporal model for predicting human poses and encoder-decoder convolutional neural networks for rende...
Images typically contain smooth regions, which are easily compressed by linear transforms, and high activity regions (edges, textures), which are harder to compress. To compress the first kind, we use a “zero” encoder that has infinite context, very low capacity, and which adapts very quickly to the content. For the second, we use an “interpolation” encoder, based on neural networks, which has ...
We develop a new framework for studying the biases that recurrent neural networks bring to language processing tasks. A semantic concept represented by a point in Euclidean space is translated into a symbol sequence by an encoder network. This sequence is then fed to a decoder network which attempts to translate it back to the original concept. We show how a pair of recurrent networks acting as...
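One possible sketch of that concept-to-sequence auto-encoding setup, assuming the encoder emits discrete symbols through a Gumbel-softmax relaxation so the pair can be trained end-to-end; the relaxation, dimensions, and names are assumptions rather than the paper's method.

```python
# A point in Euclidean space is encoded as a symbol sequence, then decoded back.
import torch
import torch.nn as nn
import torch.nn.functional as F

CONCEPT_DIM, VOCAB, SEQ_LEN, HIDDEN = 16, 32, 6, 64

class ConceptEncoder(nn.Module):
    # Translates a concept vector into a sequence of discrete symbols.
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(CONCEPT_DIM, HIDDEN)
        self.rnn = nn.GRU(VOCAB, HIDDEN, batch_first=True)
        self.to_symbol = nn.Linear(HIDDEN, VOCAB)

    def forward(self, concept):
        h = torch.tanh(self.proj(concept)).unsqueeze(0)   # concept sets the initial hidden state
        inp = torch.zeros(concept.size(0), 1, VOCAB)      # start token (all zeros)
        symbols = []
        for _ in range(SEQ_LEN):
            out, h = self.rnn(inp, h)
            logits = self.to_symbol(out)
            sym = F.gumbel_softmax(logits, hard=True)     # (approximately) one-hot symbol
            symbols.append(sym)
            inp = sym
        return torch.cat(symbols, dim=1)                  # (batch, SEQ_LEN, VOCAB)

class ConceptDecoder(nn.Module):
    # Reads the symbol sequence and tries to reconstruct the original concept.
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(VOCAB, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, CONCEPT_DIM)

    def forward(self, symbols):
        _, h = self.rnn(symbols)
        return self.out(h.squeeze(0))

enc, dec = ConceptEncoder(), ConceptDecoder()
concept = torch.randn(8, CONCEPT_DIM)
loss = F.mse_loss(dec(enc(concept)), concept)             # reconstruction objective
```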
Deep neural networks have advanced the state-of-the-art in automatic speech recognition, when combined with hidden Markov models (HMMs). Recently there has been interest in using systems based on recurrent neural networks (RNNs) to perform sequence modelling directly, without the requirement of an HMM superstructure. In this paper, we study the RNN encoder-decoder approach for large vocabulary ...
Sequential Recommendation through Graph Neural Networks and Transformer Encoder with Degree Encoding
Predicting users’ next behavior by learning their preferences from historical behaviors is known as sequential recommendation. In this task, learning sequence representations that model pairwise relationships between items and capture their long-range dependencies is crucial. In this paper, we propose a novel deep neural network named graph convolutional transformer recommender (GCNTRec). GCNTRec is capable ...
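A hedged sketch of one way degree encoding can enter such a model: node-degree embeddings are added to item embeddings before a standard Transformer encoder produces the sequence representation used to score the next item. The graph convolutional component of GCNTRec is omitted here, and all names and sizes are assumptions.

```python
# Degree encoding sketch: item embedding + degree embedding -> Transformer encoder.
import torch
import torch.nn as nn

NUM_ITEMS, MAX_DEGREE, DIM = 5000, 64, 128

item_emb = nn.Embedding(NUM_ITEMS, DIM)
degree_emb = nn.Embedding(MAX_DEGREE, DIM)     # encodes each item's degree in the interaction graph
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)

def encode_sequence(item_ids, item_degrees):
    # item_ids, item_degrees: (batch, seq_len) integer tensors
    x = item_emb(item_ids) + degree_emb(item_degrees.clamp(max=MAX_DEGREE - 1))
    h = encoder(x)                              # contextualized item representations
    return h[:, -1]                             # last position summarizes the sequence

items = torch.randint(0, NUM_ITEMS, (4, 20))
degrees = torch.randint(0, 200, (4, 20))
user_state = encode_sequence(items, degrees)    # (4, DIM)
scores = user_state @ item_emb.weight.T         # ranking scores over all candidate items
```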