Search results for: captions

Number of results: 1268

Journal: Journal for the Psychology of Language Learning, 2021

This study investigated the extent to which individual differences in working memory (WM) mediate the effects of captions with or without textual enhancement on attentional allocation and L2 grammatical development, and whether development is influenced by WM in the absence of captions. We employed a pretest-posttest-delayed posttest design, with 72 Korean learners of English randomly assigned to three groups. The groups d...

Journal: Cognitive Systems Research, 2010
Eric G. Taylor, John E. Hummel

The captions for Fig. 3C and D were reversed in the original version. This has now been updated (see below).

Journal: Proceedings of the ... International Florida Artificial Intelligence Research Society Conference, 2021

In this paper, we build a multi-style generative model for stylish image captioning which uses multi-modality features: ResNeXt features and text features generated by DenseCap. We propose the 3M model, a Multi-UPDOWN caption model that encodes these features and decodes them into captions. We demonstrate the effectiveness of our model in generating human-like captions by examining its performance on two datasets, the PERSONALITY-CAPTIONS dataset and Flic...

Journal: Kinema: A Journal for Film and Audiovisual Media, 1994

2015
Anadi Chaman

This project aims at generating captions for images using neural language models. There has been a substantial increase in the number of proposed models for the image captioning task since neural language models and convolutional neural networks (CNNs) became popular. Our project is based on one such work, which uses a variant of a recurrent neural network coupled with a CNN. We intend to enhance t...
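For context, a minimal sketch of the CNN-encoder plus RNN-decoder setup this abstract describes is given below. The backbone, layer sizes, and vocabulary size are illustrative assumptions, not the project's actual code.

# A hedged sketch: ResNet-18 image encoder feeding an LSTM caption decoder.
# All hyperparameters are placeholders chosen for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: a ResNet-18 with its classifier head removed (no pretrained weights here).
        resnet = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])
        self.img_proj = nn.Linear(resnet.fc.in_features, embed_dim)
        # RNN decoder conditioned on the image feature as its first input step.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)       # (B, 512) pooled image feature
        feats = self.img_proj(feats).unsqueeze(1)     # (B, 1, E)
        words = self.embed(captions)                  # (B, T, E)
        inputs = torch.cat([feats, words], dim=1)     # image step first, then words
        hidden, _ = self.rnn(inputs)
        return self.out(hidden)                       # logits over the vocabulary

model = CaptionModel(vocab_size=10000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # (2, 13, 10000): one prediction per step, including the image step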

Journal: CoRR, 2017
Satoshi Tsutsui, David J. Crandall

Recent work in computer vision has yielded impressive results in automatically describing images with natural language. Most of these systems generate captions in a single language, requiring multiple language-specific models to build a multilingual captioning system. We propose a very simple technique to build a single unified model across languages, using artificial tokens to control the lang...
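The "artificial token" idea can be illustrated with a short sketch: each target caption is prefixed with a language-control token so one decoder can be trained to emit several languages. The token names and the toy examples below are assumptions for illustration, not the paper's exact preprocessing.

# Hedged sketch of language-control tokens for a single multilingual captioner.
LANG_TOKENS = {"en": "<2en>", "ja": "<2ja>"}  # hypothetical token names

def prepare_example(caption: str, lang: str) -> list[str]:
    """Prepend the language-control token to a whitespace-tokenized caption."""
    return [LANG_TOKENS[lang]] + caption.lower().split()

print(prepare_example("a dog runs on the beach", "en"))
# ['<2en>', 'a', 'dog', 'runs', 'on', 'the', 'beach']
print(prepare_example("犬が浜辺を走る", "ja"))
# ['<2ja>', '犬が浜辺を走る']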

2009
Guido Zuccon, Teerapong Leelanupab, Anuj Goyal, Martin Halvey, P. Punitha, Joemon M. Jose

In this paper we describe the approaches adopted to generate the five runs submitted to ImageClefPhoto 2009 by the University of Glasgow. The aim of our methods is to exploit document diversity in the rankings. All our runs used text statistics extracted from the captions associated with each image in the collection, except for one run, which combines the textual statistics with visual features extrac...
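One common way to promote diversity from caption text alone is a maximal-marginal-relevance style re-ranking. The sketch below uses Jaccard similarity over caption tokens and is an assumption about the general approach, not a reconstruction of the Glasgow runs.

# Hedged sketch: MMR-style re-ranking that trades relevance against similarity
# to already-selected captions. Scores and the lambda weight are illustrative.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def rerank(ranked, lam=0.7):
    """ranked: list of (doc_id, relevance_score, caption_tokens)."""
    selected, remaining = [], list(ranked)
    while remaining:
        def mmr(item):
            _, rel, toks = item
            max_sim = max((jaccard(toks, s[2]) for s in selected), default=0.0)
            return lam * rel - (1 - lam) * max_sim
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return [doc_id for doc_id, _, _ in selected]

docs = [
    ("img1", 0.90, {"beach", "sunset", "sea"}),
    ("img2", 0.85, {"beach", "sunset"}),
    ("img3", 0.60, {"mountain", "snow"}),
]
print(rerank(docs))  # ['img1', 'img3', 'img2']: the dissimilar caption overtakes the near-duplicate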

2009
Andreas Lennartz, Marc Pomplun

232 words; Main text (including appendix): 2201 words; Figure captions: 307 words; Figures: 4; Tables: 0; References: 16

Journal: Procedia Computer Science, 2022

This work aims at generating captions for soccer videos using deep learning. The paper introduces a novel dataset, a model, and a triple-level evaluation. The dataset consists of 22k caption-clip pairs and three visual features (images, optical flow, inpainting) drawn from 500 hours of SoccerNet videos. The model is divided into parts: a transformer learns the language, ConvNets learn the vision, and a fusion of the linguistic and visual parts generates captions....
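A minimal sketch of the division of labour the abstract describes (a transformer for language, ConvNet features for vision, a fusion step for generation) might look as follows; the feature dimensions, temporal pooling, and small decoder head are illustrative assumptions rather than the paper's architecture.

# Hedged sketch: fuse pooled ConvNet clip features with transformer-encoded
# caption prefixes, then predict next-word logits.
import torch
import torch.nn as nn

class FusionCaptioner(nn.Module):
    def __init__(self, vocab_size, vis_dim=2048, txt_dim=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, txt_dim)
        # Language branch: a small transformer encoder over the caption prefix.
        layer = nn.TransformerEncoderLayer(d_model=txt_dim, nhead=8, batch_first=True)
        self.lang = nn.TransformerEncoder(layer, num_layers=2)
        # Vision branch: precomputed ConvNet features per clip frame, pooled over time.
        self.vis_proj = nn.Linear(vis_dim, txt_dim)
        # Fusion: concatenate pooled vision with each language state, then predict.
        self.fuse = nn.Linear(2 * txt_dim, txt_dim)
        self.out = nn.Linear(txt_dim, vocab_size)

    def forward(self, vis_feats, caption_prefix):
        v = self.vis_proj(vis_feats.mean(dim=1))           # (B, txt_dim) pooled clip feature
        h = self.lang(self.word_embed(caption_prefix))     # (B, T, txt_dim)
        v = v.unsqueeze(1).expand(-1, h.size(1), -1)       # broadcast vision over time steps
        fused = torch.relu(self.fuse(torch.cat([h, v], dim=-1)))
        return self.out(fused)                             # next-word logits

model = FusionCaptioner(vocab_size=8000)
logits = model(torch.randn(2, 16, 2048), torch.randint(0, 8000, (2, 10)))
print(logits.shape)  # (2, 10, 8000)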
