Search results for: captions

Number of results: 1268

2010
Miguel E. Ruiz, Jiangping Chen, Karthikeyan Pasupathy, Pok Chin, Ryan Knudson

This paper presents the results of the University of North Texas team in the Wikipedia image retrieval track of ImageCLEF 2010. Our approach is based on translating the French and German image captions into English and using Language Models to generate our runs. We also explore the use of complex queries by asking two users to manually build queries based on the origin...
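The Language Model runs described above can be sketched as query-likelihood scoring with Dirichlet smoothing, a standard retrieval formulation; the toy captions, the `mu` value, and the function names below are illustrative assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def query_likelihood(query_terms, doc_terms, collection_counts, collection_len, mu=100.0):
    """Score a caption under the query-likelihood language model
    with Dirichlet smoothing; higher means a better match."""
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        p_coll = collection_counts.get(term, 0) / collection_len  # background model
        p_term = (doc_counts.get(term, 0) + mu * p_coll) / (doc_len + mu)
        if p_term > 0:  # terms unseen in the whole collection contribute nothing
            score += math.log(p_term)
    return score

# Toy example: rank two translated captions against an English query.
captions = {
    "d1": "a red tram in the city centre".split(),
    "d2": "mountain landscape at sunset".split(),
}
coll = Counter(w for terms in captions.values() for w in terms)
coll_len = sum(coll.values())
query = "red tram".split()
ranked = sorted(captions,
                key=lambda d: query_likelihood(query, captions[d], coll, coll_len),
                reverse=True)
# "d1" ranks first because it contains both query terms.
```

Dirichlet smoothing keeps unseen query terms from zeroing out a caption's score, which matters for short documents like image captions.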

Journal: CoRR 2017
Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan Al-Regib, Hans Peter Graf

We address the problem of video captioning by grounding language generation on object interactions in the video. Existing work mostly focuses on overall scene understanding, often with limited or no emphasis on the object interactions needed for video understanding. In this paper, we propose SINet-Caption, which learns to generate captions grounded over higher-order interactions between...

Journal: Kajian linguistik dan sastra 2021

Speech acts are utterances that contain action as a function of communication, considering aspects of the speech situation. The objective of this research is to analyze the types of speech acts found in the Instagram captions of "WHO Indonesia". This study uses descriptive qualitative research. There are 332 data items containing several types of speech acts in the "WHO Indonesia" captions: directive, representative, and expressive acts. 1) Directive acts are performed so...

Journal: Studies in Second Language Acquisition 2021

Abstract: To probe the limits of attention raising through form-focused instruction, second-language research must adapt to the needs of a technologically driven learning environment. In this study, we used a randomized control design to investigate the effect of captioned media on vocabulary and grammar in L2 Spanish (n = 369 learners). Through four data-collection sessions, participants were presented with gram...

Journal: IEEE Access 2023

Within the museum community, the automatic generation of artwork descriptions is expected to accelerate improvements in accessibility for visually impaired visitors. Captions that describe artworks should be based on emotions because art is inseparable from viewers' emotional reactions. However, artworks typically do not have unique interpretations; thus, it is difficult for systems to reflect what is specified in captions preci...

Journal: Chinese Journal of Systems Engineering and Electronics 2023

In the field of satellite imagery, remote sensing image captioning (RSIC) is a hot topic facing the challenges of overfitting and difficult text alignment. To address these issues, this paper proposes a vision-language aligning paradigm for RSIC to jointly represent vision and language. First, a new dataset, DIOR-Captions, is built by augmenting the object detection in optical remote sensing images (DIOR) dataset with manually annotated Chinese and Engli...

Journal: CoRR 2017
Arjun Chandrasekaran, Devi Parikh, Mohit Bansal

Wit is a quintessential form of rich interhuman interaction, and is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns. We compare our approach against ...

2018
Ruotian Luo, Brian Price, Scott Cohen, Gregory Shakhnarovich

• ATTN models are better than FC models, and the discriminability objective works for both.
• The ATTN+CIDEr+* combination is our best choice.
• A moderate λ = 1 produces a good tradeoff between discriminability and fluency.
• Higher λ makes captions more discriminative to machines and to humans, but at the cost of fluency.
• With moderate λ, non-discriminative scores like BLEU, METEOR, and CIDEr improve as well! • especial...

Journal: IEEE Transactions on Geoscience and Remote Sensing 2021

Deep neural networks (DNNs) have recently become popular for image captioning problems in remote sensing (RS). Existing DNN-based approaches rely on the availability of a training set made up of a high number of RS images with their captions. However, the captions may contain redundant information (they can be repetitive or semantically similar to each other), resulting in a deficiency while learning a mappin...
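The caption redundancy this abstract points to can be illustrated with a simple near-duplicate filter; the Jaccard token-overlap measure, the threshold, and the sample captions below are illustrative assumptions, not the paper's actual method.

```python
def jaccard(a, b):
    """Token-overlap similarity between two captions (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def drop_redundant(captions, threshold=0.5):
    """Keep a caption only if it is not too similar to one already kept."""
    kept = []
    for c in captions:
        if all(jaccard(c, k) < threshold for k in kept):
            kept.append(c)
    return kept

caps = [
    "a river crosses the green fields",
    "a river crossing green fields",      # near-duplicate of the first
    "an airport with several planes",
]
unique = drop_redundant(caps)
# The second caption is filtered; the first and third survive.
```

Semantic (embedding-based) similarity would catch paraphrases that share no tokens, but a lexical filter already shows how repetitive captions shrink the effective training signal.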

Journal: Advances in transdisciplinary engineering 2022

Image captioning has gained a tremendous spotlight in recent years. However, most captioning models generate captions in the English language. In this paper, we present an image caption generator for our regional language, Hindi, using ResNet50 and an LSTM with an attention module. An experimental study is shown highlighting the effect of attention-based learning on the generated captions. The Flickr8k dataset is used to ...
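The attention module mentioned above can be sketched as soft attention over a grid of encoder features at each decoding step; the dot-product scoring, the 7×7×64 toy dimensions, and the function name below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def soft_attention(features, hidden):
    """Soft attention over spatial image features.

    features: (num_regions, feat_dim) grid of encoder features
              (e.g. a flattened 7x7 ResNet-style feature map).
    hidden:   (feat_dim,) current LSTM decoder state.
    Returns the attended context vector and the attention weights.
    """
    scores = features @ hidden                        # (num_regions,) alignment scores
    scores = scores - scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over regions
    context = weights @ features                      # weighted sum of region features
    return context, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 64))   # 7x7 grid of 64-d features (toy sizes)
h = rng.normal(size=64)
context, weights = soft_attention(feats, h)
```

At each word of the Hindi caption, the decoder would recompute these weights, letting it look at a different image region per generated word.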
