A Position-Aware Transformer for Image Captioning
Authors
Abstract
Image captioning aims to generate a corresponding description of an image. In recent years, neural encoder-decoder models have been the dominant approach, in which a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are used to translate an image into a natural language description. Among these models, visual attention mechanisms are widely adopted to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. However, most conventional attention mechanisms operate on high-level features alone, ignoring the effects of other feature levels and giving insufficient consideration to the relative positions between features. In this work, we propose a Position-Aware Transformer model with image-feature fusion and a position-aware attention mechanism to address the above problems. The model first extracts multi-level features using a Feature Pyramid Network (FPN), then fuses them with scaled dot-product attention, which enables our model to detect objects at different scales more effectively without increasing the number of parameters. With the position-aware attention mechanism, the relative positions between features are obtained first and afterwards incorporated into the original features, so that captions are generated more accurately. Experiments carried out on the MSCOCO dataset show that our approach achieves competitive BLEU-4, METEOR, ROUGE-L, and CIDEr scores compared with some state-of-the-art methods, demonstrating the effectiveness of our approach.
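The core operations the abstract describes, scaled dot-product attention fusing multi-level FPN features, with relative positions injected as an additional term, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature shapes, the use of queries from the finest pyramid level, and the additive position bias are all assumptions made for the example.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, pos_bias=None):
    """Standard scaled dot-product attention. `pos_bias` is an optional
    additive relative-position term (an assumed stand-in for the paper's
    position-aware mechanism, not the authors' exact formulation)."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # (Nq, Nk) similarities
    if pos_bias is not None:
        scores = scores + pos_bias                # inject relative positions
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (Nq, d_k) fused features

# Fuse multi-level (FPN-style) features: queries taken from the finest
# level attend over features concatenated across all pyramid levels,
# so coarse and fine scales contribute without adding parameters.
rng = np.random.default_rng(0)
levels = [rng.standard_normal((n, 64)) for n in (49, 25, 9)]  # 3 pyramid levels
keys = values = np.concatenate(levels, axis=0)                # (83, 64)
queries = levels[0]                                           # (49, 64)
fused = scaled_dot_product_attention(queries, keys, values)
print(fused.shape)  # (49, 64)
```

Note that the attention itself introduces no learned parameters here; in the full model, learned query/key/value projections would precede this step.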
Similar resources
Contrastive Learning for Image Captioning
Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learn...
Stack-Captioning: Coarse-to-Fine Learning for Image Captioning
The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multistage prediction framework for image captioning, composed of multiple decoders each of which...
Phrase-based Image Captioning
Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representat...
Domain-Specific Image Captioning
We present a data-driven framework for image caption generation which incorporates visual and textual features with varying degrees of spatial structure. We propose the task of domain-specific image captioning, where many relevant visual details cannot be captured by off-the-shelf general-domain entity detectors. We extract previously-written descriptions from a database and adapt them to new q...
Convolutional Image Captioning
Image captioning is an important but challenging task, applicable to virtual assistants, editing tools, image indexing, and support of the disabled. Its challenges are due to the variability and ambiguity of possible image descriptions. In recent years significant progress has been made in image captioning, using Recurrent Neural Networks powered by long-short-term-memory (LSTM) units. Despite ...
Journal
Journal title: Computers, Materials & Continua
Year: 2022
ISSN: ['1546-2218', '1546-2226']
DOI: https://doi.org/10.32604/cmc.2022.019328