DAEDALUS at ImageCLEF Medical Retrieval 2011: Textual, Visual and Multimodal Experiments
Authors
Abstract
This paper describes the participation of DAEDALUS in the ImageCLEF 2011 Medical Retrieval task. We have focused on multimodal (or mixed) experiments that combine textual and visual retrieval. The main objective of our research has been to evaluate the effect on the medical retrieval process of an extended corpus annotated with the image type, associated with both the image itself and its textual description. For this purpose, an image classifier has been developed to tag each document with its class (1st level of the hierarchy: Radiology, Microscopy, Photograph, Graphic, Other) and subclass (2nd level: AN, CT, MR, etc.). For the textual-based experiments, several runs using different semantic expansion techniques have been performed. For the visual-based retrieval, the different runs are defined by the corpus used in the retrieval process and the strategy for obtaining the class and/or subclass. The best results are achieved in runs that make use of the image subclass based on the classification of the sample images. Although different multimodal strategies have been submitted, none of them has proved able to provide results even comparable to those achieved by the textual retrieval alone. We believe this is because we have been unable to find a suitable metric for assessing the relevance of the results provided by the visual and textual processes.
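The multimodal strategies described above combine a textual and a visual ranked list, with documents additionally tagged by image class and subclass. The sketch below illustrates one common way such a combination can be done (weighted late fusion plus subclass filtering); all function names, the weighting scheme, and the example subclass labels are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of late fusion of textual and visual retrieval scores,
# with an optional filter on the predicted image subclass (e.g. 'CT', 'MR').
# Names and the alpha weighting are hypothetical, for illustration only.

def late_fusion(text_scores, visual_scores, alpha=0.7):
    """Linearly combine per-document scores from two retrieval systems.

    text_scores / visual_scores: dicts mapping doc_id -> normalized score.
    alpha: weight on the textual score (textual retrieval usually dominates
    in medical ImageCLEF runs, hence alpha > 0.5 here).
    """
    docs = set(text_scores) | set(visual_scores)
    return {
        d: alpha * text_scores.get(d, 0.0)
           + (1 - alpha) * visual_scores.get(d, 0.0)
        for d in docs
    }

def filter_by_subclass(scores, doc_subclass, query_subclass):
    """Keep only documents whose predicted subclass matches the query's."""
    return {d: s for d, s in scores.items()
            if doc_subclass.get(d) == query_subclass}
```

A run would then rank documents by the fused (and possibly filtered) score in descending order; the choice of `alpha` is exactly the kind of textual-vs-visual relevance trade-off the abstract reports difficulty in calibrating.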
Similar papers
DUTH at ImageCLEF 2011 Wikipedia Retrieval
As digital information is increasingly becoming multimodal, the days of single-language text-only retrieval are numbered. Take as an example Wikipedia where a single topic may be covered in several languages and include non-textual media such as image, audio, and video. Moreover, non-textual media may be annotated with text in several languages in a variety of metadata fields such as object cap...
Multimodal Medical Image Retrieval: Improving Precision at ImageCLEF 2009
We present results from Oregon Health & Science University’s participation in the medical retrieval task of ImageCLEF 2009. This year, we focused on improving retrieval performance, especially early precision, in the task of solving medical multimodal queries. These queries contain visual data, given as a set of image-examples, and textual data, provided as a set of words belonging to three dim...
Multimodal Information Approaches for the Wikipedia Collection at ImageCLEF 2011
The main goal of this paper is to present our experiments in the ImageCLEF 2011 campaign (Wikipedia retrieval task). In this edition we focused on applying different strategies for merging multimodal information, textual and visual, following both early and late fusion approaches. Our best runs are in the top ten of the global list, at positions 8, 9 and 10 with MAP 0.3405, 0.3367 and 0.323, being t...
FCSE at ImageCLEF 2012: Evaluating Techniques for Medical Image Retrieval
This paper presents the details of the participation of the FCSE (Faculty of Computer Science and Engineering) research team in the ImageCLEF 2012 medical retrieval task. We evaluated different weighting models for text retrieval. In the case of the visual retrieval, we focused on extracting low-level features and examining their performance. For the multimodal retrieval we used late ...
DEMIR at ImageCLEFMed 2011: Evaluation of Fusion Techniques for Multimodal Content-based Medical Image Retrieval
This paper presents the details of the participation of the DEMIR (Dokuz Eylul University Multimedia Information Retrieval) research team in the ImageCLEF 2011 Medical Retrieval task. This year, we evaluated a fusion and re-ranking method that combines the best low-level image features with the best text retrieval result. We improved results by examination of differe...