Multimodal Mood-based Annotation
Authors
Abstract
The paper presents an architecture for multimodal mood-based annotation systems. The architecture aims at the implementation of interactive multimodal systems that support communities of users in creating and managing annotations in locative media projects. The annotations are multimodal in that they can be created and accessed through visual and audio interaction. They are mood-based in that they reflect the mood of the user with respect to the point of interest he or she is commenting on. The paper gives a definition of multimodal mood-based annotation and a description of the architecture, illustrating in particular the interaction process between users and the system through the audio interface. A concrete application of the architecture is presented: an annotative locative media project aimed at supporting tourists in creating annotations related to the Valchiavenna valley in Italy.
Key-Words: Communication, Multimedia, Multimodal, Audio, Annotation, Mood, Speech Recognition
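To make the notion concrete, here is a minimal sketch of what one mood-based annotation record might look like. The paper does not specify a schema, so every field and name below is a hypothetical illustration of the idea that an annotation binds a point of interest, a user mood, and a multimodal payload:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Mood(Enum):
    POSITIVE = "positive"
    NEUTRAL = "neutral"
    NEGATIVE = "negative"

class Modality(Enum):
    TEXT = "text"
    AUDIO = "audio"
    IMAGE = "image"

@dataclass
class Annotation:
    """Hypothetical mood-based annotation attached to a point of interest."""
    poi_id: str        # point of interest being commented on
    author: str        # user who created the annotation
    mood: Mood         # the user's mood toward the point of interest
    modality: Modality # how the annotation was created (e.g. audio)
    content_uri: str   # reference to the text/audio/image payload
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a tourist leaves a positive audio comment about a POI
# in Valchiavenna (identifiers invented for illustration).
note = Annotation(
    poi_id="valchiavenna/acquafraggia-falls",
    author="tourist42",
    mood=Mood.POSITIVE,
    modality=Modality.AUDIO,
    content_uri="media/notes/0001.wav",
)
```

Separating the mood label from the payload reference is one way a system could filter or aggregate annotations by mood without decoding the audio or image content itself.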
Similar References
Mymtv: a Personalized and Interactive Music Channel
In this document, we present MyMTV, an interactive TV channel that adapts to the user's habits, tastes, and moods: a music recommendation channel tailored to the user's tastes. Users create a personalized channel of videos simply by selecting a song or an artist they like. The system identifies and logs which music video the user is watching. Based on this information, the system builds a user profile to impr...
SIDGrid: A Framework for Distributed, Integrated Multimodal Annotation, Archiving, and Analysis
The SIDGrid architecture provides a framework for distributed annotation, archiving, and analysis of the rapidly growing volume of multimodal data. The framework integrates three main components: an annotation and analysis client, a web-accessible data repository, and a portal to the distributed processing capability of the TeraGrid. The architecture provides both a novel integration of annotat...
The Good, the Bad, and the Angry: Analyzing Crowdsourced Impressions of Vloggers
We address the study of interpersonal perception in social conversational video, based on multifaceted impressions collected from short video-watching. First, we crowdsourced the annotation of personality, attractiveness, and mood impressions for a dataset of YouTube vloggers, generating a corpus with the potential to support automatic techniques for vlogger characterization. Then, we provide a...
An XML-Based Implementation of Multimodal Affective Annotation
Affective computing refers to computational devices that, in simple cases, recognize and act upon the emotions of their users, or, in more complex cases, have (or simulate having) emotions of their own. Multimodal technology is currently one of the most active focuses of affective computing research. However, the lack of a large-scale multimodal database limits the research to a few separate and scattered fields,...
An Exchange Format for Multimodal Annotations
This paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools.