Search results for: multimodal input

Number of results: 250,965

Journal: Human-Computer Interaction 1997
Sharon L. Oviatt Wolfgang Wahlster

The growing emphasis on multimodal interface design is fundamentally inspired by the aim to support natural, flexible, efficient, and powerfully expressive means of human-computer interaction that are easy to learn and use. Multimodal interfaces represent a new direction for computing that draws from the myriad input and output technologies becoming available, and that potentially can integrate...

Journal: Brain Research: Cognitive Brain Research 2002
Paul Patton Kamel Belkacem-Boussaid Thomas J Anastasio

The deep superior colliculus (DSC) integrates multisensory input and triggers an orienting movement toward the source of stimulation (target). It would seem reasonable to suppose that input of an additional modality should always increase the amount of information received by a DSC neuron concerning a target. However, of all DSC neurons studied, only about one half in the cat and one-quarter in...

2003
J. A. González-Bernal C. A. Reyes-García

The objective of our work is the development of a natural language dialogue system for information retrieval with multimodal input and multimedia output. Overall, the system consists of three phases: input analysis, information and knowledge management and output generation. The dialogue system is designed for consulting old Mexican historical documents. In this paper we describe the designed a...

H. Nezamabadi-pour M. B. Dowlatshahi V. Derhami

In recent decades, many efforts have been made to solve multimodal optimization problems using Particle Swarm Optimization (PSO). To produce good results, these PSO algorithms need to specify niching parameters to define the local neighborhood. In this paper, our motivation is to propose novel neighborhood structures that remove undesirable niching parameters without sacrificing perf...
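The abstract above is truncated, so the paper's specific neighborhood structures are not shown here. As a rough illustration of the idea of niching without extra radius parameters, the following is a minimal sketch of local-best PSO with a fixed ring topology (every function name, constant, and the test function are illustrative assumptions, not the paper's method):

```python
import math
import random

def lbest_pso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0):
    # Local-best PSO with a ring neighborhood: each particle follows the
    # best personal best among itself and its two ring neighbors, so no
    # separate niching radius or species parameter is needed.
    random.seed(0)
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    w, c1, c2 = 0.7, 1.5, 1.5  # standard inertia / acceleration weights
    for _ in range(iters):
        for i in range(n):
            # ring neighborhood: particles i-1, i, i+1 (indices wrap)
            nbrs = [(i - 1) % n, i, (i + 1) % n]
            g = min(nbrs, key=lambda j: pval[j])
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (pbest[g][d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
    return min(pval)

# usage: minimize a simple 1-D multimodal (Rastrigin-style) function
best = lbest_pso(
    lambda p: p[0] ** 2 + 10 * (1 - math.cos(2 * math.pi * p[0])), dim=1)
```

The ring topology slows information spread across the swarm, letting subgroups settle on different optima, which is why it is a common baseline for parameter-free niching.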

Journal: Pattern Recognition 2015
Wei Zhang Youmei Zhang Lin Ma Jingwei Guan Shijie Gong

In this paper, multimodal learning for facial expression recognition (FER) is proposed. The multimodal learning method makes the first attempt to learn the joint representation by considering the texture and landmark modalities of facial images, which are complementary to each other. In order to learn the representation of each modality and the correlation and interaction between different moda...

Journal: Computer Methods and Programs in Biomedicine 2007
Marco Viceconti Cinzia Zannoni Debora Testi Marco Petrone Stefano Perticoni Paolo Quadrani Fulvia Taddei Silvano Imboden Gordon Clapworthy

This paper describes a new application framework (OpenMAF) for rapid development of multimodal applications in computer-aided medicine. MAF applications are multimodal in data, in representation, and in interaction. The framework supports almost any type of biomedical data, including DICOM datasets, motion-capture recordings, or data from computer simulations (e.g. finite element modeling). The...

2007
Werner Kurschl Wolfgang Gottesheim Stefan Mitsch Rene Prokop Johannes Schönböck

Mobile broadband internet access and powerful mobile devices make interesting and novel communication applications possible (e.g., recently emerging VoIP applications). Additionally, speech recognition has matured to the point that companies can seriously consider its use. We developed a distributed framework that enables multimodal user interfaces with speech recognition (dictation and command...

2008
Sebastian Weber Yaser Ghanam Xin Wang Frank Maurer

This paper presents Agile Planner for Digital Tabletops (APDT), a tool that facilitates agile planning meetings using large horizontal displays. Utilizing APDT on a reasonably sized digital tabletop allows collaborators to create, edit, move, rotate, toss and delete index cards just as they would with paper artifacts. APDT provides a multimodal input system that supports gesture-, handwr...

Journal: ICST Trans. Security Safety 2015
Wataru Noguchi Hiroyuki Iizuka Masahito Yamamoto

We propose an architecture of neural network that can learn and integrate sequential multimodal information using Long Short Term Memory. Our model consists of encoder and decoder LSTMs and multimodal autoencoder. For integrating sequential multimodal information, firstly, the encoder LSTM encodes a sequential input to a fixed range feature vector for each modality. Secondly, the multimodal aut...
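The abstract describes a two-step flow: an encoder LSTM compresses each modality's sequence into a fixed-size vector, and a multimodal autoencoder then integrates the per-modality vectors. Below is a shape-level sketch of that flow using a minimal hand-rolled LSTM cell with random, untrained weights and simple concatenation as the fusion step (the toy data, dimensions, and function names are all assumptions; the paper's model uses trained LSTMs and a learned autoencoder):

```python
import math
import random

def lstm_encode(seq, hidden=8, seed=0):
    # Minimal LSTM cell (random, untrained weights) that folds a sequence
    # of feature vectors into one fixed-size hidden state, mirroring the
    # per-modality encoder step described in the abstract.
    rng = random.Random(seed)
    d = len(seq[0])
    W = [[rng.gauss(0, 0.1) for _ in range(d + hidden)]
         for _ in range(4 * hidden)]
    h = [0.0] * hidden  # hidden state
    c = [0.0] * hidden  # cell state
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for x in seq:
        z = x + h  # concatenate input with previous hidden state
        gates = [sum(w * v for w, v in zip(row, z)) for row in W]
        i = [sig(g) for g in gates[0:hidden]]                # input gate
        f = [sig(g) for g in gates[hidden:2 * hidden]]       # forget gate
        o = [sig(g) for g in gates[2 * hidden:3 * hidden]]   # output gate
        g = [math.tanh(gg) for gg in gates[3 * hidden:]]     # candidate
        c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
    return h

# Two toy modalities with different feature sizes and the same length.
video = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.4]]
audio = [[0.5], [0.2], [0.9]]

# Fusion by concatenation stands in for the multimodal autoencoder's
# joint representation; each code is fixed-size regardless of sequence
# length, which is the property the encoder step provides.
joint = lstm_encode(video) + lstm_encode(audio, seed=1)
```

In the paper's setting the decoder LSTMs would then reconstruct each modality's sequence from the joint code, which is what forces the shared representation to retain information from both modalities.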

Thesis: Ministry of Science, Research and Technology - University of Tabriz - Faculty of Literature and Foreign Languages 1392

The aim of this research... According to the findings, the effectiveness of these techniques compared with other methods has so far not been examined or reported statistically, in numerical terms, and for this reason the approach has failed to attract the attention of language-teaching professors and instructors in our country. We therefore set out in this study to measure the effect of the techniques introduced in this approach through an experimental study on three groups of students...

[Chart: number of search results per year]