Search results for: audio visually oriented instruction

Number of results: 262785

2012
Anh-Phuong Ta, Mathieu Ben, Guillaume Gravier

Can we discover audio-visually consistent events from videos in a totally unsupervised manner? And, how to mine videos with different genres? In this paper we present our new results in automatically discovering audio-visual events. A new measure is proposed to select audio-visually consistent elements from the two dendrograms respectively representing hierarchical clustering results for the au...

2011
Nazlena Mohamad Ali, Hyowon Lee, Alan F. Smeaton

Automatic media content analysis in multimedia is a very promising field of research bringing in various possibilities for enhancing visual informatics. By computationally analysing the quantitative data contained in text, audio, image and video media, more semantically meaningful and useful information on the media contents can be derived, extracted and visualised, informing human users those ...

Ali Derakhshesh, Sasan Baleghizadeh

Many studies have examined the effect of different approaches to teaching grammar including explicit and implicit instruction. However, research in this area is limited in a number of respects. One such limitation pertains to the issue of construct validity of the measures, i.e. the knowledge developed through implicit instruction has been measured through instruments which favor th...

Journal: International Journal of Engineering & Technology, 2018

2017
Michael Mulshine, Jeff Snyder

This paper introduces an audio synthesis library written in C with “object oriented” programming principles in mind. We call it OOPS: Object-Oriented Programming for Sound, or, “Oops, it’s not quite Object-Oriented Programming in C.” The library consists of several UGens (audio components) and a framework to manage these components. The design emphases of the library are efficiency and organiza...
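The abstract does not show the library's actual API, so the following is only a minimal sketch of the general pattern it alludes to: writing an audio component ("UGen") in an object-oriented C style, with a struct holding state and functions acting as constructor, per-sample method, and destructor. All names here (SineUGen, SineUGen_new, SineUGen_tick) are hypothetical, not taken from OOPS.

```c
/* Hypothetical sketch (not the actual OOPS API): a sine-oscillator "UGen"
 * in object-oriented C style: a struct for state plus functions as methods. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct SineUGen {
    float sampleRate;
    float freq;
    float phase;   /* current phase in radians */
} SineUGen;

/* "Constructor": allocate and initialize the object. */
SineUGen *SineUGen_new(float sampleRate, float freq) {
    SineUGen *s = malloc(sizeof *s);
    if (!s) return NULL;
    s->sampleRate = sampleRate;
    s->freq = freq;
    s->phase = 0.0f;
    return s;
}

/* "Method": produce one output sample and advance the internal phase. */
float SineUGen_tick(SineUGen *s) {
    float out = sinf(s->phase);
    s->phase += 2.0f * (float)M_PI * s->freq / s->sampleRate;
    if (s->phase > 2.0f * (float)M_PI) s->phase -= 2.0f * (float)M_PI;
    return out;
}

/* "Destructor": release the object. */
void SineUGen_free(SineUGen *s) { free(s); }

int main(void) {
    SineUGen *osc = SineUGen_new(44100.0f, 440.0f);
    for (int i = 0; i < 8; i++)
        printf("%f\n", SineUGen_tick(osc));
    SineUGen_free(osc);
    return 0;
}
```

The design choice this pattern illustrates is why such a library can stay efficient in plain C: each component is just a struct and a few free functions, so there is no virtual dispatch unless the framework explicitly adds it.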

2015
Shigueo Nomura, Takayuki Shiose, Hiroshi Kawakami, Osamu Katai

We developed a concept of interfaces using nonspeech audio for building wearable devices to support visually impaired persons. The main purpose is to enable visually impaired persons to freely conceptualize spatial information by nonspeech audio without requiring conventional means, such as artificial pattern recognition and voice synthesizer systems. Subjects participated in experimen...

Journal: Neurocase, 2014
Lisa Ampe, Ning Ma, Nicole Van Hoeck, Marie Vandekerckhove, Frank Van Overwalle

Past fMRI research has demonstrated that the mirror network is strongly involved in understanding other people's visually presented behavior. However, the mentalizing network is also recruited when a visually presented action is unusual and/or when perceivers think explicitly about the intention. To further explore the conditions that trigger mentalizing activity, we replicated one such study (d...

2016
Gabriel Sargent, Gabriel Barbosa de Fonseca, Izabela Lyon Freire, Ronan Sicre, Zenilton Kleber Gonçalves do Patrocínio, Silvio Jamil Ferzoli Guimarães, Guillaume Gravier

This paper describes the systems developed by PUC Minas and IRISA for the person discovery task at MediaEval 2016. We adopt a graph-based representation and investigate two tag-propagation approaches to associate overlays co-occurring with some speaking faces to other visually or audio-visually similar speaking faces. Given a video, we first build a graph from the detected speaking faces (nodes)...
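The abstract only outlines the approach, so the following is a rough, assumed illustration (not the authors' code) of one propagation step over a toy similarity graph: faces that co-occur with a name overlay start labeled, and each unlabeled face takes the name of its most similar labeled neighbor. The similarity values, labels, and the nearest-labeled-neighbor rule are all assumptions for illustration.

```c
/* Hypothetical sketch: one pass of simple tag propagation over a
 * speaking-face similarity graph (toy values, not the authors' method). */
#include <stdio.h>

#define N 4  /* number of speaking-face nodes in the toy example */

int main(void) {
    /* Assumed pairwise (audio-)visual similarities between faces. */
    float sim[N][N] = {
        {1.0f, 0.8f, 0.1f, 0.2f},
        {0.8f, 1.0f, 0.2f, 0.1f},
        {0.1f, 0.2f, 1.0f, 0.7f},
        {0.2f, 0.1f, 0.7f, 1.0f},
    };
    /* Names from text overlays; "" means no overlay co-occurred. */
    const char *label[N] = {"alice", "", "bob", ""};
    const char *out[N];

    for (int i = 0; i < N; i++) {
        if (label[i][0] != '\0') {        /* already labeled by an overlay */
            out[i] = label[i];
            continue;
        }
        float best = 0.0f;
        const char *bestLabel = "";
        for (int j = 0; j < N; j++) {
            if (j == i || label[j][0] == '\0') continue;
            if (sim[i][j] > best) {       /* most similar labeled neighbor */
                best = sim[i][j];
                bestLabel = label[j];
            }
        }
        out[i] = bestLabel;               /* stays "" if no labeled neighbor */
    }

    for (int i = 0; i < N; i++)
        printf("face %d -> %s\n", i, out[i][0] ? out[i] : "(unknown)");
    return 0;
}
```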

2007
Philip Strain

A requirements capture carried out with thirty blind and visually impaired participants has outlined many issues visually impaired people face when accessing the Web using current assistive technology. One key finding was that spatial information is not conveyed to users. An assistive multimodal interface has been developed that conveys spatial information to users via speech, audio and haptics...

2010
Ying Ying Huang

In this paper, an experimental study is presented on navigation in a 3D virtual environment by blind and visually impaired people with haptic and audio interaction. A simple 3D labyrinth is developed with haptic and audio interfaces to allow blind and visually impaired persons to access a three-dimensional Virtual Reality scene through senses of touch and hearing. The user must move from inside...
