Conferences in DBLP
Natural interfaces in the field: the case of pen and paper.
Manipulating trigonometric expressions encoded through electro-tactile signals.
Multimodal system evaluation using modality efficiency and synergy metrics.
Effectiveness and usability of an online help agent embodied as a talking head.
Interaction techniques for the analysis of complex data on high-resolution displays.
Role recognition in multiparty recordings using social affiliation networks and discrete distributions.
Audiovisual laughter detection based on temporal features.
Predicting two facets of social verticality in meetings from five-minute time slices and nonverbal cues.
Multimodal recognition of personality traits in social interactions.
Social signals, their function, and automatic analysis: a survey.
VoiceLabel: using speech to label mobile sensor data.
The babbleTunes system: talk to your iPod!
Evaluating talking heads for smart home systems.
Perception of dynamic audiotactile feedback to gesture input.
An integrative recognition method for speech and gestures.
As go the feet...: on the estimation of attentional focus from stance.
Knowledge and data flow architecture for reference processing in multimodal dialog systems.
The CAVA corpus: synchronised stereoscopic and binaural datasets with head movements.
Towards a minimalist multimodal dialogue framework using recursive MVC pattern.
Explorative studies on multimodal interaction in a PDA- and desktop-based scenario.
Designing context-aware multimodal virtual environments.
A high-performance dual-wizard infrastructure for designing speech, pen, and multimodal interfaces.
The WAMI toolkit for developing, deploying, and evaluating web-accessible multimodal interfaces.
A three-dimensional characterization space of software components for rapidly developing multimodal interfaces.
Crossmodal congruence: the look, feel and sound of touchscreen widgets.
MultiML: a general purpose representation language for multimodal human utterances.
Deducing the visual focus of attention from head pose estimation in dynamic multi-view meeting scenarios.
Context-based recognition during human interactions: automatic feature selection and encoding dictionary.
AcceleSpell, a gestural interactive game to learn and practice finger spelling.
A multi-modal spoken dialog system for interactive TV.
Multimodal slideshow: demonstration of the OpenInterface interaction development environment.
A browser-based multimodal interaction system.
iGlasses: an automatic wearable speech supplement in face-to-face communication and classroom situations.
Innovative interfaces in MonAMI: the reminder.
PHANTOM prototype: exploring the potential for learning with multimodal features in dentistry.
Audiovisual 3D rendering as a tool for multimodal interfaces.
Multimodal presentation and browsing of music.
An audio-haptic interface based on auditory depth cues.
Detection and localization of 3D audio-visual objects using unsupervised clustering.
Robust gesture processing for multimodal interaction.
Investigating automatic dominance estimation in groups from visual attention and speaking activity.
Dynamic modality weighting for multi-stream HMMs in audio-visual speech recognition.
A Fitts Law comparison of eye tracking and manual input in the selection of visual targets.
A Wizard of Oz study for an AR multimodal interface.
A realtime multimodal system for analyzing group meetings by combining face pose tracking and speaker diarization.
Designing and evaluating multimodal interaction for mobile contexts.
Automated sip detection in naturally-evoked video.
Perception of low-amplitude haptic stimuli when biking.
TactiMote: a tactile remote control for navigating in long lists.
The DIRAC AWEAR audio-visual platform for detection of unexpected and incongruent events.
Smoothing human-robot speech interactions by using a blinking-light as subtle expression.
Feel-good touch: finding the most pleasant tactile feedback for a mobile touch screen button.
Embodied conversational agents for voice-biometric interfaces.