The SCEAS System

Conferences in DBLP

Int. Conf. on Multimodal Interfaces (ICMI) (icmi)
2004 (conf/icmi/2004)

  1. Yoshinori Kuno, Arihiro Sakurai, Dai Miyauchi, Akio Nakamura
    Two-way eye contact between humans and robots. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:1-8 [Conf]
  2. Randy Stein, Susan Brennan
    Another person's eye gaze as a cue in solving programming problems. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:9-15 [Conf]
  3. Takehiko Ohno
    EyePrint: support of document browsing with eye gaze trace. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:16-23 [Conf]
  4. Dominic W. Massaro
    A framework for evaluating multimodal integration by humans and a role for embodied conversational agents. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:24-31 [Conf]
  5. Louis-Philippe Morency, Trevor Darrell
    From conversational tooltips to grounded discourse: head pose tracking in interactive dialog systems. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:32-37 [Conf]
  6. Niels Ole Bernsen, Laila Dybkjær
    Evaluation of spoken multimodal conversation. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:38-45 [Conf]
  7. Matthew Turk, Jeremy N. Bailenson, Andrew C. Beall, Jim Blascovich, Rosanna Guadagno
    Multimodal transformed social interaction. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:46-52 [Conf]
  8. Gunther Heidemann, Ingo Bax, Holger Bekel
    Multimodal interaction in an augmented reality scenario. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:53-60 [Conf]
  9. Paulo Barthelmess, Clarence A. Ellis
    The ThreadMill architecture for stream-oriented human communication analysis applications. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:61-68 [Conf]
  10. Andrew D. Wilson
    TouchLight: an imaging touch screen and display for gesture-based interaction. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:69-76 [Conf]
  11. Laroussi Bouguila, Florian Evéquoz, Michèle Courant, Béat Hirsbrunner
    Walking-pad: a step-in-place locomotion interface for virtual environments. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:77-81 [Conf]
  12. Datong Chen, Robert Malkin, Jie Yang
    Multimodal detection of human interaction events in a nursing home environment. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:82-89 [Conf]
  13. Joshua Juster, Deb Roy
    Elvis: situated speech and gesture understanding for a robotic chandelier. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:90-96 [Conf]
  14. Stefan Kopp, Paul Tepper, Justine Cassell
    Towards integrated microplanning of language and iconic gesture for multimodal output. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:97-104 [Conf]
  15. Sanshzar Kettebekov
    Exploiting prosodic structuring of coverbal gesticulation. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:105-112 [Conf]
  16. Jacob Eisenstein, Randall Davis
    Visual and linguistic information in gesture classification. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:113-120 [Conf]
  17. Mary P. Harper, Elizabeth Shriberg
    Multimodal model integration for sentence unit detection. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:121-128 [Conf]
  18. Sharon L. Oviatt, Rachel Coulston, Rebecca Lunsford
    When do we interact multimodally?: cognitive load and multimodal communication patterns. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:129-136 [Conf]
  19. Zhihong Zeng, Jilin Tu, Ming Liu, Tong Zhang, Nicholas Rizzolo, ZhenQiu Zhang, Thomas S. Huang, Dan Roth, Stephen E. Levinson
    Bimodal HCI-related affect recognition. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:137-143 [Conf]
  20. Michael Katzenmaier, Rainer Stiefelhagen, Tanja Schultz
    Identifying the addressee in human-human-robot interactions based on head pose and speech. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:144-151 [Conf]
  21. Kate Saenko, Trevor Darrell, James R. Glass
    Articulatory features for robust visual speech recognition. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:152-158 [Conf]
  22. Sébastien Grange, Terrence Fong, Charles Baur
    M/ORIS: a medical/operating room interaction system. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:159-166 [Conf]
  23. André D. Milota
    Modality fusion for graphic design applications. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:167-174 [Conf]
  24. Hartwig Holzapfel, Kai Nickel, Rainer Stiefelhagen
    Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:175-182 [Conf]
  25. Adam Bodnar, Richard Corbett, Dmitry Nekrasovski
    AROMA: ambient awareness through olfaction in a messaging application. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:183-190 [Conf]
  26. Robert L. Williams II, Mayank Srivastava, John N. Howell, Robert R. Conatser Jr., David C. Eland, Janet M. Burns, Anthony G. Chila
    The virtual haptic back for palpatory training. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:191-197 [Conf]
  27. Liang-Guo Zhang, Yiqiang Chen, Gaolin Fang, Xilin Chen, Wen Gao
    A vision-based sign language recognition system using tied-mixture density HMM. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:198-204 [Conf]
  28. Carlos Busso, Zhigang Deng, Serdar Yildirim, Murtaza Bulut, Chul Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich Neumann, Shrikanth Narayanan
    Analysis of emotion recognition using facial expressions, speech and multimodal information. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:205-211 [Conf]
  29. Pierre Dragicevic, Jean-Daniel Fekete
    Support for input adaptability in the ICON toolkit. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:212-219 [Conf]
  30. Myra P. van Esch-Bussemakers, Anita H. M. Cremers
    User walkthrough of multimodal access to multidimensional databases. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:220-226 [Conf]
  31. Sanjeev Kumar, Philip R. Cohen, Rachel Coulston
    Multimodal interaction under exerted conditions in a natural field setting. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:227-234 [Conf]
  32. Timothy J. Hazen, Kate Saenko, Chia-Hao La, James R. Glass
    A segment-based audio-visual speech recognizer: data collection, development, and initial experiments. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:235-242 [Conf]
  33. Rémi Bastide, David Navarre, Philippe A. Palanque, Amélie Schyn, Pierre Dragicevic
    A model-based approach for real-time embedded multimodal systems in military aircrafts. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:243-250 [Conf]
  34. Jullien Bouchet, Laurence Nigay, Thierry Ganille
    ICARE software components for rapidly developing multimodal interfaces. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:251-258 [Conf]
  35. R. Travis Rose, Francis K. H. Quek, Yang Shi
    MacVisSTA: a system for multimodal analysis. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:259-264 [Conf]
  36. Norbert Pfleger
    Context based multimodal fusion. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:265-272 [Conf]
  37. Jianhua Tao, Tieniu Tan
    Emotional Chinese talking head system. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:273-280 [Conf]
  38. Saija Patomäki, Roope Raisamo, Jouni Salo, Virpi Pasto, Arto Hippula
    Experiences on haptic interfaces for visually impaired young children. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:281-288 [Conf]
  39. Shahzad Malik, Joseph Laszlo
    Visual touchpad: a two-handed gestural input device. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:289-296 [Conf]
  40. Curry I. Guinn, Robert C. Hubal
    An evaluation of virtual human technology in informational kiosks. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:297-302 [Conf]
  41. Brian Goldiez, Glenn A. Martin, Jason Daly, Donald Washburn, Todd Lazarus
    Software infrastructure for multi-modal virtual environments. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:303-308 [Conf]
  42. Anmol Madan, Ron Caneel, Alex Pentland
    GroupMedia: distributed multi-modal interfaces. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:309-316 [Conf]
  43. Eric R. Hamilton
    Agent and library augmented shared knowledge areas (ALASKA). [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:317-318 [Conf]
  44. Songsak Channarukul, Susan Weber McRoy, Syed S. Ali
    MULTIFACE: multimodal content adaptations for heterogeneous devices. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:319-320 [Conf]
  45. Joseph M. Dalton, Ali Ahmad, Kay M. Stanney
    Command and control resource performance predictor (C2RP2). [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:321-322 [Conf]
  46. Luca Nardelli, Marco Orlandi, Daniele Falavigna
    A multi-modal architecture for cellular phones. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:323-324 [Conf]
  47. Matthias Merdes, Jochen Häußler, Matthias Jöst
    'SlidingMap': introducing and evaluating a new modality for map interaction. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:325-326 [Conf]
  48. Levent Bolelli, Guoray Cai, Hongmei Wang, Bita Mortazavi, Ingmar Rauschert, Sven Fuhrmann, Rajeev Sharma, Alan M. MacEachren
    Multimodal interaction for distributed collaboration. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:327-328 [Conf]
  49. Edward C. Kaiser, David Demirdjian, Alexander Gruenstein, Xiaoguang Li, John Niekrasz, Matt Wesson, Sanjeev Kumar
    A multimodal learning interface for sketch, speak and point creation of a schedule chart. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:329-330 [Conf]
  50. David Demirdjian, Kevin Wilson, Michael Siracusa, Trevor Darrell
    Real-time audio-visual tracking for meeting analysis. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:331-332 [Conf]
  51. Ashutosh Morde, Jun Hou, S. Kicha Ganapathy, Carlos D. Correa, Allan Meng Krebs, Lawrence Rabiner
    Collaboration in parallel worlds. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:333-334 [Conf]
  52. Paul E. Rybski, Satanjeev Banerjee, Fernando De la Torre, Carlos Vallespí, Alexander I. Rudnicky, Manuela M. Veloso
    Segmentation and classification of meetings using multiple information streams. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:335-336 [Conf]
  53. Péter Pál Boda
    A maximum entropy based approach for multimodal integration. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:337-338 [Conf]
  54. Pyush Agrawal, Ingmar Rauschert, Keerati Inochanon, Levent Bolelli, Sven Fuhrmann, Isaac Brewer, Guoray Cai, Alan M. MacEachren, Rajeev Sharma
    Multimodal interface platform for geographical information systems (GeoMIP) in crisis management. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:339-340 [Conf]
  55. Songsak Channarukul
    Adaptations of multimodal content in dialog systems targeting heterogeneous devices. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:341- [Conf]
  56. Lei Chen
    Utilizing gestures to better understand dynamic structure of human communication. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:342- [Conf]
  57. Dale-Marie Wilson
    Multimodal programming for dyslexic students. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:343- [Conf]
  58. Jacob Eisenstein
    Gestural cues for speech understanding. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:344- [Conf]
  59. Rajesh Chandrasekaran
    Using language structure for adaptive multimodal language acquisition. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:345- [Conf]
  60. Rebecca Lunsford
    Private speech during multimodal human-computer interaction. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:346- [Conf]
  61. Emily Bennett
    Projection augmented models: the effect of haptic feedback on subjective and objective human factors. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:347- [Conf]
  62. Agnes Lisowska
    Multimodal interface design for multimodal meeting content retrieval. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:348- [Conf]
  63. Leah Reeves
    Determining efficient multimodal information-interaction spaces for C2 systems. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:349- [Conf]
  64. Cristy Ho
    Using spatial warning signals to capture a driver's visual attention. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:350- [Conf]
  65. Saija Patomäki
    Multimodal interfaces and applications for visually impaired children. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:351- [Conf]
  66. Feng Jiang, Hongxun Yao, Guilin Yao
    Multilayer architecture in sign language recognition system. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:352-353 [Conf]
  67. Erno Mäkinen
    Computer vision techniques and applications in human-computer interaction. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:354- [Conf]
  68. Levent Bolelli
    Multimodal response generation in GIS. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:355- [Conf]
  69. Ingmar Rauschert
    Adaptive multimodal recognition of voluntary and involuntary gestures of people with motor disabilities. [Citation Graph (0, 0)][DBLP]
    ICMI, 2004, pp:356- [Conf]
NOTICE1
The system may occasionally be unavailable or malfunction, since it is still under development and undergoing continuous upgrades.
NOTICE2
The rankings presented on this page should NOT be considered definitive, since the citation information in DBLP is incomplete.
System created by asidirop@csd.auth.gr [http://users.auth.gr/~asidirop/]
for the Data Engineering Laboratory, Department of Informatics, Aristotle University © 2002