Hands and speech in space: multimodal interaction with augmented reality interfaces

Date

2013

Authors

Billinghurst, M.

Type

Conference paper

Citation

Proceedings of the 15th ACM on International Conference on Multimodal Interaction (ICMI), 2013, pp.379-380

Conference Name

15th ACM International Conference on Multimodal Interaction (ICMI) (9 Dec 2013 - 13 Dec 2013 : Sydney, Australia)

Abstract

Augmented Reality (AR) is technology that allows virtual imagery to be seamlessly integrated into the real world. Although first developed in the 1960s, AR has only recently become widely available, through platforms such as the web and mobile phones. However, most AR interfaces support only very simple interaction, such as touch input on phone screens or camera tracking from real images. New depth-sensing and gesture-tracking technologies such as the Microsoft Kinect or Leap Motion have made it easier than ever before to track hands in space. Combined with speech recognition and AR tracking and viewing software, it is possible to create interfaces that allow users to manipulate 3D graphics in space through a natural combination of speech and gesture. In this paper I will review previous research in multimodal AR interfaces and give an overview of the significant research questions that need to be addressed before speech and gesture interaction can become commonplace.

Rights

Copyright 2009 Mark Billinghurst. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, and that copies bear this notice and the full citation on the first page.
