A Framework of Adaptive Multimodal Input for Location-Based Augmented Reality Application
Keywords: Adaptive Interfaces, Mobile Augmented Reality, Multimodal Interfaces, Mobile Sensors

Abstract
Location-based AR is one of the most familiar types of mobile augmented reality application in use today. The user's position relative to the real world is located, and digital information is overlaid to provide information about the user's current location and surroundings. Four main types of mobile augmented reality interface have been studied, one of which is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, and gaze) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but little work has reviewed frameworks for adaptive multimodal input in mobile augmented reality applications. This paper presents a conceptual framework for an adaptive multimodal interface for location-based augmented reality applications. We reviewed several frameworks proposed in the fields of multimodal interfaces, adaptive interfaces, and location-based augmented reality, analyzed the components of those frameworks, and assessed which input modalities can be applied on mobile devices. Our framework can serve as a guide for designers and developers building location-based AR applications with adaptive multimodal interaction.
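To make the idea of an adaptive multimodal interface concrete, the sketch below shows a minimal rule-based modality selector driven by mobile sensor context. The context fields, thresholds, and modality names are illustrative assumptions for this sketch, not components of the paper's framework.

```python
# A minimal sketch of adaptive input-modality selection for a
# location-based AR app. All thresholds and field names are assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    ambient_noise_db: float  # e.g. estimated from the microphone
    is_walking: bool         # e.g. inferred from the accelerometer

def select_modalities(ctx: Context) -> list[str]:
    """Return input modalities suited to the current sensed context."""
    modalities = []
    if ctx.ambient_noise_db < 60:      # quiet enough for speech input
        modalities.append("speech")
    if not ctx.is_walking:             # pen input needs a steady hand
        modalities += ["touch", "pen"]
    else:
        modalities.append("touch")     # coarse touch still works on the move
    return modalities

# Example: a noisy street while walking rules out speech and pen.
print(select_modalities(Context(ambient_noise_db=75.0, is_walking=True)))
# → ['touch']
```

A real adaptive interface would replace these fixed rules with the context-monitoring and adaptation components that the reviewed frameworks describe, but the control flow (sense context, then enable or rank modalities) is the same.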