A Framework of Adaptive Multimodal Input for Location-Based Augmented Reality Application

Authors

  • Rimaniza Zainal Abidin, Centre of Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor
  • Haslina Arshad, Centre of Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor
  • Saidatul A’isyah Ahmad Shukri, Centre of Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor

Keywords:

Adaptive Interfaces, Mobile Augmented Reality, Multimodal Interfaces, Mobile Sensors

Abstract

Location-based AR is one of the most familiar types of mobile application in use today. The user’s position relative to the real world is located, and digital information is overlaid to provide information about the user’s current location and surroundings. Four main types of mobile augmented reality interfaces have been studied, one of which is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture and gaze) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but little work has reviewed frameworks of adaptive multimodal input for mobile augmented reality applications. This paper presents a conceptual framework that illustrates an adaptive multimodal interface for location-based augmented reality applications. We reviewed several frameworks proposed in the fields of multimodal interfaces, adaptive interfaces and location-based augmented reality. We analyzed the components of these frameworks and determined which input modalities can be applied on mobile devices. Our framework can serve as a guide for designers and developers building location-based AR applications with adaptive multimodal interaction.

Published

2017-09-15

How to Cite

Zainal Abidin, R., Arshad, H., & Ahmad Shukri, S. A. (2017). A Framework of Adaptive Multimodal Input for Location-Based Augmented Reality Application. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 9(2-11), 97–103. Retrieved from https://jtec.utem.edu.my/jtec/article/view/2745