A Framework of Adaptive Multimodal Input for Location-Based Augmented Reality Application
Keywords: Adaptive Interfaces, Mobile Augmented Reality, Multimodal Interfaces, Mobile Sensors
Abstract: Location-based AR is one of the most familiar types of mobile application in use today. The user's position relative to the real world is located, and digital information can be overlaid to provide information about the user's current location and surroundings. Four main types of mobile augmented reality interfaces have been studied, one of which is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, and gaze) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal input in mobile augmented reality applications. This paper presents a conceptual framework that illustrates an adaptive multimodal interface for location-based augmented reality applications. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces, and location-based augmented reality. We analyzed the components of these frameworks and assessed which input modalities can be applied on mobile devices. Our framework can serve as a guide for designers and developers building location-based AR applications with adaptive multimodal interaction.
TRANSFER OF COPYRIGHT AGREEMENT
The manuscript is herewith submitted for publication in the Journal of Telecommunication, Electronic and Computer Engineering (JTEC). It has not been published before, and it is not under consideration for publication in any other journals. It contains no material that is scandalous, obscene, libelous or otherwise contrary to law. When the manuscript is accepted for publication, I, as the author, hereby agree to transfer to JTEC, all rights including those pertaining to electronic forms and transmissions, under existing copyright laws, except for the following, which the author(s) specifically retain(s):
- All proprietary rights other than copyright, such as patent rights
- The right to make further copies of all or part of the published article for my use in classroom teaching
- The right to reuse all or part of this manuscript in a compilation of my own works or in a textbook of which I am the author; and
- The right to make copies of the published work for internal distribution within the institution that employs me
I agree that copies made under these circumstances will continue to carry the copyright notice that appears in the original published work. I agree to inform my co-authors, if any, of the above terms. I certify that I have obtained written permission for the use of text, tables, and/or illustrations from any copyrighted source(s), and I agree to supply such written permission(s) to JTEC upon request.