Pilot Study of Emotion Recognition through Facial Expression

Authors

  • Radin Puteri Hazimah Radin Monawir Center of Graduate Studies, Universiti Teknikal Malaysia Melaka, 76100 Durian Tunggal, Melaka, Malaysia.
  • Noraidah Blar Center of Graduate Studies, Universiti Teknikal Malaysia Melaka, 76100 Durian Tunggal, Melaka, Malaysia.
  • Fairul Azni Jafar Department of Robotics and Automation, Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka, 76100 Durian Tunggal, Melaka, Malaysia.
  • Zulhasnizam Hasan Department of Electronics and Computer Engineering, Faculty of Engineering Technology, Universiti Teknikal Malaysia Melaka, 76100 Durian Tunggal, Melaka, Malaysia.

Abstract

This paper presents our findings from a pilot study on human reactions, observed through facial expressions and brainwave changes, when participants were induced by audio-visual stimuli while wearing the Emotiv EPOC headset. We hypothesized that the Emotiv EPOC is capable of detecting the participants' emotions and that its graphs would match the facial expressions displayed. In this study, four healthy men were selected and induced with eight videos: six predefined and two personalized. We aimed to identify the optimum setup for the main experiment, to validate the capability of the Emotiv EPOC, and to obtain a spontaneous facial expression database. The principal result of the pilot study shows that emotion is induced more effectively by personalized videos. It also shows that the brainwaves recorded by the Emotiv EPOC align with the facial expressions, especially in positive-emotion cases. Hence, it is possible to obtain a spontaneous facial expression database in the presence of the Emotiv EPOC.


Published

2016-05-01

How to Cite

Radin Monawir, R. P. H., Blar, N., Jafar, F. A., & Hasan, Z. (2016). Pilot Study of Emotion Recognition through Facial Expression. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 8(2), 17–21. Retrieved from https://jtec.utem.edu.my/jtec/article/view/939