Pilot Study of Emotion Recognition through Facial Expression
Abstract
This paper presents our findings from a pilot study of human reactions, observed through facial expressions and brainwave changes, when participants were induced with audio-visual stimuli while wearing the Emotiv Epoc headset. We hypothesize that the Emotiv Epoc is capable of detecting the participants' emotions and that its output graphs will match the displayed facial expressions. In this study, four healthy men were induced with eight videos each: six videos were predefined, whereas the other two were personalized. We aim to identify the optimum setup for the main experiment, to validate the capability of the Emotiv Epoc, and to obtain a spontaneous facial expression database. The principal result of the pilot study is that emotion is induced more effectively by personalized videos. It also shows that the brainwaves produced by the Emotiv Epoc align with the facial expressions, especially for positive emotions. Hence, it is possible to obtain a spontaneous facial expression database in the presence of the Emotiv Epoc.
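As a rough illustration of what checking the alignment between the Emotiv Epoc output and facial-expression labels could look like, the sketch below segments two frontal EEG channels per stimulus clip and compares a frontal alpha-asymmetry valence proxy against per-clip facial-expression labels. This is a minimal sketch under stated assumptions, not the authors' analysis pipeline or the Emotiv SDK: the channel names (AF3/AF4), the 128 Hz sampling rate, the clip length, the label list, and the asymmetry proxy are illustrative choices, and synthetic noise stands in for the actual recordings.

```python
# Hypothetical sketch: comparing an EEG-derived valence proxy with facial labels.
# Synthetic data replaces the Emotiv Epoc stream; all parameters are assumptions.
import numpy as np
from scipy.signal import welch

FS = 128          # assumed Emotiv Epoc sampling rate (Hz)
CLIP_SEC = 60     # assumed length of each stimulus clip (s)

def band_power(x, fs, lo, hi):
    """Average power of signal x in the [lo, hi] Hz band via the Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

def valence_proxy(left, right, fs=FS):
    """Frontal alpha asymmetry: larger values taken as more positive affect (assumption)."""
    alpha_left = band_power(left, fs, 8, 13)
    alpha_right = band_power(right, fs, 8, 13)
    return np.log(alpha_right) - np.log(alpha_left)

# Hypothetical per-clip emotion labels obtained from the facial-expression recordings.
facial_labels = ["positive", "negative", "positive", "negative",
                 "positive", "negative", "positive", "positive"]

rng = np.random.default_rng(0)
n_samples = FS * CLIP_SEC

for clip, label in enumerate(facial_labels, start=1):
    af3 = rng.normal(size=n_samples)   # stand-in for the left frontal channel
    af4 = rng.normal(size=n_samples)   # stand-in for the right frontal channel
    v = valence_proxy(af3, af4)
    print(f"clip {clip}: EEG valence proxy = {v:+.3f}, facial label = {label}")
```

In a real session, the two synthetic arrays would be replaced by the recorded channel segments for each clip, and agreement between the sign of the proxy and the positive/negative facial label could then be tallied per participant.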