Performance of the Vocal Source Related Features from the Linear Prediction Residual Signal in Speech Emotion Recognition
Keywords: Linear Prediction Analysis, Speech Emotion Recognition, Vocal Source Features, Vocal Tract Features
Abstract: Researchers in Speech Emotion Recognition have proposed various useful features together with analyses of their performance across emotions. However, most studies rely on acoustic features that characterize the vocal tract response. The usefulness of vocal source related features has not been extensively explored, even though they are expected to convey emotion-related information. In this research, we study the significance of vocal source related features in Speech Emotion Recognition and compare the performance of vocal source related features against vocal tract related features in emotion identification. The vocal source related features are extracted from the Linear Prediction residual signal. The study shows that vocal source related features contain emotion-discriminant information, and that combining them with vocal tract related features improves the emotion recognition rate.
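To make the extraction step concrete: the Linear Prediction residual is obtained by fitting an all-pole (LP) model to a speech frame and inverse-filtering the frame with that model, so the residual approximates the vocal source (excitation) signal. The following is a minimal illustrative sketch of this idea in Python using only NumPy; it solves the autocorrelation normal equations directly for simplicity (a practical implementation would use the Levinson-Durbin recursion and frame-wise windowing), and the synthetic two-tone "frame" is purely for demonstration, not the paper's data or method.

```python
import numpy as np

def lp_coefficients(frame, order=10):
    """Estimate LP coefficients a[k] such that s[n] ~ sum_k a[k]*s[n-1-k],
    by solving the autocorrelation normal equations R a = r."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def lp_residual(frame, order=10):
    """Inverse-filter the frame with its own LP model; the prediction
    error (residual) approximates the excitation source signal."""
    a = lp_coefficients(frame, order)
    pred = np.zeros_like(frame)
    for n in range(order, len(frame)):
        # Predict s[n] from the previous `order` samples (most recent first)
        pred[n] = np.dot(a, frame[n - order:n][::-1])
    return frame - pred

# Toy usage on a synthetic voiced-like frame (150 Hz + 450 Hz partials,
# plus a little noise to keep the normal equations well conditioned)
fs = 16000
t = np.arange(400) / fs
rng = np.random.default_rng(0)
frame = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 450 * t)
frame = frame + 0.01 * rng.standard_normal(len(frame))
res = lp_residual(frame, order=10)
# When the LP model fits the spectral envelope, residual energy is small
print(np.sum(res**2) < 0.1 * np.sum(frame**2))
```

Source-related features (e.g. residual energy, statistics of the residual, or epoch-based measures) would then be computed from `res`, while vocal tract related features are derived from the LP coefficients or the smoothed spectrum.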