Accuracy Improvement of MFCC Based Speech Recognition by Preventing DFT Leakage Using Pitch Segmentation

Authors

  • Sopon Wiriyarattanakul, Department of Computer Engineering, Faculty of Engineering, Khon Kaen University, Khon Kaen 40002, Thailand.
  • Nawapak Eua-anant, Department of Computer Engineering, Faculty of Engineering, Khon Kaen University, Khon Kaen 40002, Thailand.

Keywords:

Short-time Energy Waveform (SEW), Pitch Segmentation, Spectral Leakage, Mel-Frequency Cepstral Coefficients (MFCC)

Abstract

Most MFCC-based speech recognition algorithms employ frame segmentation to divide a signal into fixed-size frames as the first step prior to MFCC feature extraction. Commonly used fixed frame sizes, around 20-40 ms, do not usually span a whole number of speech-signal periods. Consequently, when the Discrete Fourier Transform (DFT) is applied to these fixed-size intervals during MFCC feature extraction, spectral leakage arises, resulting in smeared spectra and reduced speech recognition performance. This paper proposes a pitch-based speech signal segmentation that reduces spectral leakage by using a new pitch detection technique based on the Short-time Energy Waveform (SEW) to produce segmented speech intervals containing complete periods. The proposed method uses local minima of the SEW as markers for pitch segmentation. After segmenting speech signals into pitch periods, MFCC feature vectors are extracted and subsequently used as input for speech recognition with artificial neural networks. Speech recognition experiments with artificial neural networks were conducted on Thai-language speech signals collected from 40 speakers. Empirical results indicate that speech recognition using signals segmented into pitch periods yields more accurate recognition results than using signals segmented into fixed-size frames.
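The pipeline described in the abstract can be summarized with a short, illustrative sketch. This is not the authors' implementation: the function names (short_time_energy, pitch_segments, mfcc_per_segment), the 16 kHz sampling rate, the window and hop sizes, the rectangular (boxcar) window, the minimum-segment threshold, and the use of librosa and scipy are all assumptions made only for the example. It shows the three steps the abstract names: computing the SEW, taking its local minima as pitch-segmentation markers, and extracting MFCCs per complete-period segment with a DFT length matched to each segment.

```python
# Illustrative sketch only (not the authors' code); parameter choices are assumptions.
import numpy as np
from scipy.signal import argrelmin
import librosa

SR = 16000                                   # assumed sampling rate (Hz)

def short_time_energy(x, win=80, hop=40):
    """Short-time Energy Waveform: sum of squared samples per sliding window
    (here a 5 ms window with a 2.5 ms hop at 16 kHz)."""
    frames = librosa.util.frame(x, frame_length=win, hop_length=hop)
    return (frames ** 2).sum(axis=0)

def pitch_segments(x, win=80, hop=40, order=3):
    """Split x at local minima of the SEW, so each segment spans complete
    pitch periods rather than an arbitrary fixed-size frame."""
    sew = short_time_energy(x, win, hop)
    marks = argrelmin(sew, order=order)[0] * hop        # frame index -> sample index
    bounds = np.concatenate(([0], marks, [len(x)]))
    return [x[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b - a > 32]

def mfcc_per_segment(x, sr=SR, n_mfcc=13):
    """One MFCC vector per pitch segment; the DFT length equals the segment
    length, so no partial period is truncated (the leakage-reduction idea)."""
    feats = []
    for seg in pitch_segments(x):
        n = len(seg)
        m = librosa.feature.mfcc(y=seg.astype(np.float32), sr=sr, n_mfcc=n_mfcc,
                                 n_fft=n, hop_length=n, win_length=n,
                                 window="boxcar",       # rectangular window (assumed)
                                 center=False, n_mels=20)
        feats.append(m[:, 0])                           # single frame per segment
    return np.stack(feats) if feats else np.empty((0, n_mfcc))
```

In this sketch the per-segment MFCC vectors would then feed the artificial neural network classifier; how the variable number of segments per utterance is normalized for the network input is not specified in the abstract, so any such handling would be a further assumption.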

Published

2018-02-15

How to Cite

Wiriyarattanakul, S., & Eua-anant, N. (2018). Accuracy Improvement of MFCC Based Speech Recognition by Preventing DFT Leakage Using Pitch Segmentation. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 10(1-8), 173–179. Retrieved from https://jtec.utem.edu.my/jtec/article/view/3756