Female Voice Recognition Using Artificial Neural Networks and MATLAB Voicebox Toolbox
Keywords:
Voice Feature, Mel Frequency Cepstral Coefficient, Artificial Neural Network, Voice Recognition, Speech Recognition

Abstract
Voice and speaker recognition performance is measured in terms of accuracy, speed, and robustness. These three key performance indicators depend primarily on the voice feature extraction method and the recognition algorithm used. This paper discusses various studies in speech recognition that have yielded high accuracy rates of 95% and above. MFCCs extracted with the MATLAB Voicebox toolbox were used as inputs to multilayer Artificial Neural Networks (ANN) for a female voice recognition algorithm. This study explored the recognition performance of the neural networks using a variable number of hidden neurons and layers, and determined the architecture that provides the optimum performance in terms of recognition rate. MATLAB simulation resulted in training and testing recognition rates of 100.00% when using a 3-hidden-layer neural network on speech samples from a single speaker, and a highest training recognition rate of 98.11% and testing recognition rate of 87.20% when using a 4-hidden-layer neural network on speech samples from several speakers. When tested with homonyms, the best recognition rate was 75.00% from a 3-hidden-layer neural network trained on a single speaker, and 81.91% from a 4-hidden-layer neural network trained on multiple speakers. The deviation in recognition rates was primarily attributed to the variations made in the number of input neurons, hidden layers, and neurons of the speech recognition neural network.
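The abstract's feature-extraction step (MFCCs fed to the neural network) follows the standard pipeline: frame the signal, window it, take the power spectrum, apply a mel-scaled triangular filterbank, then take the DCT of the log filterbank energies. The sketch below is a minimal NumPy illustration of that pipeline, not the Voicebox `melcepst` implementation used in the paper; the frame length, hop size, and filter counts are illustrative defaults, not values from the study.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters evenly spaced on the mel scale from 0 Hz to fs/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, fs=8000, frame_len=200, hop=80, n_fft=256,
         n_mels=20, n_ceps=12):
    # Frame the signal (25 ms frames, 10 ms hop at 8 kHz) and window it.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Mel filterbank energies, floored to avoid log(0).
    energies = np.maximum(power @ mel_filterbank(n_mels, n_fft, fs).T, 1e-10)
    # DCT-II of the log energies yields the cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return np.log(energies) @ dct.T   # shape: (n_frames, n_ceps)
```

Each row of the returned matrix is one frame's MFCC vector; in the paper's setup such vectors form the input layer of the multilayer ANN classifier.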
License
TRANSFER OF COPYRIGHT AGREEMENT
The manuscript is herewith submitted for publication in the Journal of Telecommunication, Electronic and Computer Engineering (JTEC). It has not been published before, and it is not under consideration for publication in any other journal. It contains no material that is scandalous, obscene, libelous or otherwise contrary to law. When the manuscript is accepted for publication, I, as the author, hereby agree to transfer to JTEC all rights, including those pertaining to electronic forms and transmissions, under existing copyright laws, except for the following, which the author(s) specifically retain(s):
- All proprietary right other than copyright, such as patent rights
- The right to make further copies of all or part of the published article for my use in classroom teaching
- The right to reuse all or part of this manuscript in a compilation of my own works or in a textbook of which I am the author; and
- The right to make copies of the published work for internal distribution within the institution that employs me
I agree that copies made under these circumstances will continue to carry the copyright notice that appears in the original published work. I agree to inform my co-authors, if any, of the above terms. I certify that I have obtained written permission for the use of text, tables, and/or illustrations from any copyrighted source(s), and I agree to supply such written permission(s) to JTEC upon request.