A Comparative Study of Deep Learning Parameters for Arcus Senilis Classification
Keywords: Arcus Senilis, Convolutional Neural Network, Deep Learning, Hyperparameters, Non-invasive
Deep learning techniques have recently yielded strong results across artificial intelligence tasks, especially digital image processing and advanced machine vision. Their popularity has had a major impact on solving complex problems in many fields, particularly medical science, owing to applications in medical imaging, disease diagnosis, and much more. However, the successful application of deep learning depends on setting its parameters appropriately. Therefore, this paper presents a comparative analysis of different base learning rate and batch size configurations for arcus senilis (AS) classification using deep learning techniques. In this analytical study, a dataset of 402 eye images, comprising 158 normal and 244 abnormal eye images, was employed. ResNet-50, VGG-19, and a pre-trained convolutional neural network (CNN) model were trained and validated with 10-fold cross-validation on the proposed dataset. Furthermore, the base learning rate and batch size were adjusted to determine the optimal convergence of each model by observing validation accuracy and error. Experimental results show that the best combined system achieved an overall accuracy of 99.78% with a base learning rate of 0.0001 and a batch size of 20 on the pre-trained CNN model's validation set. Moreover, the CNN produced the best F1-score and standard deviation, at 99.77% and 0.464 respectively. It can therefore be concluded that the CNN requires considerably fewer parameters and reasonable computing time to achieve state-of-the-art performance. This study also shows that the CNN tends to improve consistently in accuracy as the number of epochs grows, with no signs of overfitting or performance degradation.
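The tuning procedure described in the abstract (sweeping base learning rate and batch size, then keeping the configuration with the best validation accuracy) can be sketched as follows. This is a minimal, illustrative sketch only: `cross_validate` is a hypothetical stand-in for a real 10-fold cross-validation training run, and its toy scoring formula, the candidate value lists, and the function names are all assumptions, not the authors' implementation.

```python
from itertools import product

def cross_validate(model_name, base_lr, batch_size):
    """Hypothetical stand-in for one 10-fold cross-validation run.

    In the real study this would train the given model (e.g. ResNet-50
    or VGG-19) on the 402-image arcus senilis dataset and return the
    mean validation accuracy. Here a toy formula simply favours the
    combination reported in the abstract (lr = 0.0001, batch size = 20)
    so that the sketch is runnable.
    """
    return 99.78 - abs(base_lr - 1e-4) * 1e4 - abs(batch_size - 20) * 0.01

def grid_search(model_name, learning_rates, batch_sizes):
    """Return (lr, batch_size, accuracy) for the best-scoring configuration."""
    best = None
    for lr, bs in product(learning_rates, batch_sizes):
        acc = cross_validate(model_name, lr, bs)
        if best is None or acc > best[2]:
            best = (lr, bs, acc)
    return best

lr, bs, acc = grid_search("CNN", [1e-3, 1e-4, 1e-5], [10, 20, 32])
print(f"best: lr={lr}, batch_size={bs}, accuracy={acc:.2f}%")
```

With the toy scorer above, the search selects lr = 0.0001 and batch size = 20, mirroring the combination the abstract reports as optimal.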