NU-ResNet: Deep Residual Networks for Thai Food Image Recognition
Keywords:
Deep Learning, Food Recognition, Convolutional Neural Network, Residual Networks, Smartphone, Thai Food

Abstract
To improve the recognition accuracy of a convolutional neural network, the number of modules inside the network is typically increased so that the whole network becomes deeper. However, doing so does not always guarantee improved accuracy. In addition, adding more modules certainly increases the required parameter size and processing time, which is a significant drawback if the network is deployed on a smartphone, where computational resources are limited. In this paper, another technique, called identity mapping and taken from residual networks, is adopted and added to the network. This technique is applied to the Deep NU-InNet at depths of 4, 8, and 12 in order to increase the recognition accuracy while the depth is kept constant. Testing the proposed network, NU-ResNet, on the THFOOD-50 dataset, which contains various images of 50 famous Thai dishes, shows an improvement in recognition accuracy. With a depth of 4, NU-ResNet achieves a Top-1 accuracy of 83.07% and a Top-5 accuracy of 97.04%. The parameter size of the network is only 1.48×10⁶, which is small enough for use in a smartphone application. Moreover, the average processing time per image is 44.60 ms, which is practical for an image recognition application. These results show the promising performance of the proposed network for a Thai food image recognition application on a smartphone.
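The identity mapping mentioned above can be sketched in a few lines: a residual block computes y = F(x) + x, where the shortcut adds the block's input back unchanged, so extra depth cannot discard information the input already carried. The sketch below is a minimal illustration under simplifying assumptions (a toy elementwise "conv" stage stands in for the paper's actual NU-InNet modules; all names are hypothetical, not from the paper):

```python
def conv_stage(x, weight):
    """Stand-in for a conv + ReLU stage: elementwise ReLU(w * x).
    In the real network this would be one of the NU-InNet modules."""
    return [max(0.0, w * v) for w, v in zip(weight, x)]

def residual_block(x, weight):
    """y = F(x) + x: the identity shortcut adds the input back unchanged,
    so the block only has to learn the residual F(x) = y - x."""
    fx = conv_stage(x, weight)
    return [f + v for f, v in zip(fx, x)]

x = [1.0, -2.0, 3.0]
w = [0.5, 0.5, 0.5]
print(residual_block(x, w))  # prints [1.5, -2.0, 4.5]
```

Note that the second entry survives as -2.0 even though ReLU zeroed its transformed value; this pass-through behaviour is what lets residual networks stay trainable as depth grows.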
License
TRANSFER OF COPYRIGHT AGREEMENT
The manuscript is herewith submitted for publication in the Journal of Telecommunication, Electronic and Computer Engineering (JTEC). It has not been published before, and it is not under consideration for publication in any other journals. It contains no material that is scandalous, obscene, libelous or otherwise contrary to law. When the manuscript is accepted for publication, I, as the author, hereby agree to transfer to JTEC, all rights including those pertaining to electronic forms and transmissions, under existing copyright laws, except for the following, which the author(s) specifically retain(s):
- All proprietary rights other than copyright, such as patent rights
- The right to make further copies of all or part of the published article for my use in classroom teaching
- The right to reuse all or part of this manuscript in a compilation of my own works or in a textbook of which I am the author; and
- The right to make copies of the published work for internal distribution within the institution that employs me
I agree that copies made under these circumstances will continue to carry the copyright notice that appears in the original published work. I agree to inform my co-authors, if any, of the above terms. I certify that I have obtained written permission for the use of text, tables, and/or illustrations from any copyrighted source(s), and I agree to supply such written permission(s) to JTEC upon request.