Palm Oil Fresh Fruit Bunch Ripeness Grading Recognition Using Convolutional Neural Network
Keywords:
AlexNet, Convolutional Neural Network, Machine Learning, Palm Oil FFB Ripeness Classification

Abstract
This research investigates the application of Convolutional Neural Networks (CNN) to palm oil Fresh Fruit Bunch (FFB) ripeness grading recognition. CNNs have become the state-of-the-art technique in computer vision, particularly in object recognition, where they achieve impressive accuracy. Although a CNN requires no hand-crafted feature extraction, it does require a large amount of training data. To overcome this limitation, transfer learning with a pre-trained CNN model provides a solution. This research therefore compares a CNN trained from scratch, a pre-trained CNN model, and a hand-crafted feature and classifier approach for palm oil FFB ripeness grading recognition. The hand-crafted features are the colour moments feature, the Fast Retina Keypoint (FREAK) binary feature, and the Histogram of Oriented Gradients (HOG) texture feature, each paired with a Support Vector Machine (SVM) classifier. Images of palm oil FFB at four different levels of ripeness were acquired, and the results indicate that with a small number of training samples, the pre-trained CNN model, AlexNet, outperforms both the CNN trained from scratch and the hand-crafted feature and classifier approach.
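The hand-crafted feature and classifier approach compared in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the colour moments feature means the first three moments (mean, standard deviation, skewness) per RGB channel, uses a linear-kernel SVM, and trains on synthetic colour patches standing in for the FFB images used in the study.

```python
import numpy as np
from sklearn.svm import SVC

def colour_moments(image):
    """First three colour moments (mean, std, skewness) per channel.

    image: H x W x 3 RGB array -> 9-dimensional feature vector.
    """
    feats = []
    for c in range(3):
        channel = image[:, :, c].astype(float).ravel()
        mean = channel.mean()
        std = channel.std()
        # Skewness here is the sign-preserving cube root of the mean
        # cubed deviation, so it has the same units as the pixel values.
        skew_raw = ((channel - mean) ** 3).mean()
        skew = np.sign(skew_raw) * np.abs(skew_raw) ** (1.0 / 3.0)
        feats.extend([mean, std, skew])
    return np.array(feats)

# Synthetic stand-in for FFB images: four "ripeness" classes, each with
# a distinct dominant colour, 20 noisy 32x32 patches per class.
rng = np.random.default_rng(0)
class_means = [(40, 120, 40), (120, 120, 40), (180, 90, 30), (120, 60, 120)]
X, y = [], []
for label, mean_rgb in enumerate(class_means):
    for _ in range(20):
        img = rng.normal(mean_rgb, 15, size=(32, 32, 3)).clip(0, 255)
        X.append(colour_moments(img))
        y.append(label)
X, y = np.array(X), np.array(y)

# Fit the SVM classifier on the colour-moment features.
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic patches
```

The same pipeline shape applies to the FREAK and HOG variants: only the `colour_moments` feature extractor changes, while the SVM classifier stage stays the same.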
Published in the Journal of Telecommunication, Electronic and Computer Engineering (JTEC).