Oil Palm Fruit Image Ripeness Classification with Computer Vision using Deep Learning and Visual Attention
Keywords:
Computer Vision, Convolutional Neural Network, Oil Palm Fruit Classification, Visual Attention

Abstract
Oil palm is one of the leading agricultural industries, especially in the Southeast Asian region. However, computer-vision-based classification of oil palm fruit ripeness has not yet produced satisfactory results, so most ripeness sorting is still performed manually by human labor. The objective of this research is to develop a model with a residual-based attention mechanism that can recognize small, detailed differences between images and thereby classify oil palm fruit ripeness more accurately. The dataset consists of 400 images spanning seven levels of ripeness. Because the dataset is small, Ten Crop preprocessing is used to augment the data. The experiments showed that the proposed ResAtt DenseNet model, which uses residual visual attention, improved the F1 score by 1.1% over the best F1 score achieved by the other models in this study.
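The abstract names Ten Crop preprocessing as the augmentation step. Below is a minimal sketch of how this is typically done with torchvision's `transforms.TenCrop`; the resize and crop sizes (256/224) are assumptions, as the abstract does not state the paper's exact pipeline:

```python
import torch
from torchvision import transforms

# TenCrop returns the four corner crops and the center crop of an image,
# plus the horizontal flip of each, yielding 10 views per source image.
# The 256/224 sizes are assumed; the abstract does not specify them.
ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),
    # Stack the 10 PIL crops into a single (10, C, H, W) tensor.
    transforms.Lambda(lambda crops: torch.stack(
        [transforms.ToTensor()(crop) for crop in crops])),
])
```

With this transform, a batch has shape (N, 10, C, H, W); a common pattern is to flatten it to (N*10, C, H, W) before the forward pass and average the resulting logits over the 10 crops of each image.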
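The abstract does not detail the ResAtt DenseNet architecture itself. The following is a hypothetical sketch of one common way to attach a residual visual-attention gate to a DenseNet backbone; the 1x1-convolution sigmoid mask, the gate's placement before global pooling, and the DenseNet-121 choice are all assumptions, not the paper's confirmed design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet121


class ResidualAttention(nn.Module):
    """Hypothetical residual attention gate: x + x * sigmoid(mask(x)).
    This follows the common residual-attention formulation; the paper's
    actual block may differ."""

    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # The residual connection preserves the original features so the
        # mask only has to learn where to amplify fine-grained detail.
        return x + x * self.mask(x)


class ResAttDenseNet(nn.Module):
    """Sketch: DenseNet-121 backbone with a residual attention gate before
    global pooling; 7 output classes per the abstract's ripeness levels."""

    def __init__(self, num_classes=7):
        super().__init__()
        backbone = densenet121(weights="IMAGENET1K_V1")  # ImageNet pretraining assumed
        self.features = backbone.features      # (N, 1024, 7, 7) for 224x224 input
        self.attention = ResidualAttention(1024)
        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, x):
        f = self.attention(self.features(x))
        f = F.relu(f)
        f = F.adaptive_avg_pool2d(f, 1).flatten(1)
        return self.classifier(f)
```

Because the gate is additive, initializing the mask near zero leaves the pretrained DenseNet features almost unchanged at the start of fine-tuning, which is one plausible reason a residual formulation trains stably on a small dataset.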