A Comparative Study of Features Extracted in the Classification of Human Skin Burn Depth
Keywords: Skin Burn, Classification, Feature Extraction, Image Mining Approach
Abstract—The first burn treatment provided to a patient is usually based on an initial evaluation of the skin burn injury, in which the burn depth is determined. The objective of this paper is to conduct a comparative study of the different sets of features extracted and used in the classification of different burn depths using an image mining approach. Seven sets of global features and five local feature descriptors were studied on a skin burn dataset comprising skin burn images categorized into three burn classes by medical experts. The performance of the studied global and local features was evaluated using the SMO, JRip, and J48 classifiers with 10-fold cross-validation. The empirical results showed that the best set of features for classifying most of the burn depths consisted of the mean of lightness, mean of hue, standard deviation of hue, standard deviation of the A* component, standard deviation of the B* component, and skewness of lightness, with an average accuracy of 77.0%, whereas the best local feature descriptor for skin burn images was SIFT, with an average accuracy of 74.7%. It can be concluded that a combination of global and local features provides sufficient information for the classification of skin burn depths.
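The best-performing global feature set named in the abstract is a six-element vector of colour statistics. The sketch below shows one plausible way to assemble it, assuming the lightness (L), hue (H), and CIELAB A*/B* channels have already been extracted from an image (e.g. via a colour-space conversion, which is omitted here); the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def skewness(x):
    """Fisher-Pearson skewness of a flattened channel."""
    x = np.asarray(x, dtype=np.float64).ravel()
    m, s = x.mean(), x.std()
    if s == 0:
        return 0.0
    return float(np.mean(((x - m) / s) ** 3))

def burn_feature_vector(L, H, A, B):
    """Six global colour statistics corresponding to the best set
    reported in the abstract: mean(L), mean(H), std(H),
    std(A*), std(B*), and skewness(L)."""
    return np.array([
        np.mean(L),
        np.mean(H),
        np.std(H),
        np.std(A),
        np.std(B),
        skewness(L),
    ])
```

Each image in the dataset would yield one such vector, which is then passed to a classifier (SMO, JRip, or J48 in the study) for burn-depth prediction.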