Semantic Object Detection for Human Activity Monitoring System
Keywords:
Semantic Object Detection, Activity Recognition, Image Understanding

Abstract
Semantic object detection is significant for activity monitoring systems: abnormalities occurring in a monitored area can be detected by applying semantic object detection to identify any displaced objects in that area. Many approaches towards better semantic object detection are being pursued, but they are either resource-intensive, for example relying on costly sensors, or restricted to particular scenarios and backgrounds. We assume that scale structures and velocity can be estimated to define different states of activity. This project proposes the Histogram of Oriented Gradients (HOG) technique to extract feature points of semantic objects in the monitored area, while the Histogram of Oriented Optical Flow (HOOF) technique annotates the current state of semantic objects involved in human-object interaction. Both passive and active objects are extracted using HOG, and the HOOF descriptor indicates the time-series status of the spatial position and orientation of the semantic object. A Support Vector Machine uses these predictors to train and test on the input video and classifies the processed dataset into its respective activity class. We evaluate our approach on recognising human actions in several scenarios and achieve 89% accuracy with an 11.3% error rate.
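The paper itself does not include an implementation. As a rough illustration of the pipeline the abstract describes, the sketch below computes a HOG appearance descriptor and a HOOF motion descriptor for each video clip and feeds their concatenation to an SVM classifier. It assumes OpenCV and scikit-learn as the toolchain; the window size, histogram bin count, and the train_clips/labels data layout are illustrative assumptions rather than values from the paper.

# Minimal sketch of the HOG + HOOF + SVM pipeline described in the abstract.
# cv2.HOGDescriptor, cv2.calcOpticalFlowFarneback, and sklearn.svm.SVC are
# standard OpenCV/scikit-learn APIs; all parameter values are assumed.
import cv2
import numpy as np
from sklearn.svm import SVC

def hog_features(frame, win_size=(64, 128)):
    """Appearance descriptor: HOG over a resized grayscale frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, win_size)
    hog = cv2.HOGDescriptor()  # default 64x128 window, 9 orientation bins
    return hog.compute(gray).ravel()

def hoof_features(prev_frame, frame, bins=30):
    """Motion descriptor: histogram of flow angles weighted by flow magnitude."""
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)  # normalise by total flow magnitude

def clip_descriptor(frames):
    """Concatenate mean HOG (appearance) and mean HOOF (motion) over a clip."""
    hogs = [hog_features(f) for f in frames]
    hoofs = [hoof_features(a, b) for a, b in zip(frames, frames[1:])]
    return np.concatenate([np.mean(hogs, axis=0), np.mean(hoofs, axis=0)])

# Assumed data layout: train_clips is a list of frame lists, labels is the
# activity class per clip.
# X = np.stack([clip_descriptor(c) for c in train_clips])
# clf = SVC(kernel="rbf").fit(X, labels)
# predicted = clf.predict(np.stack([clip_descriptor(c) for c in test_clips]))

Normalising the HOOF histogram by its total flow magnitude, as done above, keeps the motion descriptor comparable across clips with different amounts of motion, which fits the abstract's assumption that scale structures and velocity can be estimated across different states of activity.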