Key Frame Generation to Generate Activity Strip Based on Similarity Calculation
Keywords: Similarity, Key Frame, Activity Strip, Activity Generation
Abstract: Video data are managed for several purposes, such as making the information they contain more meaningful. This research manages video by detecting the activity it contains. Activity strip generation proceeds in three stages: the data source stage (preparation of the frames), the processing stage (analysis of the activity), and the final stage (collection of the key frames). An activity strip is generated by calculating the difference between the pixel values of two frames to detect their similarity. In this research, the SAD (Sum of Absolute Differences) method is used to calculate the difference value between frames. Similar frames are grouped into the same cluster, and each cluster is represented by one frame (or several frames) that serves as a key frame. The key frames represent the activity strip, and the collection of activity strips is arranged sequentially and continuously for activity generation.
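The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: frames are assumed to be grayscale and given as lists of pixel rows, and the similarity threshold is a hypothetical value chosen for the toy example.

```python
def sad(frame_a, frame_b):
    # Sum of Absolute Differences between two equally sized grayscale
    # frames, each a list of rows of pixel intensities (0-255).
    return sum(abs(a - b)
               for row_a, row_b in zip(frame_a, frame_b)
               for a, b in zip(row_a, row_b))

def select_key_frames(frames, threshold):
    # Group consecutive similar frames into one cluster; the first frame
    # of each cluster is kept as that cluster's key frame.
    key_frames = [frames[0]]
    for frame in frames[1:]:
        if sad(key_frames[-1], frame) > threshold:
            key_frames.append(frame)  # new cluster begins
    return key_frames

# Toy 2x2 "video": two identical dark frames, then a bright frame.
frames = [[[0, 0], [0, 0]],
          [[0, 0], [0, 0]],
          [[255, 255], [255, 255]]]
keys = select_key_frames(frames, threshold=100)
print(len(keys))  # 2 key frames: one per distinct cluster
```

The sequence of key frames returned here plays the role of the activity strip: each entry stands in for one cluster of similar frames.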