Combined static and motion features for deep-networks-based activity recognition in videos

dc.contributor.author: Ramasinghe, S
dc.contributor.author: Rajasegaran, J
dc.contributor.author: Jayasundara, V
dc.contributor.author: Ranasinghe, K
dc.contributor.author: Rodrigo, R
dc.contributor.author: Pasqual, AA
dc.date.accessioned: 2023-04-20T08:51:56Z
dc.date.available: 2023-04-20T08:51:56Z
dc.date.issued: 2019
dc.description.abstract: Activity recognition in videos, in a deep-learning setting or otherwise, uses both static and pre-computed motion components. How best to combine the two components while keeping the burden on the deep network low remains uninvestigated. Moreover, it is not clear how much each component contributes, or how to control that contribution. In this work, we use a combination of CNN-generated static features and motion features in the form of motion tubes. We propose three schemas for combining static and motion components: based on a variance ratio, principal components, and Cholesky decomposition. The Cholesky-decomposition-based method allows the contributions to be controlled. The ratio given by variance analysis of static and motion features matches well with the experimentally optimal ratio used in the Cholesky-decomposition-based method. The resulting activity recognition system is better than or on par with the existing state of the art when tested on three popular datasets. The findings also enable us to characterize a dataset with respect to its richness in motion information.
dc.identifier.citation: Ramasinghe, S., Rajasegaran, J., Jayasundara, V., Ranasinghe, K., Rodrigo, R., & Pasqual, A. A. (2019). Combined static and motion features for deep-networks-based activity recognition in videos. IEEE Transactions on Circuits and Systems for Video Technology, 29(9), 2693–2707. https://doi.org/10.1109/TCSVT.2017.2760858
dc.identifier.database: IEEE Xplore
dc.identifier.doi: 10.1109/TCSVT.2017.2760858
dc.identifier.issn: 1051-8215
dc.identifier.issue: 9
dc.identifier.journal: IEEE Transactions on Circuits and Systems for Video Technology
dc.identifier.pgnos: 2693–2707
dc.identifier.uri: http://dl.lib.uom.lk/handle/123/20900
dc.identifier.volume: 29
dc.identifier.year: 2019
dc.language.iso: en
dc.publisher: IEEE
dc.subject: Activity recognition
dc.subject: Fusing features
dc.subject: Convolutional Neural Networks (CNN)
dc.subject: Recurrent Neural Networks (RNN)
dc.subject: Long Short-Term Memory (LSTM)
dc.title: Combined static and motion features for deep-networks-based activity recognition in videos
dc.type: Article-Full-text
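
The abstract describes a Cholesky-decomposition-based scheme for fusing static and motion features with a controllable contribution ratio. A minimal sketch of the general idea follows; the function name `cholesky_fuse` and the mixing parameter `rho` are illustrative assumptions based on the classic Cholesky trick for inducing a chosen correlation between two variables, not the paper's exact formulation.

```python
import numpy as np

def cholesky_fuse(static_feat, motion_feat, rho):
    """Fuse two feature vectors with a controllable mixing weight rho in [0, 1].

    Illustrative sketch only: the fused vector is
        rho * static + sqrt(1 - rho^2) * motion,
    which is the standard Cholesky-based construction for combining two
    components with a tunable contribution. The paper's scheme may differ.
    """
    static_feat = np.asarray(static_feat, dtype=float)
    motion_feat = np.asarray(motion_feat, dtype=float)
    return rho * static_feat + np.sqrt(1.0 - rho ** 2) * motion_feat

# rho = 1 keeps only the static component; rho = 0 keeps only motion.
fused = cholesky_fuse([1.0, 2.0], [3.0, 4.0], rho=0.5)
```

With this parameterization, sweeping `rho` traces out the full range between purely static and purely motion features, which is the kind of controllable contribution the abstract attributes to the Cholesky-based method.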
