Browsing by Author "Jayasundara, V"
Now showing 1 - 3 of 3
- item: Article, Full-text: Combined static and motion features for deep-networks-based activity recognition in videos (IEEE, 2019) Ramasinghe, S; Rajasegaran, J; Jayasundara, V; Ranasinghe, K; Rodrigo, R; Pasqual, AA
  Activity recognition in videos, whether in a deep-learning setting or otherwise, uses both static and pre-computed motion components. How to combine the two components while keeping the burden on the deep network low remains uninvestigated. Moreover, it is not clear how much each component contributes, or how to control that contribution. In this work, we use a combination of CNN-generated static features and motion features in the form of motion tubes. We propose three schemes for combining static and motion components: one based on a variance ratio, one on principal components, and one on Cholesky decomposition. The Cholesky-decomposition-based method allows the contributions to be controlled. The ratio given by variance analysis of static and motion features matches well with the experimentally optimal ratio used in the Cholesky-decomposition-based method. The resulting activity recognition system is better than or on par with the existing state of the art when tested on three popular datasets. The findings also enable us to characterize a dataset with respect to its richness in motion information.
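A minimal sketch of the Cholesky-based fusion idea described in the abstract above: two standardized feature vectors are mixed through the Cholesky factor of a 2x2 matrix whose off-diagonal entry sets the contribution ratio. The function name, normalization, and feature size are illustrative assumptions, not the paper's code.

```python
# Hedged sketch: Cholesky-based fusion of static and motion features.
# All names and the 512-D size are illustrative assumptions.
import numpy as np

def cholesky_fuse(static_feat, motion_feat, rho):
    """Fuse two equal-length feature vectors; rho in (0, 1) controls
    how much the static stream contributes relative to the motion stream."""
    # Standardize each stream so the mixing weights are comparable.
    s = (static_feat - static_feat.mean()) / (static_feat.std() + 1e-8)
    m = (motion_feat - motion_feat.mean()) / (motion_feat.std() + 1e-8)
    # Lower-triangular Cholesky factor of [[1, rho], [rho, 1]].
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    # Second row of L mixes the streams: rho * s + sqrt(1 - rho^2) * m.
    return L[1, 0] * s + L[1, 1] * m

# Example: fuse 512-D static (CNN) and motion (motion-tube) descriptors.
fused = cholesky_fuse(np.random.randn(512), np.random.randn(512), rho=0.4)
```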
- item: Article, Full-text: Device-free user authentication, activity classification and tracking using passive Wi-Fi sensing: a deep learning-based approach (IEEE, 2020) Jayasundara, V; Jayasekara, H; Samarasinghe, T; Hemachandra, KT
  Growing concerns over privacy invasion by video-camera-based monitoring systems have paved the way for non-invasive alternatives based on Wi-Fi signal sensing. This paper introduces a novel end-to-end deep learning framework that uses changes in orthogonal frequency division multiplexing (OFDM) sub-carrier amplitude information to simultaneously predict the identity, activity, and trajectory of a user, creating a user profile of similar utility to one made through a video-camera-based approach. The novelty of the proposed solution is that the system is fully autonomous and requires zero user intervention, unlike systems that require user-originated initialization or a user-held transmitting device to facilitate the prediction. Experimental results demonstrate over 95% accuracy for user identification and activity recognition, while the user localization results exhibit a ±12 cm error, a significant improvement over existing user tracking methods that use passive Wi-Fi signals.
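The abstract describes a single network with three prediction targets. Below is a minimal multi-head sketch of that idea, assuming an LSTM encoder over per-packet sub-carrier amplitudes; the layer sizes, class counts, and names are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch: one shared encoder over CSI amplitude sequences with
# three heads (identity, activity, location). All sizes are assumptions.
import torch
import torch.nn as nn

class CsiMultiTaskNet(nn.Module):
    def __init__(self, n_subcarriers=30, n_users=10, n_activities=6):
        super().__init__()
        # Shared temporal encoder over per-packet sub-carrier amplitudes.
        self.encoder = nn.LSTM(input_size=n_subcarriers, hidden_size=128,
                               num_layers=2, batch_first=True)
        self.id_head = nn.Linear(128, n_users)        # who is present
        self.act_head = nn.Linear(128, n_activities)  # what they are doing
        self.loc_head = nn.Linear(128, 2)             # (x, y) position

    def forward(self, csi):  # csi: (batch, time, n_subcarriers)
        _, (h, _) = self.encoder(csi)
        z = h[-1]  # final hidden state of the last LSTM layer
        return self.id_head(z), self.act_head(z), self.loc_head(z)

# Example: a batch of 4 sequences, 200 packets each, 30 sub-carriers.
ident, act, loc = CsiMultiTaskNet()(torch.randn(4, 200, 30))
```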
- item: Article, Full-text: PointCaps: Raw point cloud processing using capsule networks with Euclidean distance routing (Elsevier, 2022) Denipitiyage, D; Jayasundara, V; Rodrigo, R; Edussooriya, CUS
  Raw point cloud processing using capsule networks is widely adopted in classification, reconstruction, and segmentation due to its ability to preserve the spatial agreement of the input data. However, most existing capsule-based approaches are computationally heavy and fail to represent the entire point cloud as a single capsule. We address these limitations by proposing PointCaps, a novel convolutional capsule architecture with parameter sharing. Along with PointCaps, we propose a novel Euclidean distance routing algorithm and a class-independent latent representation. The latent representation captures physically interpretable geometric parameters of the point cloud, while dynamic Euclidean routing lets PointCaps represent the spatial (point-to-part) relationships of points well. PointCaps has significantly fewer parameters and requires significantly fewer FLOPs, while achieving better reconstruction with comparable classification and segmentation accuracy on raw point clouds compared to state-of-the-art capsule networks.
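A sketch of what a Euclidean-distance routing step could look like: the dot-product agreement of standard dynamic routing is replaced by negative Euclidean distance, so votes closer to the current output capsule receive larger coupling weights. Tensor shapes, names, and the iteration count are assumptions, not the published PointCaps implementation.

```python
# Hedged sketch: routing by agreement where agreement is the negative
# Euclidean distance between each vote and the current output capsule.
import torch

def euclidean_routing(votes, n_iters=3):
    """votes: (batch, n_in, n_out, dim) predictions from input capsules.
    Returns output capsules of shape (batch, n_out, dim)."""
    batch, n_in, n_out, _ = votes.shape
    logits = torch.zeros(batch, n_in, n_out, device=votes.device)
    out = None
    for _ in range(n_iters):
        c = logits.softmax(dim=2).unsqueeze(-1)  # coupling coefficients
        out = (c * votes).sum(dim=1)             # (batch, n_out, dim)
        # Update agreement: closer votes get larger coupling next round.
        dist = (votes - out.unsqueeze(1)).norm(dim=-1)
        logits = logits - dist
    return out

# Example: 1024 point capsules voting for 16 output capsules of dim 8.
caps = euclidean_routing(torch.randn(2, 1024, 16, 8))
```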