Browsing by Author "Abeysinghe, C"
Now showing 1 - 4 of 4
- Augmented reality for mobile devices to show information of exhibits at a museum (Conference Extended Abstract, 2011). Abeysinghe, C; Jayasiri, V; Sandaruwan, K; Wijerathna, B; Pasqual, AA. Abstract: Augmented Reality (AR) is one of the emerging technologies used to enhance user experience in many applications. Dedicated devices for specific tasks are being replaced by smartphones, making them a unique platform for implementing all of these functions in one device. In this paper we present a real-time AR framework for mobile devices that takes into account key technical challenges such as limited processing power and battery life. The framework consists of four separate modules: marker detection, identification, camera pose calculation, and embedding of visual information. A 2D pattern called a marker is used to uniquely identify each object. The marker in the camera view is detected and tracked so that the information it encodes can be exploited. The four tracked corners of the marker are used to calculate the 6-DOF camera pose, which is further processed to place 3D graphics on the real scene with accurate rotation and translation. To demonstrate the capabilities of our AR framework we have developed an application for iPhones which highlights significant information about exhibits.
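The abstract above describes recovering a 6-DOF camera pose from the four tracked corners of a square marker. As a rough illustration of that step (not the authors' implementation), the sketch below uses OpenCV's solvePnP on a square fiducial; the marker size, corner pixel coordinates, and camera intrinsics are all assumed values.

```python
# Hypothetical sketch: 6-DOF camera pose from the four tracked corners of a
# square marker. Marker size, intrinsics, and corner pixels are assumptions.
import numpy as np
import cv2

MARKER_SIZE = 0.10  # marker edge length in metres (assumed)

# 3D corners of the marker in its own frame (z = 0 plane), in the order
# required by SOLVEPNP_IPPE_SQUARE.
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

# Corner pixel locations produced by a (hypothetical) marker tracker.
image_points = np.array([
    [312.0, 180.0],
    [410.0, 185.0],
    [405.0, 284.0],
    [308.0, 279.0],
], dtype=np.float64)

# Assumed pinhole intrinsics of the phone camera.
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume an undistorted image

# solvePnP returns the rotation (Rodrigues vector) and translation that map
# marker coordinates into the camera frame: the pose used to render 3D
# overlays with correct rotation and translation.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_IPPE_SQUARE)
if ok:
    rotation_matrix, _ = cv2.Rodrigues(rvec)
    print("Rotation:\n", rotation_matrix)
    print("Translation:", tvec.ravel())
```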
- Automated vehicle parking occupancy detection in real-time (Conference Full-text, IEEE, 2020-07). Padmasiri, H; Madurawe, R; Abeysinghe, C; Meedeniya, D; Weeraddana, C; Edussooriya, CUS; Abeysooriya, RP. Abstract: Parking occupancy detection systems identify available parking spaces and direct vehicles efficiently to unoccupied lots, reducing time and energy. This paper presents an approach to the design and development of an end-to-end automated vehicle parking occupancy detection system. The novelty of this study lies in the methodology followed for the object detection process, using the RetinaNet one-stage detector and a region-based convolutional neural network deep learning technique. The proposed software architecture consists of loosely coupled components that support scalability and reliability. The web-based and mobile-based client applications developed help users find parking spaces easily and efficiently. Existing solutions rely on dedicated sensors and manual segmentation of surveillance footage to detect the state of parking spaces; the proposed approach eliminates these limitations while maintaining reasonable accuracy.
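As a loose illustration of the object-detection stage mentioned in the abstract above, the sketch below runs a pretrained RetinaNet one-stage detector from torchvision on a single parking-lot frame and applies a simple, assumed IoU-based occupancy rule. The image file, slot coordinates, and thresholds are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: vehicle detection with a pretrained RetinaNet and a
# simple occupancy check, standing in for the detection stage of the system.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (retinanet_resnet50_fpn,
                                           RetinaNet_ResNet50_FPN_Weights)
from torchvision.ops import box_iou

weights = RetinaNet_ResNet50_FPN_Weights.DEFAULT
model = retinanet_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("parking_lot.jpg")   # hypothetical surveillance frame
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Keep confident detections of COCO class 3 ("car").
keep = (detections["scores"] > 0.5) & (detections["labels"] == 3)
car_boxes = detections["boxes"][keep]
print(f"{len(car_boxes)} vehicles detected")

# Assumed occupancy rule: a slot (x1, y1, x2, y2) counts as occupied if any
# detected car overlaps it with IoU above a threshold.
slots = torch.tensor([[100.0, 200.0, 180.0, 320.0]])  # hypothetical slot
occupied = (box_iou(slots, car_boxes) > 0.3).any(dim=1)
print("slot occupied:", occupied.tolist())
```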
- Hybrid approach and architecture to detect fake news on Twitter in real-time using neural networks (Conference Full-text, Faculty of Information Technology, University of Moratuwa, 2020-12). Thilakarathna, MP; Wijayasekara, VA; Gamage, Y; Peiris, KH; Abeysinghe, C; Rafaideen, I; Vekneswaran, P; Karunananda, AS; Talagala, PD. Abstract: Fake news has been a key issue since the dawn of social media. We are now at a stage where it is nearly impossible to differentiate between real and fake news. This directly and indirectly affects people's decision-making and calls into question the credibility of news shared on social media platforms. Twitter is one of the leading social networks in the world by active users, and it has seen an exponential spread of fake news in the recent past. In this paper, we discuss the implementation of a browser extension that identifies fake news on Twitter using deep learning models, with a focus on real-world applicability, architectural stability, and scalability. Experimental results show that the proposed browser extension achieves an accuracy of 86% in fake news detection. To the best of our knowledge, our work is the first of its kind to detect fake news on Twitter in real time using a hybrid approach and to evaluate it with real users.
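The extension described above relies on deep learning models to classify tweets. As a toy stand-in (not the authors' hybrid approach), the sketch below trains a tiny bag-of-words neural classifier in PyTorch on two made-up tweets; the vocabulary, labels, and network shape are all illustrative assumptions.

```python
# Hypothetical sketch: a minimal binary "real vs. fake" tweet classifier.
# The example tweets, labels, and model are toy assumptions.
import torch
import torch.nn as nn

tweets = ["breaking scientists confirm water is wet",
          "official report released by the ministry today"]
labels = torch.tensor([1.0, 0.0])  # 1 = fake, 0 = real (toy labels)

# Toy bag-of-words featurisation over the two example tweets.
vocab = sorted({w for t in tweets for w in t.split()})
word_to_idx = {w: i for i, w in enumerate(vocab)}

def featurise(text: str) -> torch.Tensor:
    vec = torch.zeros(len(vocab))
    for w in text.split():
        if w in word_to_idx:
            vec[word_to_idx[w]] += 1.0
    return vec

X = torch.stack([featurise(t) for t in tweets])

# A small feed-forward classifier; a real system would use a far richer
# model, feature set, and training corpus.
model = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(), nn.Linear(16, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(X).squeeze(1), labels)
    loss.backward()
    optimiser.step()

prob_fake = torch.sigmoid(model(featurise("breaking report about water").unsqueeze(0)))
print("probability of being fake:", prob_fake.item())
```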
- Video colorization dataset and benchmark (Conference Abstract). Abeysinghe, C; Wijesinghe, T; Wijesinghe, C; Jayathilake, L; Thayasivam, U. Abstract: Video colorization is the process of assigning realistic, plausible colors to a grayscale video. Compared to its peer, image colorization, video colorization is a relatively unexplored area in computer vision. Most of the models available for video colorization are extensions of image colorization models and hence cannot address some issues unique to the video domain. In this paper, we evaluate the applicability of image colorization techniques for video colorization, identifying problems inherent to videos and the attributes affecting them. We develop a dataset and benchmark to measure the effect of such attributes on video colorization quality and demonstrate how our benchmark aligns with human evaluations.
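One attribute a video colorization benchmark might measure is temporal color stability, since models that colorize frame by frame tend to flicker. The sketch below computes a simple chroma-flicker score (mean frame-to-frame change of the Lab chroma channels); this is an illustrative metric, not necessarily one used in the paper's benchmark.

```python
# Hypothetical sketch: a simple temporal-consistency measure for a colorized
# clip, defined as the mean absolute change of the Lab chroma channels
# between consecutive frames. Lower values indicate more stable colors.
import numpy as np
import cv2

def chroma_flicker(frames_bgr):
    """Average per-pixel chroma (a, b in Lab) change between consecutive frames."""
    diffs = []
    prev_ab = None
    for frame in frames_bgr:
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB).astype(np.float32)
        ab = lab[..., 1:]              # drop lightness, keep chroma
        if prev_ab is not None:
            diffs.append(np.abs(ab - prev_ab).mean())
        prev_ab = ab
    return float(np.mean(diffs)) if diffs else 0.0

# Toy usage on random "frames"; a real evaluation would read a colorized clip.
frames = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(5)]
print("chroma flicker:", chroma_flicker(frames))
```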