Browsing by Author "Samarawickrama, JG"
Now showing 1 - 6 of 6
- item: Conference-Abstract
  Title: GPU based non-overlapping multi-camera vehicle tracking
  Authors: Gamage, TD; Samarawickrama, JG; Pasqual, AA
  Abstract: Vehicle tracking and surveillance is an area that receives considerable attention in the context of security and safety. The detection and tracking of moving vehicles through multiple cameras is considered as a method of vehicle surveillance. This work addresses the problem of detecting and matching vehicles across multiple cameras. The power of GPUs is used to increase the number of video streams that can be processed on a single computer. In the detection process the Gabor filter is used as a directional filter, and SURF is used by the matcher to uniquely represent the vehicle.
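The pipeline outlined in the abstract above (directional Gabor filtering for detection, local feature descriptors for matching a vehicle across cameras) can be illustrated with a small CPU-only OpenCV example. This is a hedged sketch, not the authors' GPU implementation: the image file names are placeholders, the Gabor parameters are arbitrary, and ORB descriptors stand in for SURF, which requires a non-free opencv-contrib build.

```python
# Hypothetical sketch: directional (Gabor) filtering followed by feature
# matching between two camera views, loosely mirroring the pipeline the
# abstract describes. The original work runs on the GPU; this is CPU OpenCV.
import cv2
import numpy as np

def gabor_response(gray, theta):
    """Apply a single directional Gabor filter at orientation `theta`."""
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    return cv2.filter2D(gray, cv2.CV_32F, kernel)

def match_vehicles(img_a, img_b):
    """Match a vehicle crop seen by camera A against camera B.

    The paper uses SURF descriptors; ORB is substituted here because SURF
    needs an opencv-contrib build with non-free modules enabled.
    """
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

if __name__ == "__main__":
    # cam_a.png / cam_b.png are placeholder file names for two camera crops.
    a = cv2.imread("cam_a.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("cam_b.png", cv2.IMREAD_GRAYSCALE)
    if a is None or b is None:
        raise SystemExit("provide cam_a.png and cam_b.png")
    edges = gabor_response(a, theta=np.pi / 2)   # vertical-structure response
    print("good matches:", len(match_vehicles(a, b)[:30]))
```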
- item: Conference-Abstract
  Title: Image Filtering with MapReduce in Pseudo-Distribution Mode
  Date: 2015-08-14
  Authors: Gamage, TD; Samarawickrama, JG; Rodrigo, R; Pasqual, AA
  Abstract: The massive volume of video and image data compels it to be stored in a distributed file system. To process the data stored in the distributed file system, Google proposed a programming model named MapReduce. Existing methods of processing images held in such a distributed file system require the whole image, or a substantial portion of it, to be streamed every time a filter is applied. In this work an image filtering technique using the MapReduce programming model is proposed, which requires the image to be streamed only once. The proposed technique extends to a cascade of image filters with the constraint of a fixed kernel size. To verify the proposed technique for a single filter, a median filter is applied to an image with salt-and-pepper noise. In addition, a corner detection algorithm is implemented with the use of a filter cascade. Comparison of the noise-filtering and corner-detection results with the corresponding CPU version shows the accuracy of the method.
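As a hedged illustration of the single-pass idea described above, the sketch below simulates the map and reduce steps of a median filter in plain Python: the mapper emits each input pixel exactly once per neighbouring output coordinate, and the reducer takes the median of the values gathered for each coordinate. The fixed kernel size, the tiny test image, and the in-memory dictionary standing in for the shuffle phase are illustrative assumptions, not the paper's actual distributed setup.

```python
# Hedged, self-contained simulation of the map/reduce idea: each pixel is
# emitted once by the mapper, keyed by every output coordinate whose k x k
# neighbourhood it falls into; the reducer computes the median of the values
# collected per coordinate. A real deployment would run equivalent map and
# reduce functions on a cluster; plain dictionaries model the shuffle here.
from collections import defaultdict
from statistics import median

K = 3  # fixed kernel size, matching the paper's fixed-kernel constraint

def map_pixel(r, c, value, height, width):
    """Emit (output_coordinate, value) pairs for one input pixel."""
    half = K // 2
    for dr in range(-half, half + 1):
        for dc in range(-half, half + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < height and 0 <= cc < width:
                yield (rr, cc), value

def reduce_median(values):
    """Median of all neighbourhood values gathered for one output pixel."""
    return int(median(values))

def median_filter(image):
    h, w = len(image), len(image[0])
    shuffle = defaultdict(list)          # stands in for the shuffle phase
    for r in range(h):
        for c in range(w):
            for key, val in map_pixel(r, c, image[r][c], h, w):
                shuffle[key].append(val)
    return [[reduce_median(shuffle[(r, c)]) for c in range(w)] for r in range(h)]

if __name__ == "__main__":
    noisy = [[10, 255, 12],    # 255 and 0 mimic salt-and-pepper outliers
             [11,   0, 13],
             [12,  14, 11]]
    print(median_filter(noisy))
```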
- item: Conference-Abstract
  Title: Image filtering with MapReduce in pseudo-distribution mode
  Authors: Gamage, TD; Samarawickrama, JG; Rodrigo, R; Pasqual, AA
  Abstract: The massive volume of video and image data compels it to be stored in a distributed file system. To process the data stored in the distributed file system, Google proposed a programming model named MapReduce. Existing methods of processing images held in such a distributed file system require the whole image, or a substantial portion of it, to be streamed every time a filter is applied. In this work an image filtering technique using the MapReduce programming model is proposed, which requires the image to be streamed only once. The proposed technique extends to a cascade of image filters with the constraint of a fixed kernel size. To verify the proposed technique for a single filter, a median filter is applied to an image with salt-and-pepper noise. In addition, a corner detection algorithm is implemented with the use of a filter cascade. Comparison of the noise-filtering and corner-detection results with the corresponding CPU version shows the accuracy of the method.
- item: Thesis-Abstract
  Title: Multiple Degree of Freedom Stereo Camera Platform for Active Vision: Designing the Core Architecture of the Processor
  Date: 2016-05-18
  Authors: Samarawickrama, JG; Pasqual, AA
  Abstract: Vision is our most powerful sense. It provides us with a remarkable amount of information about our surroundings and enables us to interact intelligently with the environment, all without direct physical contact. Vision is also our most complicated sense. The knowledge we have accumulated about how biological vision systems operate is still fragmentary. Nature has proven capable of creating versatile and flexible vision systems, which are much more efficient than all artificial vision systems designed so far. Therefore, the comprehension of some of the biological principles of vision has brought important ideas and concepts to the development of computational vision. One of the main goals of the research is to develop a high-performance stereo active vision head that can be used for studying human vision. The head consists of two eye modules and a neck module on which the two eyes are mounted. The camera platform has a total of seven degrees of freedom: three in its neck and two in each of its eyes. Stepper motors with custom-built gear wheels drive all the degrees of freedom, and the motors are used in a closed-loop control system with sequential optical encoders providing position feedback. This research also focuses on implementing an FPGA (Field Programmable Gate Array) based microprocessor that interprets the instructions given by the user, calculates all the parameters necessary for driving the motors as required, and controls the motors accordingly. This stand-alone processor includes several floating-point units operating in parallel with the other motor-control units. A total of five floating-point operations can be performed in parallel: four addition (or subtraction) and one multiplication (or division) operations. In addition, a CORDIC processor runs in parallel to calculate trigonometric functions and square roots. Altogether, the processor gives a considerable improvement in performance in terms of speed by exploiting parallelism. The results show that the FPGA-based microprocessor for controlling the multiple-degree-of-freedom stereo vision head is very efficient for active vision.
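The CORDIC co-processor mentioned in the abstract evaluates trigonometric functions with shift-and-add iterations. The following software model is a minimal sketch of rotation-mode CORDIC for sine and cosine; the iteration count and the use of floating-point arithmetic are illustrative choices and do not reflect the thesis's fixed-point FPGA design.

```python
# Illustrative fixed-iteration CORDIC (rotation mode) for sine/cosine, the
# kind of hardware-friendly shift-and-add algorithm the thesis's co-processor
# implements. This software model only shows the structure of the algorithm.
import math

N_ITER = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]   # arctangent table
K = 1.0
for i in range(N_ITER):                                   # aggregate gain factor
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Return (cos(theta), sin(theta)) for theta in [-pi/2, pi/2]."""
    x, y, z = K, 0.0, theta                 # pre-scale by K to cancel the gain
    for i in range(N_ITER):
        d = 1.0 if z >= 0 else -1.0         # rotate toward the target angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y

if __name__ == "__main__":
    c, s = cordic_sin_cos(math.pi / 6)
    print(round(c, 4), round(s, 4))   # ~0.8660, ~0.5000
```

In hardware the multiplications by 2^-i become bit shifts and the arctangent table is a small ROM, which is what makes the algorithm attractive for an FPGA implementation.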
- item: Conference-Abstract
  Title: Multiple objects tracking with a surveillance camera system
  Date: 2011
  Authors: Bandaragoda, TR; Dilhari, UDC; Kumarasinghe, CS; Mallikarachchi, DTR; Samarawickrama, JG; Pasqual, AA
  Abstract: Increasing use of CCTV for city and building surveillance has given rise to an environment where an object (a person) might traverse the fields of view of many cameras. In this paper we explore the problem of tracking multiple objects in a multi-camera environment, which is a widely addressed area in computer vision. Our research involves real-time tracking of objects while they are moving in a multi-camera environment with non-overlapping fields of view, and detecting them when they re-appear in the same or another camera in the same system. Previous methods of using offline-trained classifiers with huge databases are time consuming and are incapable of detecting arbitrarily selected objects. We address this issue with online training from the given initial sample, based on the TLD (Tracking, Learning, Detection) framework. We extend the idea to formulate our methodology to create a framework that can track multiple objects in multiple video streams in real time. We have developed the upper layers as a thread-based architecture in order to incorporate multiple video feeds and to handle multiple objects. We have integrated the CUDA (Compute Unified Device Architecture) programming model to add parallelism to independent processes and execute compute-intensive algorithms. GPU computing offers an ideal computing environment to improve our framework. Our optimization of the algorithms, careful use of parallel computing, and proper utilization of GPU resources have contributed to achieving a processing time of less than 60 ms for multiple objects in a multi-camera environment.
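A minimal sketch of the thread-per-stream layout described above is shown below, using OpenCV's TLD tracker (exposed as cv2.legacy.TrackerTLD_create in opencv-contrib-python builds). The video sources, initial bounding boxes, and thread count are placeholders, and neither the paper's multi-object bookkeeping nor its CUDA acceleration is reproduced.

```python
# Hedged sketch of a thread-per-camera layout: one worker thread per video
# stream, each running an OpenCV TLD tracker initialised from a given sample
# bounding box. File names and boxes are hypothetical placeholders.
import threading
import cv2

def track_stream(source, init_box, name):
    """Track one object in one camera feed with the (contrib) TLD tracker."""
    cap = cv2.VideoCapture(source)
    ok, frame = cap.read()
    if not ok:
        return
    tracker = cv2.legacy.TrackerTLD_create()   # needs opencv-contrib-python
    tracker.init(frame, init_box)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        if found:
            print(f"{name}: object at {tuple(int(v) for v in box)}")
    cap.release()

if __name__ == "__main__":
    feeds = [("cam1.mp4", (50, 60, 120, 90)),   # (x, y, w, h) initial samples
             ("cam2.mp4", (30, 40, 100, 80))]
    workers = [threading.Thread(target=track_stream, args=(src, box, f"cam{i}"))
               for i, (src, box) in enumerate(feeds, start=1)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```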
- item: Thesis-Abstract
  Title: Real-time object tracking and surveillance using a parallel computing architecture
  Date: 2015-03-01
  Authors: Gamage, TD; Samarawickrama, JG; Pasqual, A
  Abstract: Closed-circuit television (CCTV) cameras are used widely in surveillance applications where operators need to constantly monitor the videos on the video wall. The objective of this research is to improve the efficiency of the personnel who monitor the videos in vehicle surveillance applications. Two types of vehicle surveillance are considered: the detection of vehicles coming to a stop, and the tracking of moving vehicles through multiple cameras. The event of a vehicle coming to a stop occurs in situations such as vehicles stopping at a toll plaza on an expressway or in a car park. The purpose of detecting a vehicle coming to a stop is to minimize frauds which may occur during the toll collection process; the approach to minimizing such frauds is to use the vehicle count as a reference. The use of Graphics Processing Units (GPUs) to process the videos reduces the average execution time from 0.096 s to 0.075 s. The detection and tracking of moving vehicles through multiple cameras is considered as the second type of vehicle surveillance. These cameras are fixed at different locations, and the same vehicle may appear on different cameras at different times; it is a tedious process to manually track these vehicles through non-overlapping cameras. In the approach to tracking moving vehicles through multiple cameras, the processing power of GPUs is used. The GPUs parallelize the detection algorithm to achieve real-time performance for two video streams which are processed concurrently. The algorithm which matches the vehicles through multiple cameras gives an accuracy of over 80%. In both the detection of a vehicle coming to a stop and the detection and tracking of moving vehicles through multiple cameras, the processing power of GPUs is used to reduce the per-frame processing time and achieve real-time performance.
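The "vehicle coming to a stop" event described in the last abstract can be approximated on the CPU with background subtraction plus frame differencing inside a lane region of interest, as in the hedged sketch below. The ROI, thresholds, stillness window, and video file name are all hypothetical, and the thesis's GPU pipeline (which cuts the average per-frame time from 0.096 s to 0.075 s) is not reproduced here.

```python
# Hedged, CPU-only sketch of stop-event detection: a lane region of interest
# counts as occupied-and-stationary when background subtraction says the lane
# is occupied while consecutive frame differences show almost no motion.
import cv2
import numpy as np

ROI = (100, 200, 160, 120)          # x, y, w, h of a hypothetical toll lane
STILL_FRAMES = 25                   # ~1 s of stillness at 25 fps (assumed)

def detect_stops(video_path):
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=300)
    x, y, w, h = ROI
    prev, still, frame_idx = None, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        lane = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        # lane counts as occupied if >20% of its foreground mask is set
        occupied = (bg.apply(frame)[y:y+h, x:x+w] > 0).mean() > 0.2
        # arbitrary motion threshold on the mean absolute frame difference
        moving = prev is not None and np.abs(lane.astype(np.int16) - prev).mean() > 2
        prev = lane.astype(np.int16)
        still = still + 1 if (occupied and not moving) else 0
        if still == STILL_FRAMES:
            print(f"vehicle stopped in lane at frame {frame_idx}")
        frame_idx += 1
    cap.release()

if __name__ == "__main__":
    detect_stops("toll_plaza.mp4")   # placeholder file name
```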