On-line learning of shape information

John Chiverton, Majid Mirmehdi, Xianghua Xie

Tracking objects while simultaneously identifying an accurate outline of the tracked object is a complicated computer vision problem because of the changing nature of the high-dimensional image information. Prior information is often included in models, such as probability distribution functions over a prior definition of shape, to alleviate potential problems due to, e.g., ambiguity as to what should actually be tracked in the image data. However, supervised learning and/or training is not always possible for new, unseen objects or unforeseen configurations of shape, e.g. for silhouettes of 3-D objects. We are therefore investigating ways to include high-level shape information in active-contour-based tracking frameworks without a supervised pre-processing stage.

Bio-Inspired 3D Mapping

Geoffrey Daniels

Supervised by: David Bull, Walterio Mayol-Cuevas, J Burn

Using state-of-the-art computer vision techniques, and insights into the biological processes animals use to traverse terrain, we have created a system that enables a robotic platform to gather the information required to move safely through an unknown environment.  A goal of this project is to produce a system that can run in real time on an arbitrary locomotion platform and provide local route planning and hazard detection.  With this real-time aim in mind, the core parts of the current algorithm have been developed in NVIDIA's CUDA language for general-purpose computing on GPUs, as the code is embarrassingly parallel and GPUs provide a large speed increase for such workloads. Currently, without significant optimisation, the system can compute the 3D surface ahead of the camera in approximately 100 ms.
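To illustrate why this kind of computation parallelises so well: each output pixel of a depth map depends only on the corresponding input pixel, so every pixel can be processed by its own GPU thread. The minimal sketch below shows the idea on the CPU with NumPy vectorisation (the actual system uses CUDA); the focal length and baseline values are illustrative assumptions, not the system's parameters.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=700.0, baseline_m=0.1):
    """Convert a stereo disparity map to a depth map, pixel by pixel.

    Each output pixel depends only on the corresponding input pixel,
    so the computation is embarrassingly parallel (here via NumPy
    vectorisation; on a GPU, one CUDA thread per pixel).
    focal_px and baseline_m are illustrative values only.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0            # zero disparity => point at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a tiny 2x2 disparity map
d = np.array([[7.0, 14.0],
              [0.0, 70.0]])
z = disparity_to_depth(d)  # depths: 10 m, 5 m, inf, 1 m
```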

This system will be a module of a larger grant to develop a bio-inspired concept system for overall terrestrial perception and safe locomotion.

Interesting Results

Some example footage of the system generating a virtual 3D world from a single camera in real time:
https://www.youtube.com/watch?v=h36hVOerMFU&list=PLJmQZRbc9yWWrg6A0R_NFYHl6WP4FoNe9#t=15

Robust visual SLAM using higher-order structure

Jose Martinez Carranza, Andrew Calway

We are working on the problem of simultaneous localisation and mapping (SLAM) using a single hand-held camera as the only sensor.  Specifically, we are looking at how to extract a richer representation of the environment using the map generated by the system and the visual information obtained from the camera.

We have developed a method to identify planar structures in the environment based on the cloud of points generated by the SLAM system and the appearance information contained in the images captured by the camera. Our method evaluates the validity of points in the map as part of a physical plane in the world using a statistical framework that incorporates the uncertainty of the estimates of both the camera and the map obtained with the typical EKF-based visual SLAM framework. Besides the benefit of a better visual representation of the map, we are investigating how to exploit these structures to improve the estimation.
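One ingredient of such a statistical test can be sketched as follows: given a candidate plane and a map point whose covariance comes from the EKF state, the point-to-plane distance is normalised by its propagated variance and compared against a chi-square gate. This is a simplified illustration under assumed values, not the exact formulation used in the system.

```python
import numpy as np

def point_on_plane(p, P, n, d, chi2_thresh=3.84):
    """Gate a map point against a plane hypothesis.

    p : (3,) point estimate from the SLAM map
    P : (3,3) covariance of the point (from the EKF state)
    n : (3,) unit plane normal; d : offset (n.p + d = 0 on the plane)

    The signed distance e = n.p + d has variance s2 = n' P n; the point
    is accepted as on-plane if e^2 / s2 falls below a chi-square (1 dof)
    threshold.  3.84 corresponds to a 95% gate; all values illustrative.
    """
    p, n = np.asarray(p, float), np.asarray(n, float)
    e = float(n @ p + d)
    s2 = float(n @ np.asarray(P, float) @ n)
    return e * e / s2 <= chi2_thresh

# A point 1 cm off the plane z = 0, with 1 cm std dev, passes the gate;
# a point 10 cm off with the same uncertainty is rejected.
P = np.diag([1e-4, 1e-4, 1e-4])
point_on_plane([0.5, 0.2, 0.01], P, [0.0, 0.0, 1.0], 0.0)  # True
point_on_plane([0.5, 0.2, 0.10], P, [0.0, 0.0, 1.0], 0.0)  # False
```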

Robust Visual SLAM for Fast Moving Platforms

Dr. Jose Martinez-Carranza

In recent years, considerable progress has been made in what is known as visual Simultaneous Localisation and Mapping (SLAM).

Visual SLAM is a technology that provides fast, accurate 6D pose estimation of a moving camera and a 3D representation of the scene observed by the camera. Applications of this technology include navigation in GPS-denied environments, virtual augmentation of objects in video footage, video-game interaction, etc.
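The 6D pose (rotation plus translation) and the 3D map are linked by the camera projection: a world point transformed by the estimated pose should land where it is observed in the image. A minimal sketch of that relation, assuming illustrative pinhole intrinsics (fx, fy, cx, cy) not tied to any particular system:

```python
import numpy as np

def project(point_w, R, t, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project a 3D world point into the image given a 6D camera pose.

    The pose is the rotation R (3x3) and translation t (3,) taking
    world coordinates to camera coordinates.  fx, fy, cx, cy are
    assumed pinhole intrinsics for illustration only.
    """
    p_c = np.asarray(R, float) @ np.asarray(point_w, float) + np.asarray(t, float)
    u = fx * p_c[0] / p_c[2] + cx
    v = fy * p_c[1] / p_c[2] + cy
    return np.array([u, v])

# Identity pose: a point on the optical axis projects to the principal point.
project([0.0, 0.0, 2.0], np.eye(3), np.zeros(3))  # -> [320., 240.]
```

Visual SLAM systems estimate R and t (and the map points) by minimising the discrepancy between such predicted projections and the actual image measurements.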

Despite these achievements, challenges remain. A practical but important one is running visual SLAM systems on low-budget platforms where computing power and memory are limited.

Accordingly, my research focuses on the design of strategies that allow visual SLAM systems to keep working on low-budget platforms without sacrificing real-time response. This also includes maintaining robustness against loss of tracking, vibration, image blur and strong changes in lighting conditions.

Applications of my research are oriented toward fast-moving robotic platforms such as walking robots, mobile vehicles and Unmanned Aerial Vehicles (UAVs).

Full details about my ongoing research can be found here.