Hardware-accelerated Video Fusion

This project aims to produce a low-power demonstrator for real-time video fusion using a hybrid SoC device that combines a low-power multi-core Cortex-A9 processor with an FPGA fabric. The methodology uses a fusion algorithm developed at Bristol based on the dual-tree complex wavelet transform (DT-CWT). The transform is applied in forward and inverse mode, together with configurable fusion rules, to produce high-quality fused output.
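As a rough illustration of this pipeline, the sketch below fuses two pre-registered images with the open-source dtcwt Python package. The fusion rule shown (maximum coefficient magnitude for the highpass subbands, mean for the lowpass residual) is a common textbook choice assumed here for illustration; the configurable rules in the Bristol implementation may differ.

```python
# Sketch of DT-CWT image fusion with the open-source `dtcwt` package.
# Assumes the two inputs are grayscale, the same size and registered.
import numpy as np
import dtcwt

def fuse(visible, thermal, nlevels=4):
    t = dtcwt.Transform2d()
    a = t.forward(visible.astype(float), nlevels=nlevels)  # forward DT-CWT
    b = t.forward(thermal.astype(float), nlevels=nlevels)

    # Fusion rule: per coefficient, keep the source with larger magnitude.
    highpasses = tuple(
        np.where(np.abs(ha) > np.abs(hb), ha, hb)
        for ha, hb in zip(a.highpasses, b.highpasses))

    # Average the coarse lowpass residuals.
    lowpass = 0.5 * (a.lowpass + b.lowpass)

    # Inverse DT-CWT of the fused pyramid.
    return t.inverse(dtcwt.Pyramid(lowpass, highpasses))
```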

The dual-tree complex wavelet transforms account for around 70% of the total computational complexity. The wavelet accelerator designed at Bristol offloads this work to the FPGA fabric and speeds up the whole application by a factor of four. It also has a significant positive impact on overall energy consumption, while the increase in power is negligible because the fabric works in parallel with the main processor. Note that if the optimisation criterion is power rather than performance or energy, the processor and fabric could lower their clock frequency and voltage to obtain a significant reduction in power at the same energy and performance levels.

This project has built a system extended with frame-capture capabilities using thermal and visible-light cameras. You can see the system working in our labs at this link: hardware accelerated video fusion

This project has been funded by the Technology Strategy Board under its energy-efficient computing programme, with Qioptiq Ltd as industrial collaborator.

This research will be presented and demonstrated at FPL 2015 in London in September.

Video super-resolution

Motion-compensated video super-resolution is a technique that uses the sub-pixel shifts between multiple low-resolution images of the same scene to create higher-resolution frames with improved quality. An important concept is that, thanks to the sub-pixel displacements of picture elements across the low-resolution frames, it is possible to recover high-frequency content beyond the Nyquist limit of the sampling equipment. Super-resolution algorithms exploit the fact that, as objects move in front of the camera sensor, picture elements captured in one frame may fall between pixel positions in the next if the movement does not extend to a whole pixel. These algorithms track such additional picture elements and position them in the high-resolution frame. The resulting video quality is significantly better than that of techniques that use only the information in a single low-resolution frame to create each high-resolution frame.
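As a minimal sketch of this idea (not the Bayesian algorithm cited at the end of this section), the following shift-and-add code assumes the sub-pixel shift of each low-resolution frame is already known from motion estimation, and simply places each low-resolution sample onto a finer grid:

```python
# Shift-and-add super-resolution sketch: known sub-pixel shifts map
# low-resolution samples onto a high-resolution grid, where they are
# accumulated and averaged.
import numpy as np

def shift_and_add(frames, shifts, scale):
    """frames: list of HxW arrays; shifts: per-frame (dy, dx) in
    low-res pixels; scale: integer upsampling factor."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-res grid cell for each shifted low-res sample.
        hy = np.clip(np.rint((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.rint((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)   # accumulate samples
        np.add.at(cnt, (hy, hx), 1)       # count contributions
    return acc / np.maximum(cnt, 1)       # cells never hit stay zero
```

In practice the unfilled cells are interpolated and the result deblurred; the Bayesian model in the paper referenced below handles these steps jointly.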

Super-resolution techniques can be applied in many areas, including intelligent personal identification, medical imaging, security and surveillance, and are of special interest in applications that demand low-power, low-cost sensors. The key idea is that increasing the pixel size improves the signal-to-noise ratio while reducing the cost and power of the sensor. Larger pixels collect more light, and the blur introduced by diffraction is also reduced. Diffraction is a bigger issue with smaller pixels, so sensors with larger pixels perform better, giving sharper images with higher contrast in the fine details, especially in low-light conditions.

Increasing the pixel size means that fewer pixels fit on the sensor, which reduces the sensor resolution. The low-resolution sensor then needs to process and transmit less information, which results in lower power and cost. Super-resolution algorithms running on the receiver side can be used to recover high-quality, high-resolution video while maintaining a constant frame rate.

Overall, super-resolution enables the system that captures and transmits the video data to be based on low-power and low-cost components while the receiver still obtains a high-quality video stream.

This project has been sponsored by the Centre for Defence Enterprise and DSTL under the Generic Enablers for Low-Size, Weight, Power and Cost (SWAPC) Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) program.

Click to see some examples:

1: before (car number plate in) and after super-resolution (car number plate SR)

2: before (vehicles in) and after super-resolution (vehicles SR)

and learn about the theory behind the algorithm: Chen, J., Nunez-Yanez, J. L. & Achim, A. 2014, ‘Bayesian video super-resolution with heavy-tailed prior models’, IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, pp. 905-914.

Generic motion-based object segmentation for assisted navigation

CASBliP – Computer Aided System for the Blind

In the CASBliP project, we proposed a robust approach to annotating independently moving objects captured by head-mounted stereo cameras worn by an ambulatory (and visually impaired) user. Initially, sparse optical flow is extracted from a single image stream, in tandem with dense depth maps. Then, using the assumption that the apparent movement generated by camera egomotion is dominant, flow corresponding to independently moving objects (IMOs) is robustly segmented using MLESAC. Next, the mode depth of the feature points defining this flow (the foreground) is obtained by aligning them with the depth maps. Finally, a bounding box is scaled proportionally to this mode depth and robustly fitted to the foreground points such that the number of inliers is maximised. The system runs at around 8 fps and has been tested by visually impaired volunteers.
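A hedged sketch of the flow-segmentation step is given below using OpenCV. OpenCV ships RANSAC rather than MLESAC, so a RANSAC-fitted dominant homography stands in for the egomotion model here, and its outliers become the candidate IMO points; the function and parameter choices are illustrative, not taken from the CASBliP code.

```python
# Segment candidate independently moving object (IMO) points as the
# outliers of a dominant (egomotion) homography fitted to sparse flow.
import cv2
import numpy as np

def segment_imo_points(prev_gray, next_gray):
    # Sparse features and their optical flow (Lucas-Kanade).
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                              pts, None)
    ok = status.ravel() == 1
    p0 = pts[ok].reshape(-1, 2)
    p1 = nxt[ok].reshape(-1, 2)

    # Dominant motion is assumed to be camera egomotion; flow vectors
    # that do not fit it are treated as independently moving objects.
    _, inliers = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)
    return p1[inliers.ravel() == 0]
```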

For more information, see CASBliP – Computer Aided System for the Blind.

Human pose estimation using motion

Ben Daubney, David Gibson, Neill Campbell

Currently we are researching how to extract human pose from a sparse set of moving features. This work is inspired by psychophysical experiments using the Moving Light Display (MLD), where it has been shown that a small set of moving points attached to the key joints of a person can convey a wealth of information to an observer about the person being viewed, such as their mood or gender. Unlike the typical MLDs used in the psychophysics community, ours are automatically generated by applying a standard feature tracker to a sequence of images.

The result is a set of features that are far noisier and less reliable than those traditionally used. The purpose of this research is to better understand how the temporal dimension of a sequence of images can be exploited, at a much lower level than is currently used, to estimate pose.

Analysis of moth camouflage


David Gibson, Neill Campbell

A half-million-pound BBSRC collaboration with Biological Sciences and Experimental Psychology, this project aims to develop a computational theory of animal camouflage, with models specific to the visual systems of birds and humans. Moths have been chosen for this study as they are particularly good demonstrators of a wide range of cryptic and disruptive camouflage in nature. Using psychophysically plausible low-level image features, learning algorithms are used to determine the effectiveness of camouflage examples. The ability to generate and process large numbers of camouflage examples enables predictive computational models to be created and compared with the performance of human and bird subjects. Such comparisons will give insight into which aspects of moth camouflage are important for avoiding detection and recognition by birds and humans and, thereby, into the mechanisms employed by bird and human visual systems.

Active contours

Majid Mirmehdi, Xianghua Xie, Ronghua Yang

Active contours finding boundaries in the brain

Active contour models, commonly known as snakes, have been widely used for object localisation, shape recovery, and visual tracking due to their natural handling of shape variations. The introduction of the Level Set method into snakes has greatly enhanced their potential in real-world applications.

Since 2002, we have developed several novel active contour models. The first aims to bridge the boundary-based (image gradient) approach and the region-based approach. In this work, a level-set based geometric snake, enhanced for greater tolerance of weak edges and noise, is introduced. It is based on the principle of conjoining the traditional gradient flow forces with new region constraints. We refer to this as the Region-aided Geometric Snake, or RAGS. The image gradient provides local information about object boundaries, while the region information offers a global definition of boundaries. In this framework, the region constraints can be conveniently customised and plugged into the snake model.
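To make this concrete, here is a generic sketch of one level-set update step that combines an edge-stopping curvature term with a region speed term, in the spirit of RAGS; it is a simplified illustration under those assumptions, not the published RAGS formulation.

```python
# One generic geometric-snake update: edge-weighted curvature flow plus
# a region-aided speed term (simplified; not the exact RAGS equations).
import numpy as np

def evolve(phi, g, region_force, dt=0.1, nu=1.0):
    """phi: level-set function; g: edge-stopping function in [0, 1],
    small on strong edges; region_force: signed region constraint."""
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-8

    # Curvature of the level sets: div(grad(phi) / |grad(phi)|).
    curvature = (np.gradient(gx / mag, axis=1) +
                 np.gradient(gy / mag, axis=0))

    # Edge term pulls the contour to boundaries; region term pushes it
    # in or out according to the sign of the region constraint.
    return phi + dt * (g * curvature + nu * region_force) * mag
```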

The second model, called the Charged Contour Model (CCM), is a migration of the Charged Particle Model (CPM) into the active contour framework. The basic idea is to introduce particle dynamics into contour-based deformable models. CCM performs better than CPM in the sense that it guarantees closed contours, i.e. it eliminates the ambiguities in contour reconstruction; it is also much more efficient. In comparison with the geodesic snake, CCM is more robust to weak edges and less sensitive to noise interference.

The third model, CACE (Charged Active Contour model based on Electrostatics), is a further development of the CCM. The snake, implicitly embedded in level sets, propagates under the joint influence of a boundary attraction force and a boundary competition force. Its vector field adapts dynamically, updating itself when a contour reaches a boundary (which differs from CCM). The model is therefore less sensitive to initialisation and possesses better convergence abilities. Analytical and comparative results are presented on synthetic and real images.

The MAC (Magnetostatic Active Contour) model is the result of our most recent effort in developing new active contour models. Its external force field is based on magnetostatics and hypothesised magnetic interactions between the active contour and object boundaries. The major contribution of the method is that the interaction of these forces greatly improves the active contour's ability to capture complex geometries and to deal with difficult initialisations, weak edges and broken boundaries. The proposed method is shown to achieve significant improvements when compared against six well-known, state-of-the-art shape recovery methods, including the geodesic snake, the generalised GVF snake, the combined geodesic and GVF snake, and the charged particle model.

For more information, please see our active contours site.