Mitigating the effects of atmospheric turbulence on surveillance imagery

Various types of atmospheric distortion can degrade the visual quality of video signals during acquisition. Typical distortions include fog or haze, which reduce contrast, and atmospheric turbulence caused by temperature variations or aerosols. Temperature variation produces a spatially and temporally varying refractive index along the optical path, which is observed as unclear, unsharp, wavering images of the objects in the scene. This makes the acquired imagery significantly more difficult to interpret.

This project introduced a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence, which can severely degrade a region of interest (ROI). In order to recover accurate detail of objects behind the distorting layer, a simple and efficient frame selection method is proposed to pick informative ROIs from only the good-quality frames. We solve the space-variant distortion problem using region-based fusion in the Dual-Tree Complex Wavelet Transform (DT-CWT) domain. We also propose an object alignment method for pre-processing the ROI, since it can exhibit significant offsets and distortions between frames. Simple haze removal is used as the final step. We refer to this algorithm as CLEAR (Complex waveLEt fusion for Atmospheric tuRbulence); for the code, please contact me. [PDF] [VIDEOS]
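As a rough illustration of the fusion stage, the sketch below selects the sharpest frames and fuses them in the DT-CWT domain using the open-source dtcwt Python package. The gradient-based sharpness score and the max-magnitude fusion rule are simplified stand-ins for the region-based quality measures used in the published method, so treat this as a minimal sketch rather than the CLEAR implementation.

```python
# Minimal sketch of frame selection + DT-CWT fusion, not the published
# CLEAR implementation: the sharpness score and max-magnitude fusion rule
# below are simplified stand-ins for the paper's region-based criteria.
import numpy as np
import dtcwt  # pip install dtcwt

def select_frames(frames, keep=10):
    """Keep the sharpest greyscale frames, scored by mean gradient magnitude."""
    scores = [np.mean(np.abs(np.gradient(f.astype(float)))) for f in frames]
    best = np.argsort(scores)[::-1][:keep]
    return [frames[i] for i in best]

def fuse_dtcwt(frames, nlevels=4):
    """Fuse frames in the DT-CWT domain: average the lowpass band and keep
    the largest-magnitude coefficient at each subband location."""
    transform = dtcwt.Transform2d()
    pyramids = [transform.forward(f.astype(float), nlevels=nlevels) for f in frames]
    lowpass = np.mean([p.lowpass for p in pyramids], axis=0)
    highpasses = []
    for level in range(nlevels):
        stack = np.stack([p.highpasses[level] for p in pyramids])
        idx = np.argmax(np.abs(stack), axis=0)
        highpasses.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return transform.inverse(dtcwt.Pyramid(lowpass, tuple(highpasses)))

# Usage: restored = fuse_dtcwt(select_frames(list_of_grey_frames))
```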

Atmospherically distorted videos of a static scene

Mirage (256×256 pixels, 50 frames). Left: distorted sequence. Right: restored image. Download PNG


Download other distorted sequences and references [here].

Atmospherically distorted videos of a moving object

Left: Distorted video. Right: Restored video. Download PNG

References

  • N. Anantrasirichai, A. Achim, N. Kingsbury and D. Bull, "Atmospheric turbulence mitigation using complex wavelet-based fusion," IEEE Transactions on Image Processing. [PDF] [Sequences] [Code: please contact me]
  • N. Anantrasirichai, A. Achim, D. Bull and N. Kingsbury, "Mitigating the effects of atmospheric distortion using DT-CWT fusion," Proceedings of the IEEE International Conference on Image Processing (ICIP 2012). [PDF] [BibTeX]
  • "Mitigating the effects of atmospheric distortion on video imagery: A review," University of Bristol, 2011. [PDF]
  • "Mitigating the effects of atmospheric distortion," University of Bristol, 2012. [PDF]

Monitoring Vehicle Occupants

Visual Monitoring of Driver and Passenger Control Panel Interactions

Researchers

Toby Perrett and Prof. Majid Mirmehdi

Overview

Advances in vehicular technology have resulted in more controls being incorporated in cabin designs. We present a system to determine which vehicle occupant is interacting with a control on the centre console when it is activated, enabling the full use of dual-view touchscreens and the removal of duplicate controls. The proposed method relies on a background subtraction algorithm incorporating information from a superpixel segmentation stage. A manifold generated via the diffusion maps process handles the large variation in hand shapes, along with determining which part of the hand interacts with controls for a given gesture. We demonstrate superior results compared to other approaches on a challenging dataset.
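For readers unfamiliar with diffusion maps, the sketch below shows the basic embedding computation with NumPy only. Here X is assumed to be a matrix of vectorised hand-region masks, one row per frame, and eps is a free kernel bandwidth; the paper's actual feature extraction and manifold construction are more involved.

```python
# Bare-bones diffusion maps embedding (illustrative; the paper's pipeline
# adds background subtraction and superpixel segmentation before this step).
import numpy as np

def diffusion_maps(X, n_components=3, eps=1.0, t=1):
    """X: (n_samples, n_features). Returns (n_samples, n_components) coords."""
    # Gaussian kernel on pairwise squared Euclidean distances.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq / eps)
    # Row-normalise into a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecomposition; eigenvalue 1 / the constant eigenvector is trivial.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Diffusion coordinates: non-trivial eigenvectors scaled by eigenvalues^t.
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t
```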

Examples

Some example interactions with the dashboard of a Range Rover using near infra-red illumination and a near infra-red pass filter:

Some sample paths through a 3D manifold. The top row of images corresponds to a clockwise dial turn, the middle row corresponds to a button press with the index finger, and the bottom row shows how finer details, such as a thumb extending, can be determined:


References

This work has been accepted for publication in IEEE Transactions on Intelligent Transportation Systems. It is open access and can be downloaded from here:

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7486959

High Frame Rate Video

As the demand for higher quality and more immersive video content increases, so does the need to extend the current video parameter space of spatial resolutions and display sizes to include, among other things, a wider colour gamut, higher dynamic range and higher frame rates. Increased frame rates can provide a more realistic portrayal of a scene through a reduction in motion blur, while also minimizing temporal aliasing and the associated visual artefacts.

The BVI-HFR video database is the first publicly available high frame rate video database, and contains 22 unique HD video sequences at frame rates up to 120 Hz. Sample frames from some of the video sequences can be seen below:

Sample sequences include: sparkler, hamster, catch, flowers, bobblehead, cyclist.
Subjective evaluations by 51 participants on the sequences in the BVI-HFR video database have shown a clear relationship between frame rate and perceived quality (MOS), albeit with diminishing returns at higher rates. The results also revealed a degree of content dependency: for example, the benefits of higher frame rates are more likely to be observed in sequences with high motion speed (e.g. a moving camera).
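For reference, MOS values of this kind are simply per-condition averages of the raw opinion scores, usually reported with a 95% confidence interval; a minimal version (not the paper's exact screening and analysis procedure) might look like:

```python
# Hypothetical MOS computation for one sequence/frame-rate condition;
# the database paper's exact outlier screening is not reproduced here.
import numpy as np

def mos_with_ci(scores):
    """scores: ratings from all participants for one condition."""
    scores = np.asarray(scores, dtype=float)
    mos = scores.mean()
    ci95 = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))  # normal approx.
    return mos, ci95
```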

Publications

A. Mackin, F. Zhang and D. Bull, "A study of subjective video quality at various frame rates," IEEE International Conference on Image Processing (ICIP), 2015.

What’s on TV: A Large-Scale Quantitative Characterisation of Modern Broadcast Video Content

Video databases, used for benchmarking and evaluating the performance of new video technologies, should represent the full breadth of consumer video content. The parameterisation of video databases using low-level features has proven to be an effective way of quantifying the diversity within a database. However, without a comprehensive understanding of the importance and relative frequency of these features in the content people actually consume, the utility of such information is limited. In collaboration with the BBC, "What's on TV" is a large-scale analysis of the low-level features that exist in contemporary broadcast video. The project aims to establish an efficient set of features that can be used to characterise the spatial and temporal variation in modern consumer content. The meaning and relative significance of this feature set, together with the shapes of their frequency distributions, represent highly valuable information for researchers wanting to model the diversity of modern consumer content in representative video databases.
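As a concrete example of such low-level features, the spatial information (SI) and temporal information (TI) measures from ITU-T P.910 capture exactly this kind of spatial and temporal variation; a minimal implementation is sketched below. Note that the project's full feature set is broader than these two measures.

```python
# SI/TI from ITU-T P.910, one common pair of low-level spatio-temporal
# features; illustrative only, not the paper's full feature set.
import numpy as np
from scipy.ndimage import sobel

def si_ti(frames):
    """frames: sequence of 2-D greyscale (luma) arrays. Returns (SI, TI)."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    # SI: max over frames of the std of the Sobel gradient magnitude.
    si = max(np.hypot(sobel(f, axis=0), sobel(f, axis=1)).std() for f in frames)
    # TI: max over consecutive frame pairs of the std of the frame difference.
    ti = max((b - a).std() for a, b in zip(frames, frames[1:]))
    return si, ti
```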

Publications:

F. Mercer Moss, F. Zhang, R. Baddeley and D. Bull, "What's on TV: A large-scale quantitative characterisation of modern broadcast video content," IEEE International Conference on Image Processing (ICIP), 2016.


Adaptive Resolution Intra Coding

Delivering high resolution video in restricted bandwidth scenarios can be challenging. Part of the reason for this is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) pictures that feature in all video coding standards. Frequent coding of IDR frames is essential for error resilience, as it prevents the propagation of errors. However, since each one consumes a large portion of the available bitrate, the quality of subsequently coded frames is hindered by the high levels of compression required. This work investigates new adaptive resolution intra coding methods for improving the rate-distortion performance of the video codec.
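The core idea can be sketched outside a real codec as follows: downsample the IDR frame before intra coding, then upsample the decoded result back to full resolution. The encode/decode callables below are placeholders for an actual HEVC intra encoder, and real systems make the resampling decision adaptively on a rate-distortion basis.

```python
# Toy sketch of spatially resampled IDR coding; `encode`/`decode` stand in
# for a real HEVC intra codec, and the fixed `scale` replaces the adaptive,
# rate-distortion-based decision used in the actual work.
import cv2

def code_idr_resampled(frame, encode, decode, scale=0.5):
    h, w = frame.shape[:2]
    # Downsample before intra coding: fewer pixels -> fewer bits for the IDR.
    small = cv2.resize(frame, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    rec_small = decode(encode(small))
    # Upsample the decoded picture back to the display resolution.
    return cv2.resize(rec_small, (w, h), interpolation=cv2.INTER_CUBIC)
```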

 

B. Hosking, D. Agrafiotis, and D. Bull, “Spatial resampling of IDR frames for low bitrate video coding with HEVC,” IS&T/SPIE Electronic Imaging, San Francisco, Feb 2015.

B. Hosking, D. Agrafiotis, D. Bull, and N. Easton, "An adaptive resolution rate control method for intra coding in HEVC," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, March 2016.

 


 

Effect of resampled coding of an IDR frame on the quality of a reconstructed B frame in a Group of Pictures at a similar bitrate: left, resampled coding; middle, original; right, standard coding.

Remote Pulmonary Function Testing using a Depth Sensor

We propose a remote, non-invasive approach to pulmonary function testing using a time-of-flight depth sensor (Microsoft Kinect V2), and correlate our results with clinical-standard spirometry. Given point clouds, we approximate a 3D model of the subject's chest, estimate the chest volume throughout a sequence, and construct volume-time and flow-time curves for two prevalent spirometry tests: Forced Vital Capacity and Slow Vital Capacity. From these curves, we compute clinical measures such as FVC, FEV1, VC and IC. We correlate the automatically extracted measures with clinical spirometry tests on 40 patients in an outpatient hospital setting, demonstrating high within-test correlations.
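To make the measures concrete, the sketch below derives FVC, FEV1 and the flow signal from a volume-time curve, assuming the curve is sampled at fs Hz and the forced exhalation starts at full inspiration at sample 0; this is a textbook-style reduction, not the paper's full processing chain.

```python
# Clinical measures from a volume-time curve (illustrative assumptions:
# exhalation starts at sample 0 and `volume` is in litres, sampled at `fs` Hz).
import numpy as np

def spirometry_measures(volume, fs):
    volume = np.asarray(volume, dtype=float)
    exhaled = volume[0] - volume                     # cumulative exhaled volume (L)
    fvc = exhaled.max()                              # Forced Vital Capacity
    fev1 = exhaled[min(len(exhaled) - 1, int(fs))]   # volume exhaled in first second
    flow = np.gradient(exhaled) * fs                 # flow (L/s) for the flow-time curve
    return fvc, fev1, flow
```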

 

V. Soleimani, M. Mirmehdi, D. Damen, S. Hannuna, M. Camplani, J. Viner and J. Dodd, "Remote pulmonary function testing using a depth sensor," Biomedical Circuits and Systems Conference (BioCAS), 2015 IEEE, Atlanta, GA, 2015, pp. 1-4.
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7348445&isnumber=7348273