University of Bristol

About Us

The Visual Information Laboratory of the University of Bristol exists to undertake innovative, collaborative and interdisciplinary research resulting in world-leading technology in the areas of computer vision, image and video communications, content analysis and distributed sensor systems. VI-Lab was formed in 2010 through the merger of two well-established research groups: Signal Processing (EEEng) and Computer Vision (CS). The two constituent groups offer shared and complementary strengths and, with a history of successful collaboration since 1993, their merger has created one of the largest groupings of its type in the UK.

A SIMULATION ENVIRONMENT FOR DRONE CINEMATOGRAPHY

Fan Zhang, David Hall, Tao Xu, Stephen Boyle and David Bull

ABSTRACT: Simulations of drone camera platforms based on actual environments have been identified as being useful for shot planning, training and rehearsal for both…

BVI-SR: A STUDY OF SUBJECTIVE VIDEO QUALITY AT VARIOUS SPATIAL RESOLUTIONS

Alex Mackin, Mariana Afonso, Fan Zhang, and David Bull

ABSTRACT: BVI-SR contains 24 unique video sequences at a range of spatial resolutions up to UHD-1 (3840p). These sequences were used as the basis for a…

BVI-DVC: A Training Database for Deep Video Compression

Di Ma, Fan Zhang and David Bull

ABSTRACT: Deep learning methods are increasingly being applied in the optimisation of video compression algorithms and can achieve significantly enhanced coding gains compared to conventional approaches. Such approaches…

ICME2020 Grand Challenge: Encoding in the Dark

Sponsors: The awards will be sponsored by Facebook and Netflix.

Low light scenes often come with acquisition noise, which not only disturbs viewers but also makes video compression challenging. These types of…

Comparing VVC, HEVC and AV1 using Objective and Subjective Assessments

Fan Zhang, Angeliki Katsenou, Mariana Afonso, Goce Dimitrov and David Bull

ABSTRACT: In this paper, the performance of three state-of-the-art video codecs: High Efficiency Video Coding (HEVC) Test Model (HM), AOMedia Video 1 (AV1) and…

Evaluating viewing experience of drone videos

Stephen Boyle, Fan Zhang and David Bull

This page will contain a download link to the test content and validation results for the paper accepted at IEEE ICIP 2019.

ViSTRA: Video Compression based on Resolution Adaptation

Fan Zhang, Mariana Afonso and David Bull

ABSTRACT: We present a new video compression framework (ViSTRA2) which exploits adaptation of spatial resolution and effective bit depth, down-sampling these parameters at the encoder based on perceptual…
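
The sketch below illustrates the general resolution-adaptation idea described in this abstract: down-sample before encoding and up-sample after decoding. It is not the ViSTRA2 implementation; the ffmpeg/libx265 calls, the fixed 2x ratio and the bicubic/lanczos filters are assumptions standing in for the perceptual adaptation decision and the CNN-based up-sampling used by the framework.

# Minimal resolution-adaptation sketch (assumed tools: ffmpeg with libx265).
# ViSTRA2 selects resolution/bit depth perceptually and up-samples with a
# trained CNN; simple bicubic/lanczos filters are used here instead.
import subprocess

def encode_with_resolution_adaptation(src, bitstream, recon,
                                       full_res=(3840, 2160), ratio=2, crf=28):
    low_w, low_h = full_res[0] // ratio, full_res[1] // ratio

    # Encoder side: spatially down-sample, then compress the low-resolution video.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", f"scale={low_w}:{low_h}:flags=bicubic",
         "-c:v", "libx265", "-crf", str(crf), bitstream],
        check=True)

    # Decoder side: decode and up-sample back to full resolution (raw .y4m output).
    subprocess.run(
        ["ffmpeg", "-y", "-i", bitstream,
         "-vf", f"scale={full_res[0]}:{full_res[1]}:flags=lanczos",
         recon],
        check=True)

encode_with_resolution_adaptation("source_2160p.y4m", "low_res.mp4",
                                  "reconstructed_2160p.y4m")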

Rate-distortion Optimization Using Adaptive Lagrange Multipliers

Fan Zhang and David Bull

ABSTRACT: This page introduces work on rate-distortion optimisation using adaptive Lagrange multipliers. In current standardised hybrid video encoders, the Lagrange multiplier determination model is a key component in rate-distortion optimisation…
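
As background to the abstract above, hybrid encoders choose among candidate coding modes by minimising a Lagrangian cost J = D + λR, where D is the distortion, R is the rate and λ is the Lagrange multiplier. The snippet below is a toy illustration of that selection step with made-up numbers; the adaptive multiplier model introduced on this page replaces the fixed λ values used here.

# Toy illustration of Lagrangian mode selection: J = D + lambda * R.
# The distortion (SSE) and rate (bits) values are invented for illustration only.

def best_mode(candidates, lam):
    """Return the candidate (name, distortion, rate_bits) with minimum J = D + lam * R."""
    return min(candidates, key=lambda m: m[1] + lam * m[2])

modes = [
    ("skip",        900.0,  12),   # cheap in rate, high distortion
    ("inter_16x16", 310.0, 220),
    ("intra_4x4",   180.0, 640),   # low distortion, expensive in rate
]

for lam in (0.5, 2.0, 8.0):  # a larger multiplier shifts the choice towards low-rate modes
    name, d, r = best_mode(modes, lam)
    print(f"lambda={lam}: {name} (J={d + lam * r:.1f})")

Running the loop shows the effect of the multiplier: small λ selects the low-distortion intra/inter modes, while large λ favours the cheap skip mode, which is why the choice of λ is central to encoder performance.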

FRQM: A Frame Rate Dependent Video Quality Metric

Fan Zhang, Alex Mackin and David Bull

ABSTRACT: This page introduces an objective quality metric (FRQM), which characterises the relationship between variations in frame rate and perceptual video quality. The proposed method…

VI-Lab PhD Research Opportunities

Funded Opportunities

VI-Lab currently has fully funded PhD opportunities in the following areas:

Title: Using optical coherence imaging to determine visual prognosis in neurological disease
Supervisors: Professor Alin Achim and Dr Denize Atan (Bristol…
