Alex Mackin, Mariana Afonso, Fan Zhang, and David Bull
BVI-SR contains 24 unique video sequences at a range of spatial resolutions up to UHD-1 (3840×2160). These sequences were used as the basis for a large-scale subjective experiment exploring the relationship between visual quality and spatial resolution when using three distinct spatial adaptation filters (including a CNN-based super-resolution method). The results demonstrate that while spatial resolution has a significant impact on mean opinion scores (MOS), no significant reduction in visual quality between UHD-1 and HD resolutions was observed for the super-resolution method. A selection of image quality metrics was benchmarked against the subjective evaluations, and the analysis indicates that VIF offers the best performance.
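Benchmarking a metric against MOS, as above, is usually reported via the Spearman rank-order correlation coefficient (SROCC). A minimal sketch of that computation, using invented scores (not the BVI-SR data):

```python
# Hypothetical sketch: rank a quality metric against MOS. All scores
# below are made up for illustration, not taken from the BVI-SR study.
def srocc(xs, ys):
    """Spearman rank correlation for score lists without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

mos = [82.1, 74.5, 63.0, 51.2, 40.8]   # mean opinion scores per clip
vif = [0.91, 0.84, 0.70, 0.55, 0.42]   # metric values for the same clips
print(srocc(mos, vif))  # 1.0: the metric ranks the clips exactly like MOS
```

A value of 1.0 means perfect rank agreement with the subjective scores; a good metric on real data lands somewhere below that.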
[DOWNLOAD] subjective data, instructions and related files.
[DOWNLOAD] all videos from University of Bristol Research Data Storage Facility.
If this content has been mentioned in a research publication, please give credit to the University of Bristol, by referencing the following paper:
 A. Mackin, M. Afonso, F. Zhang and D. Bull, “A study of subjective video quality at various spatial resolutions”, IEEE ICIP, 2018.
Deep learning methods are increasingly being applied in the optimisation of video compression algorithms and can achieve significantly enhanced coding gains, compared to conventional approaches. Such approaches often employ Convolutional Neural Networks (CNNs) which are trained on databases with relatively limited content coverage. In this paper, a new extensive and representative video database, BVI-DVC, is presented for training CNN-based coding tools. BVI-DVC contains 800 sequences at various spatial resolutions from 270p to 2160p and has been evaluated on ten existing network architectures for four different coding tools. Experimental results show that the database produces significant improvements in terms of coding gains over three existing (commonly used) image/video training databases, for all tested CNN architectures under the same training and evaluation configurations.
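Training CNN-based coding tools on such a database typically begins by cutting frames into fixed-size patch pairs. A minimal sketch of that step, where the 64×64 patch size, non-overlapping stride and frame dimensions are illustrative assumptions, not the BVI-DVC specification:

```python
# Illustrative sketch (not the BVI-DVC pipeline): extracting training
# patches from a single frame for a CNN-based coding tool.
import numpy as np

def extract_patches(frame, patch=64, stride=64):
    """Return non-overlapping patch arrays tiled over a 2-D frame."""
    h, w = frame.shape[:2]
    return [frame[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]

frame = np.zeros((270, 480), dtype=np.uint8)  # e.g. a 270p luma plane
patches = extract_patches(frame)
print(len(patches))  # 4 rows x 7 cols = 28 patches
```

In practice the same crop positions would be applied to both the distorted input and the pristine target so the pairs stay co-located.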
[DOWNLOAD] all videos from University of Bristol Research Data Storage Facility.
If this content has been mentioned in a research publication, please give credit to the University of Bristol, by referencing:
The awards will be sponsored by Facebook and Netflix.
Low-light scenes often come with acquisition noise, which not only disturbs viewers but also makes video compression challenging. Such videos are often encountered in cinema, as a result of an artistic choice or the nature of a scene. Other examples include shots of wildlife (e.g. Mobula rays at night in Blue Planet II), concerts and shows, surveillance camera footage, and more. Inspired by all the above, we are organising a challenge on encoding low-light captured videos. This challenge aims to identify technology that improves the perceptual quality of compressed low-light videos beyond the current state-of-the-art performance of the most recent coding standards, such as HEVC, AV1, VP9 and VVC. Moreover, it offers a good opportunity for experts in both video coding and image enhancement to address this problem. A series of subjective tests will be part of the evaluation, and their results can also be used to study the trade-off between artistic intent and viewers' preferences, for example in mystery movies and dimly lit investigation scenes.
Participants will be requested to deliver bitstreams with pre-defined maximum target rates for a given set of sequences, a short report describing their contribution, and a software executable that runs the proposed methodology and reconstructs the decoded videos, all by the given timeline. Participants are also encouraged to submit a paper for publication in the proceedings, and the best performers should be prepared to present a summary of the underlying technology during the ICME session. The organisers will cross-validate the submissions and perform subjective tests to rank participant contributions.
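Meeting a pre-defined maximum target rate reduces to a simple check on the delivered bitstream. A hedged sketch, where the file size, clip duration and target rate are all illustrative values, not the actual challenge settings:

```python
# Hypothetical helper: verify that a bitstream's average bitrate stays
# within a maximum target rate. All numbers below are illustrative.
def bitrate_kbps(num_bytes, duration_s):
    """Average bitrate of a bitstream in kilobits per second."""
    return num_bytes * 8 / duration_s / 1000

size_bytes = 1_500_000   # size of the encoded bitstream on disk
duration_s = 10.0        # clip duration in seconds
target_kbps = 1500       # an assumed maximum target rate

rate = bitrate_kbps(size_bytes, duration_s)
print(f"{rate:.1f} kbps, within target: {rate <= target_kbps}")
```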
Expression of interest to participate in the Challenge: 10/12/2019 (extended from 29/11/2019)
Availability of test sequences for participants: upon registration (previously 01/12/2019)
Availability of anchor bitstreams and software package: 15/12/2019
Submission of encoded material: 27/03/2020 (extended from 13/03/2020)
Submission of Grand Challenge Papers: 27/03/2020 (extended from 13/03/2020)
Host: The challenge will be organised by Dr. Nantheera Anantrasirichai, Dr. Paul Hill, Dr. Angeliki Katsenou, Ms Alexandra Malyugina, and Dr. Fan Zhang, Visual Information Lab, University of Bristol, UK.
Fan Zhang, Angeliki Katsenou, Mariana Afonso, Goce Dimitrov and David Bull
In this paper, the performance of three state-of-the-art video codecs: the High Efficiency Video Coding (HEVC) Test Model (HM), AOMedia Video 1 (AV1) and the Versatile Video Coding Test Model (VTM), is evaluated using both objective and subjective quality assessments. Nine source sequences were carefully selected to offer both diversity and representativeness, and different resolution versions were encoded by all three codecs at pre-defined target bitrates. The compression efficiency of the three codecs is evaluated using two commonly used objective quality metrics, PSNR and VMAF. The subjective quality of their reconstructed content is also evaluated through psychophysical experiments. Furthermore, HEVC and AV1 are compared within a dynamic optimization framework (convex hull rate-distortion optimization) across resolutions over a wider bitrate range, using both objective and subjective evaluations. Finally, the computational complexities of the three tested codecs are compared. The subjective assessments indicate that, for the tested versions, there is no significant difference between AV1 and HM, while the tested VTM version shows significant enhancements. The selected source sequences, compressed video content and associated subjective data are available online, offering a resource for compression performance evaluation and objective video quality assessment.
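The convex-hull step mentioned above pools (bitrate, quality) points from encodes at several resolutions and keeps only the upper convex hull, i.e. the best achievable quality at each rate. A minimal sketch, with invented RD points rather than measurements from this study:

```python
# Illustrative sketch of convex-hull rate-distortion selection across
# resolutions (Andrew's monotone-chain upper hull). Numbers are invented.
def upper_convex_hull(points):
    """Upper hull of (rate, quality) points, returned with rates ascending."""
    hull = []
    for p in sorted(points):
        # Pop the last point while it falls on or below the new chord.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

rd_points = [(5000, 40), (10000, 44),   # e.g. 2160p encodes
             (2000, 38), (5000, 41),    # 1080p encodes
             (500, 33), (2000, 37)]     # 540p encodes
print(upper_convex_hull(rd_points))
```

Each surviving hull point implicitly records which resolution the encoder should switch to at that rate.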
Parts of this work have been presented in the IEEE International Conference on Image Processing (ICIP) 2019 in Taipei and in the Alliance for Open Media (AOM) Symposium 2019 in San Francisco.
We present a new video compression framework (ViSTRA2) which exploits adaptation of spatial resolution and effective bit depth, down-sampling these parameters at the encoder based on perceptual criteria, and up-sampling at the decoder using a deep convolutional neural network. ViSTRA2 has been integrated with the reference software of both HEVC (HM 16.20) and VVC (VTM 4.01), and evaluated under the Joint Video Exploration Team Common Test Conditions using the Random Access configuration. Our results show consistent and significant compression gains against HM and VTM based on Bjøntegaard Delta measurements, with average BD-rate savings of 12.6% (PSNR) and 19.5% (VMAF) over HM and 5.5% (PSNR) and 8.6% (VMAF) over VTM.
BD-rate results of ViSTRA2 when HM 16.20 was employed as host codec.
BD-rate results of ViSTRA2 when VTM 4.01 was employed as host codec.
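BD-rate figures such as those above follow Bjøntegaard's method: fit log-rate as a polynomial of quality for each codec and average the horizontal gap between the two curves. A sketch of that calculation with invented RD points (not the ViSTRA2 results):

```python
# Sketch of the Bjøntegaard Delta-rate calculation: cubic fits of
# log(rate) vs quality, integrated over the shared quality range.
import numpy as np

def bd_rate(rates_ref, qual_ref, rates_test, qual_test):
    """Average bitrate difference (%) of a test codec vs a reference."""
    p_ref = np.polyfit(qual_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(qual_test, np.log(rates_test), 3)
    lo = max(min(qual_ref), min(qual_test))
    hi = min(max(qual_ref), max(qual_test))
    # Integrate both fitted log-rate curves over the overlap interval.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100

# Invented example: the test codec needs 10% less rate at every quality.
ref = ([1000, 2000, 4000, 8000], [34.0, 37.0, 40.0, 43.0])
test = ([900, 1800, 3600, 7200], [34.0, 37.0, 40.0, 43.0])
print(f"BD-rate: {bd_rate(*ref, *test):.1f}%")  # -10.0%
```

A negative BD-rate means the test codec achieves the same quality at a lower bitrate.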
This page introduces our work on rate-distortion optimisation using adaptive Lagrange multipliers. In current standardised hybrid video encoders, the Lagrange multiplier determination model is a key component of rate-distortion optimisation. It originated some 20 years ago, based on an entropy-constrained high-rate approximation and experimental results obtained using an H.263 reference encoder on limited test material. In this work, we conducted a comprehensive analysis of the results of a Lagrange multiplier selection experiment carried out on various video content using H.264/AVC and HEVC reference encoders. These results show that the original Lagrange multiplier selection methods, employed in both video encoders, are able to achieve optimum rate-distortion performance for I and P frames, but fail to perform well for B frames. A relationship is identified between the optimum Lagrange multipliers for B frames and distortion information obtained from the experimental results, leading to a novel Lagrange multiplier determination approach, which adaptively predicts the optimum Lagrange multiplier for B frames based on the distortion statistics of recent reconstructed frames. After integration into both H.264/AVC and HEVC reference encoders, this approach was evaluated on 36 test sequences with various resolutions and differing content types. The results show consistent bitrate savings for various hierarchical B frame configurations with minimal additional complexity: BD-rate savings average approximately 3% when constant QP values are used for all frames, and 0.5% when non-zero QP offset values are employed for different B frame hierarchical levels.
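For context, the Lagrange multiplier enters mode decision through the cost J = D + λ·R, with λ tied to QP by the well-known hybrid-codec model λ ∝ 2^((QP−12)/3). A minimal sketch, where the scaling constant 0.85 and the candidate modes are illustrative, not values from this work:

```python
# Sketch of Lagrangian rate-distortion optimisation in a hybrid encoder.
# The 0.85 constant and all candidate numbers are illustrative only.
def lagrange_multiplier(qp, c=0.85):
    """Conventional lambda-QP model: lambda = c * 2^((QP - 12) / 3)."""
    return c * 2 ** ((qp - 12) / 3)

def best_mode(candidates, lam):
    """Pick the mode minimising J = D + lambda * R.

    candidates: list of (name, distortion, rate_bits) tuples."""
    return min(candidates, key=lambda m: m[1] + lam * m[2])

lam = lagrange_multiplier(32)  # a high QP gives a large lambda
modes = [("intra", 120.0, 300), ("inter", 150.0, 40), ("skip", 400.0, 2)]
print(best_mode(modes, lam)[0])  # skip: cheap bits dominate at high QP
```

The adaptive approach described above replaces the fixed model for B frames with a prediction driven by recent distortion statistics; the fixed model is shown here only as the baseline being improved upon.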
 Fan Zhang and David R. Bull, “Rate-distortion Optimization Using Adaptive Lagrange Multipliers”, IEEE Trans. on CSVT, accepted in 2018.