Comparing VVC, HEVC and AV1 using Objective and Subjective Assessments

Fan Zhang, Angeliki Katsenou, Mariana Afonso, Goce Dimitrov and David Bull

ABSTRACT

In this paper, the performance of three state-of-the-art video codecs, the High Efficiency Video Coding (HEVC) Test Model (HM), AOMedia Video 1 (AV1) and the Versatile Video Coding Test Model (VTM), is evaluated using both objective and subjective quality assessments. Nine source sequences were carefully selected to offer both diversity and representativeness, and different resolution versions were encoded by all three codecs at pre-defined target bitrates. The compression efficiency of the three codecs is evaluated using two commonly used objective quality metrics, PSNR and VMAF. The subjective quality of their reconstructed content is also evaluated through psychophysical experiments. Furthermore, HEVC and AV1 are compared within a dynamic optimization framework (convex hull rate-distortion optimization) across resolutions over a wider bitrate range, using both objective and subjective evaluations. Finally, the computational complexities of the three tested codecs are compared. The subjective assessments indicate that, for the tested versions, there is no significant difference between AV1 and HM, while the tested VTM version shows significant enhancements. The selected source sequences, compressed video content and associated subjective data are available online, offering a resource for compression performance evaluation and objective video quality assessment.
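To illustrate the objective part of the evaluation, per-frame PSNR and its sequence-level average can be computed as below. This is a minimal sketch using NumPy with function names of our own choosing, not code from the evaluation framework; VMAF scores require the separate libvmaf tool and are not reproduced here.

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio between two frames (e.g. 8-bit luma planes)."""
    reference = np.asarray(reference, dtype=np.float64)
    distorted = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((reference - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)

def sequence_psnr(ref_frames, dist_frames):
    """Sequence-level PSNR, taken here as the mean of per-frame values."""
    return float(np.mean([psnr(r, d) for r, d in zip(ref_frames, dist_frames)]))
```

For example, an 8-bit frame uniformly offset by 16 levels from its reference has an MSE of 256 and therefore a PSNR of about 24 dB.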

Parts of this work have been presented in the IEEE International Conference on Image Processing (ICIP) 2019 in Taipei and in the Alliance for Open Media (AOM) Symposium 2019 in San Francisco.

SOURCE SEQUENCES

DATABASE

[DOWNLOAD] subjective data.

[DOWNLOAD] all videos from University of Bristol Research Data Storage Facility.

If this content has been mentioned in a research publication, please give credit to the University of Bristol by referencing the following papers:

[1] A. V. Katsenou, F. Zhang, M. Afonso and D. R. Bull, “A Subjective Comparison of AV1 and HEVC for Adaptive Video Streaming,” 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 2019, pp. 4145-4149.

[2] F. Zhang, A. V. Katsenou, M. Afonso, G. Dimitrov and D. R. Bull, “Comparing VVC, HEVC and AV1 using Objective and Subjective Assessments”, arXiv:2003.10282 [eess.IV], 2020.

ViSTRA: Video Compression based on Resolution Adaptation

Fan Zhang, Mariana Afonso and David Bull

ABSTRACT

We present a new video compression framework (ViSTRA2) which exploits adaptation of spatial resolution and effective bit depth, down-sampling these parameters at the encoder based on perceptual criteria, and up-sampling at the decoder using a deep convolutional neural network. ViSTRA2 has been integrated with the reference software of both HEVC (HM 16.20) and VVC (VTM 4.01), and evaluated under the Joint Video Exploration Team Common Test Conditions using the Random Access configuration. Our results show consistent and significant compression gains against both HM and VTM based on Bjøntegaard Delta measurements, with average BD-rate savings of 12.6% (PSNR) and 19.5% (VMAF) over HM, and 5.5% (PSNR) and 8.6% (VMAF) over VTM.
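The Bjøntegaard Delta rate used to report these savings can be sketched as follows. This is a minimal illustration of the standard BD-rate calculation, fitting cubic polynomials of log-bitrate against quality and integrating over the overlapping quality interval; the function name and interface are ours.

```python
import numpy as np

def bd_rate(rates_anchor, quality_anchor, rates_test, quality_test):
    """Average bitrate difference (%) of test vs. anchor at equal quality."""
    log_r_anchor = np.log10(rates_anchor)
    log_r_test = np.log10(rates_test)
    # Fit cubic polynomials of log-rate as a function of quality (e.g. PSNR).
    p_anchor = np.polyfit(quality_anchor, log_r_anchor, 3)
    p_test = np.polyfit(quality_test, log_r_test, 3)
    # Integrate both curves over the overlapping quality interval.
    lo = max(min(quality_anchor), min(quality_test))
    hi = min(max(quality_anchor), max(quality_test))
    int_anchor = np.polyval(np.polyint(p_anchor), hi) - np.polyval(np.polyint(p_anchor), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_anchor) / (hi - lo)
    return (10 ** avg_log_diff - 1) * 100.0
```

As a sanity check, a test codec that reaches the same quality points at exactly 10% lower bitrate yields a BD-rate of −10%.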

PROPOSED ALGORITHM

RESULTS

BD-rate results of ViSTRA2 when HM 16.20 was employed as host codec.

BD-rate results of ViSTRA2 when VTM 4.01 was employed as host codec.

REFERENCES

[1] F. Zhang, M. Afonso and D. R. Bull, “ViSTRA2: Video Coding using Spatial Resolution and Effective Bit Depth Adaptation”, arXiv preprint arXiv:1911.02833, 2019.

[2] M. Afonso, F. Zhang and D. R. Bull, “Video Compression based on Spatio-Temporal Resolution Adaptation”, IEEE T-CSVT (Letter), 2019.

[3] M. Afonso, F. Zhang, A. Katsenou, D. Agrafiotis and D. Bull, “Low Complexity Video Coding Based on Spatial Resolution Adaptation”, IEEE ICIP, 2017.

Rate-distortion Optimization Using Adaptive Lagrange Multipliers

Fan Zhang and David Bull

ABSTRACT

This page introduces our work on rate-distortion optimization using adaptive Lagrange multipliers. In current standardized hybrid video encoders, the Lagrange multiplier determination model is a key component in rate-distortion optimization. This model originated some 20 years ago, based on an entropy-constrained high-rate approximation and experimental results obtained using an H.263 reference encoder on limited test material. In this work, we conducted a comprehensive analysis of the results of a Lagrange multiplier selection experiment performed on various video content using H.264/AVC and HEVC reference encoders. These results show that the original Lagrange multiplier selection methods, employed in both video encoders, are able to achieve optimum rate-distortion performance for I and P frames, but fail to perform well for B frames. A relationship is then identified between the optimum Lagrange multipliers for B frames and distortion information obtained from the experimental results, leading to a novel Lagrange multiplier determination approach. The proposed method adaptively predicts the optimum Lagrange multiplier for B frames based on the distortion statistics of recently reconstructed frames. After integration into both H.264/AVC and HEVC reference encoders, this approach was evaluated on 36 test sequences with various resolutions and differing content types. The results show consistent bitrate savings for various hierarchical B frame configurations with minimal additional complexity: BD savings average approximately 3% when constant QP values are used for all frames, and 0.5% when non-zero QP offset values are employed for different B frame hierarchical levels.
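For context, the conventional model questioned in this work, and the basic mode decision it drives, can be sketched as below. This is a simplified illustration of the widely used λ = α·2^((QP−12)/3) approximation and of minimizing J = D + λR; the adaptive B-frame prediction itself is described in the papers below, and the names here are ours.

```python
def lagrange_multiplier(qp, alpha=0.85):
    """Conventional high-rate approximation used in H.264/HEVC reference encoders
    (before frame-type and hierarchy weighting factors are applied)."""
    return alpha * 2.0 ** ((qp - 12) / 3.0)

def rdo_select(candidates, lam):
    """Pick the (distortion, rate) candidate minimizing J = D + lambda * R."""
    return min(candidates, key=lambda c: c[0] + lam * c[1])
```

Note how the choice of λ steers the decision: a small λ favours the low-distortion candidate, a large λ the low-rate one, which is why a poorly matched λ model costs compression efficiency.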


REFERENCES

[1] F. Zhang and D. R. Bull, “Rate-distortion Optimization Using Adaptive Lagrange Multipliers”, IEEE Trans. on CSVT, accepted in 2018.

[2] F. Zhang and D. R. Bull, “An Adaptive Lagrange Multiplier Determination Method for Rate-distortion Optimisation in Hybrid Video Codecs”, IEEE ICIP, 2015.

FRQM: A Frame Rate Dependent Video Quality Metric

Fan Zhang, Alex Mackin and David Bull

ABSTRACT

This page introduces our work on an objective quality metric (FRQM), which characterises the relationship between variations in frame rate and perceptual video quality. The proposed method estimates the relative quality of a low frame rate video with respect to its higher frame rate counterpart through temporal wavelet decomposition, subband combination and spatiotemporal pooling. FRQM was tested alongside six commonly used quality metrics (two of which explicitly relate frame rate variation to perceptual quality) on the publicly available BVI-HFR video database, which spans a diverse range of scenes and frame rates up to 120 fps. Results show that FRQM offers significant improvement over all other tested quality assessment methods, with relatively low complexity.
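The temporal wavelet decomposition at the heart of FRQM can be illustrated with a one-level Haar transform along the time axis. This is an illustrative sketch under our own naming, not the published implementation: FRQM additionally combines subbands across decomposition levels and pools spatiotemporally, as described in the reference below.

```python
import numpy as np

def temporal_haar(frames):
    """One-level temporal Haar decomposition of a (T, H, W) frame stack
    (T assumed even). Returns the temporal low- and high-pass subbands."""
    frames = np.asarray(frames, dtype=np.float64)
    even, odd = frames[0::2], frames[1::2]
    low = (even + odd) / np.sqrt(2.0)   # temporal approximation
    high = (even - odd) / np.sqrt(2.0)  # temporal detail
    return low, high

def highband_energy(frames):
    """Mean energy of the temporal detail subband: zero for static content,
    large when consecutive frames differ (i.e. high temporal activity)."""
    _, high = temporal_haar(frames)
    return float(np.mean(high ** 2))
```

Reducing the frame rate removes exactly this kind of temporal detail, which is why comparing wavelet subbands of the low and high frame rate versions gives a handle on the perceptual impact.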

PROPOSED ALGORITHM

SOURCE CODE DOWNLOAD

[DOWNLOAD] Matlab code

REFERENCES

[1] F. Zhang, A. Mackin and D. R. Bull, “A Frame Rate Dependent Video Quality Metric based on Temporal Wavelet Decomposition and Spatiotemporal Pooling”, IEEE ICIP, 2017.