Demand for higher quality and greater quantities of visual content is increasing daily, creating a major tension between available network capacity and the required video bit rate. Network operators, content creators and service providers all need to transmit the highest quality video at the lowest bit rate, and this can only be achieved by exploiting perceptual redundancy through video compression. VI-Lab has worked with key partners across industry and academia over many years, creating many state-of-the-art innovations in image and video compression.
The primary advances addressed by VI-Lab include:
More immersive applications and format extensions: The video parameter space has been extended to support the capture, processing and distribution of formats such as HDR and UHDTV, as well as VR, AR and volumetric content. All of these pose greater challenges for video compression, with heightened sensitivity to coding artifacts demanding even better rate-quality performance from the video codec. VI-Lab is working with partners to develop the next generation of volumetric compression systems for these formats.
The pervasion of machine (deep) learning: Work in VI-Lab has demonstrated the potential of deep learning for enhancing video compression performance. Machine learning algorithms have been used to achieve state-of-the-art performance by optimising existing compression tools, creating new coding architectures, jointly optimising compression and classification, and enhancing quality metric performance.
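Objective metrics such as PSNR are the conventional baseline that learned quality metrics aim to improve upon. As a minimal illustrative sketch (not VI-Lab's own metric work), PSNR between a reference and a distorted frame can be computed as:

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio between two same-shaped images, in dB."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a flat 4x4 frame with one distorted pixel.
ref = np.full((4, 4), 128, dtype=np.uint8)
dist = ref.copy()
dist[0, 0] = 138  # introduce a small error
print(round(psnr(ref, dist), 2))  # → 40.17
```

Metrics of this kind correlate imperfectly with human perception, which is precisely why learned, perceptually driven alternatives are an active research direction.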
Adaptive video streaming over content delivery networks: Approximately 80% of all internet traffic is now video. This content must be packaged, adapted and delivered to achieve the best rate-quality performance for consumers. Major innovations and standards have emerged in recent years to support Dynamic Adaptive Streaming (e.g. MPEG-DASH), and VI-Lab has made significant contributions to optimising these through content-aware dynamic optimisation.
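In an adaptive streaming client, each segment is requested at one of several pre-encoded bitrates. A simple throughput-based heuristic, sketched below, picks the highest rung of the bitrate ladder that fits within a safety margin of the measured throughput; the ladder values and margin are illustrative assumptions, not VI-Lab's actual algorithm:

```python
def select_representation(ladder_kbps, measured_throughput_kbps, safety_margin=0.8):
    """Pick the highest bitrate rung that fits within a fraction of measured throughput."""
    budget = measured_throughput_kbps * safety_margin
    feasible = [rate for rate in sorted(ladder_kbps) if rate <= budget]
    # Fall back to the lowest rung if even that exceeds the budget.
    return feasible[-1] if feasible else min(ladder_kbps)

ladder = [300, 750, 1500, 3000, 6000]  # example bitrate ladder (kbps)
print(select_representation(ladder, 2400))  # → 1500 (budget is 1920 kbps)
```

Content-aware approaches go further than this rate-only heuristic by considering the perceptual quality each rung delivers for the specific content being streamed.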
The emergence of new video coding standardisation initiatives: New video coding standards from organisations such as MPEG, ITU and the Alliance for Open Media introduce new tools and attributes. VI-Lab has worked with various partners to compare and optimise these.
New and extended databases: In recent years, an increased need has emerged for larger and more comprehensive datasets for training and comparing compression technologies. This is particularly true when deep learning methods are employed. VI-Lab has authored many important databases that have been widely used across our community and in the development of video coding standards. These include BVI-HomTex, BVI-HD, BVI-HFR, BVI-DVC and BVI-SynTex.
Please follow the links below to find out more about our work:
- Content Gnostic Optimisation for Adaptive Streaming
- Understanding video textures – a basis for video compression (BVI-HOMTEX)
- ViSTRA – deep video compression
- Post Processing and In-loop Filtering
- Rate Quality Optimisation – Lagrangian Optimisation
- Codec comparisons
- 5G Edge XR
- BVI datasets
- A training database for deep video compression (BVI-DVC)
- Codec evaluation using synthetic datasets (BVI-SYNTEX)
- Learning-Optimal Video Compression