IEEE PCS 2024 Learned Image and Video Coding: Hype or Hope

Professor David Bull has been invited to be a plenary panel member for the "Learned Image and Video Coding: Hype or Hope" symposium panel.

The Picture Coding Symposium (PCS) is an international forum devoted to advances in visual data coding. Established in 1969, it has the longest history of any conference in this area. The 37th event in the series, PCS 2024, will convene in Taichung, Taiwan. Taichung is centrally located in the western half of Taiwan. It is known as an art and cultural hub of Taiwan, being home to many national museums, historic attractions, art installations, and design boutiques.

As well as Dave Bull, the other panelists include:

Prof. Dr.-Ing. Joern Ostermann, member of the Senate of Leibniz Universität Hannover.

Prof. Dr.-Ing. Jens-Rainer Ohm, Chair of the Institute of Communication Engineering at RWTH Aachen University, Germany.

Prof. C.-C. Jay Kuo, Ming Hsieh Chair in Electrical and Computer Engineering-Systems at the University of Southern California.

Dr. Yu-Wen Huang, deputy director of the video coding research team at MediaTek.

Dr. Elena Alshina, Chief Video Scientist, Audiovisual Technology Lab Director, and Media Codec and Standardization Lab Director at Huawei Technologies.

MyWorld / VI-Lab team awarded 1st prize in an international competition on visual quality assessment

We are pleased to share that our team BVI_VQA has been awarded 1st place in the Video Perception track of the 6th Challenge on Learned Image Compression.

 
CLIC 2024, the 6th Challenge on Learned Image Compression, aimed to advance the field of image and video compression using machine learning and computer vision. It also provided an opportunity to evaluate and compare end-to-end trained approaches against classical approaches.
 
This challenge was jointly organized by researchers from companies such as Google, Netflix, Apple, Microsoft and Amazon.
 
Our team was ranked 1st place on the Video Perception leaderboard.

Immersive Futures Lab at SXSW 2024

10-12 March 2024, 5th Floor, Fairmont Hotel, Austin, TX

South by Southwest (SXSW) is an annual gathering of parallel film, interactive media, and music festivals and conferences, held each March in Austin, Texas.

This year, MyWorld delegates exhibited work as part of the Immersive Futures Lab at the festival. The SXSW showcase included live demos and prototypes with representatives from two XR development programmes: MyWorld (UK) and Xn Québec (Canada).

The Lab at SXSW 2024 brought cutting-edge projects supported by MyWorld (UK) and Xn Québec (Canada), giving their creators a platform to put exciting prototypes into the hands of new audiences and tell the stories behind their work.

The MyWorld delegation included the following projects:

  • Studio 5
  • Game Conscious Dialogue
  • Inside Mental Health
  • Lux Aeterna

MyWorld Scholarship: Deep Video Coding

About the Project

Video technology is now pervasive, with mobile video, UHDTV, video conferencing, and surveillance all underpinned by efficient signal representations. As one of the most important research topics in video processing, compression is crucial in encoding high quality videos for transmission over band-limited channels.

The last three decades have seen impressive performance improvements in standardised video coding algorithms. The latest standard, VVC, and the new royalty-free codec, AOM/AV1, are expected to achieve 30-50% gains in coding performance over HEVC. However, this is far from satisfactory considering the large amount of video data consumed every day.

Inspired by recent breakthroughs in artificial intelligence, in particular deep learning techniques developed for video processing applications, this PhD project will investigate novel deep learning-based video coding tools, network architectures and perceptual loss functions for modern codecs.
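To give a flavour of what a perceptual loss function can look like, the minimal sketch below combines a pixel-wise MSE term with a distance between pretrained VGG-16 feature maps. This is a generic, illustrative formulation rather than the specific method this project will develop; the PerceptualLoss class, the layer cut-off and the feature_weight value are assumptions made purely for the example, and inputs are assumed to be (N, 3, H, W) tensors in [0, 1] (ImageNet normalisation is omitted for brevity).

# Illustrative perceptual loss sketch (not the project's actual method).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self, feature_weight=0.1):
        super().__init__()
        # Early VGG-16 layers used as a fixed (frozen) feature extractor.
        self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()
        self.feature_weight = feature_weight

    def forward(self, reconstructed, original):
        # Pixel-wise distortion plus a feature-space distance term.
        pixel_loss = self.mse(reconstructed, original)
        feature_loss = self.mse(self.features(reconstructed), self.features(original))
        return pixel_loss + self.feature_weight * feature_loss

# Example usage: compare a decoded frame against its source.
loss_fn = PerceptualLoss()
decoded, source = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
print(loss_fn(decoded, source).item())

Replacing or augmenting purely pixel-wise distortion with feature-based terms of this kind is one common way to steer learned codecs towards perceptually better reconstructions.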

This project is funded by the MyWorld UKRI Strength in Places Programme.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: A background understanding in one or more of the following:

Video compression

Artificial intelligence / Machine Learning / Deep Learning

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded): https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions: sceem-pgr-admissions@bristol.ac.uk

Funding Notes

The award provides a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23) and covers tuition fees at the UK student rate. Funding is subject to eligibility status and confirmation of award.


To be treated as a home student, candidates must meet one of these criteria:

  • be a UK national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

MyWorld PhD Scholarship: Volumetric Video Compression

About the Project

Among all video content, one of the areas that has grown most significantly in recent years is content based on augmented and virtual reality (AR and VR) technologies. These technologies have the potential for further major growth, and developments in displays, interactive equipment, mobile networks, edge computing, and compression are likely to facilitate this in the coming years.

A key new format that underpins the development of these technologies is referred to as volumetric video, with commonly used representations including point clouds, multi-view + depth, and equirectangular formats. Various volumetric video codecs have been developed to compress these formats for transmission or storage. To present volumetric video content, the compressed data is decoded and post-processed using a synthesiser/renderer, which enables 3DoF/6DoF viewing on AR or VR devices.
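As a simple illustration of one idea behind point cloud compression, the sketch below quantises XYZ coordinates onto a voxel grid and keeps only the occupied voxels, similar in spirit to the geometry quantisation step found in point cloud codecs such as MPEG G-PCC. The function names, the voxel size and the random test cloud are illustrative assumptions for this example only, not part of any specific codec or of this project's pipeline.

# Illustrative geometry quantisation sketch for point clouds.
import numpy as np

def voxelise(points: np.ndarray, voxel_size: float = 0.01) -> np.ndarray:
    """Quantise (N, 3) float coordinates to integer voxel indices and deduplicate."""
    voxels = np.floor(points / voxel_size).astype(np.int32)
    return np.unique(voxels, axis=0)

def reconstruct(voxels: np.ndarray, voxel_size: float = 0.01) -> np.ndarray:
    """Map voxel indices back to coordinates at the voxel centres (lossy)."""
    return (voxels.astype(np.float32) + 0.5) * voxel_size

# Example: a random point cloud of 100,000 points inside a unit cube.
cloud = np.random.rand(100_000, 3).astype(np.float32)
occupied = voxelise(cloud, voxel_size=0.02)
decoded = reconstruct(occupied, voxel_size=0.02)
print(f"{len(cloud)} input points -> {len(occupied)} occupied voxels")

Larger voxel sizes reduce the number of occupied voxels, and hence the bitrate, at the cost of geometric fidelity; practical codecs then entropy-code the occupied voxels, typically via an octree. Trading off rate against perceived quality of the rendered output is exactly the kind of problem this project addresses with learned methods.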

In this context, this 3.5-year PhD project will focus on two essential stages within this workflow: volumetric video compression and post-processing. Inspired by recent advances in deep video compression and rendering, we will research novel AI-based production workflows for volumetric video content that significantly improve the coding efficiency and the perceptual quality of the final rendered content.

This project is funded by the MyWorld UKRI Strength in Places Programme at the University of Bristol. It fits well within one of the core research areas outlined in the MyWorld programme: video production and communications for immersive content. The student working on this project will gain experience with immersive video production workflows, from capture and contribution to live editorial production and delivery at scale to a growing variety of XR-capable devices.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: A background understanding in one or more of the following:

Video compression

3D Computer vision

Artificial intelligence / Machine Learning / Deep Learning

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded): https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions: sceem-pgr-admissions@bristol.ac.uk

Funding Notes

The award provides a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23) with an industrial top-up, and covers tuition fees at the UK student rate. Funding is subject to eligibility status and confirmation of award.
To be treated as a home student, candidates must meet one of these criteria:

  • be a UK national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

MyWorld Postdoctoral Research Associate Posts – UKRI Strength in Places Programme

The Role

The newly established MyWorld research programme, led by the University of Bristol, is a flagship five-year, £46m R&D programme collaborating with numerous industrial and academic organisations. The MyWorld Creative Technologies Hub is now expanding in line with its mission to grow the West of England’s Creative Industries Cluster with major investments in new facilities and staff at all levels.

We are now offering unique opportunities for four Postdoctoral Research Associates in:

  1. AI methods for Video Post-Production
  2. Robot Vision for Creative Technologies
  3. Perceptually Optimised Video Compression (sponsored by our collaborator, Netflix, in Los Gatos, USA).
  4. Visual Communications

Contract and Salary

All four posts are based in the Faculty of Engineering, University of Bristol, and the salary range is Grade I (£34,304 – £38,587 per annum) or Grade J (£38,587 – £43,434 per annum).

Application Information

We anticipate that candidates will possess a good honours degree along with a PhD in a related discipline, or extensive relevant industrial/commercial experience. We expect a high standard of spoken and written English and the ability to work effectively both independently and as part of a team.

Please follow the link provided for each post to access the detailed job description and the application system.