MyWorld: Visual Computing and Visual Communications Research Internships 2025

About

We are excited to announce that two funded summer internships will be available in summer 2025, supervised by academics at the Visual Information Lab, University of Bristol. Each intern will work full-time for seven weeks on cutting-edge research in image and video processing, with support from senior researchers in the group.

These internship projects are supported by MyWorld, a creative technology programme in the UK’s West of England region, funded by £30 million from UK Research and Innovation’s (UKRI) Strength in Places Fund (SIPF).

Eligibility and Assessment

To be eligible for a summer internship, students must meet the following criteria:

  • Be a full-time student at the University of Bristol.
  • Be in their second or penultimate year of study (not in their first or final year).
  • Be able to work in person at the University of Bristol during the internship period.
  • Have a strong interest in postgraduate research, particularly in image and video technology.

In line with the University’s commitment to promoting equity and diversity, we particularly welcome and encourage applications from students whose ethnicity, gender, and/or background are currently underrepresented in our postgraduate community.

Students will be assessed on:

  • Academic record
  • Interest in postgraduate research

Project 1

Title: Implicit video compression based on generative models

Description:
This project will leverage various generative models to efficiently represent and compress standard and immersive video signals. Unlike traditional compression techniques, which rely on explicit encoding and decoding processes, this type of approach is expected to learn a compact, latent representation of video content, and then reconstruct high-quality video frames from this compressed representation. This approach aims to achieve better compression ratios while maintaining high visual fidelity, making it particularly promising for applications in video streaming, storage, and real-time communication.

Related works:
[1] Kwan, Ho Man, et al., “HiNeRV: Video compression with hierarchical encoding-based neural representation,” NeurIPS 2023. [Paper]
[2] Gao, Ge, et al., “PNVC: Towards Practical INR-based Video Compression,” arXiv:2409.00953, 2024. [Paper]
[3] Blattmann, Andreas, et al., “Align your latents: High-resolution video synthesis with latent diffusion models,” CVPR 2023. [Paper]
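
For illustration only, the sketch below shows the core idea behind implicit (INR-based) video representation in the spirit of NeRV/HiNeRV [1]: a small network is overfitted to a clip so that it maps a frame index to the frame itself, and the (quantised) trained weights then act as the compressed representation. This is a minimal PyTorch sketch with assumed toy sizes and hyperparameters, not the project's actual method.

```python
# Minimal sketch, illustrative only: an implicit neural representation that
# maps a normalised frame index t to an RGB frame. Compression would then
# amount to quantising and entropy coding the trained weights.
import torch
import torch.nn as nn

class TinyNeRV(nn.Module):
    def __init__(self, height=64, width=64, hidden=256, n_freqs=8):
        super().__init__()
        self.height, self.width, self.n_freqs = height, width, n_freqs
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 3 * height * width), nn.Sigmoid(),
        )

    def positional_encoding(self, t):
        # t: (B,) normalised frame indices in [0, 1]
        freqs = (2.0 ** torch.arange(self.n_freqs, device=t.device)) * torch.pi
        angles = t[:, None] * freqs[None, :]
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

    def forward(self, t):
        out = self.mlp(self.positional_encoding(t))
        return out.view(-1, 3, self.height, self.width)

# Overfit the network to a short clip; the trained weights act as the bitstream.
video = torch.rand(16, 3, 64, 64)              # placeholder clip (T, C, H, W)
t = torch.linspace(0.0, 1.0, video.shape[0])
model = TinyNeRV()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(t), video)
    loss.backward()
    optimiser.step()
```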

Supervisor:
Please contact Dr. Aaron Zhang (fan.zhang@bristol.ac.uk) for any inquiries.

Project 2

Title: Zero-shot learning for video denoising

Description:
This project aims to advance video denoising through the adoption of zero-shot learning techniques, eliminating the need for conventional noisy-clean training pairs. By leveraging deep learning models that can generalise from unrelated data, the project seeks to develop an innovative denoising framework that can effectively improve video quality under a variety of conditions without prior specific examples. This approach not only promises significant advancements in video processing technology but also extends potential applications in real-time broadcasting, surveillance, and content creation, where optimal video clarity is essential.

Related works:
[1] Y. Mansour and R. Heckel, “Zero-Shot Noise2Noise: Efficient Image Denoising without any Data”, CVPR 2023. [Paper]
[2] Y. Shi, et al., “ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images”, CVPR 2024. [Paper]
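
As a purely illustrative starting point, the simplified single-frame sketch below follows the spirit of Zero-Shot Noise2Noise [1]: the only training signal is a pair of sub-images sampled from the noisy frame itself, so no clean ground truth or external training data is needed. Network size, iteration count and the symmetric residual loss are assumptions, not the project's intended framework.

```python
# Simplified sketch in the spirit of Zero-Shot Noise2Noise [1]; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def downsample_pair(img):
    # img: (B, C, H, W) -> two half-resolution images built from complementary
    # diagonal pixel pairs within each 2x2 block.
    c = img.shape[1]
    k1 = torch.tensor([[[[0.5, 0.0], [0.0, 0.5]]]], device=img.device).repeat(c, 1, 1, 1)
    k2 = torch.tensor([[[[0.0, 0.5], [0.5, 0.0]]]], device=img.device).repeat(c, 1, 1, 1)
    return F.conv2d(img, k1, stride=2, groups=c), F.conv2d(img, k2, stride=2, groups=c)

denoiser = nn.Sequential(                      # deliberately tiny network
    nn.Conv2d(3, 48, 3, padding=1), nn.ReLU(),
    nn.Conv2d(48, 48, 3, padding=1), nn.ReLU(),
    nn.Conv2d(48, 3, 3, padding=1),
)

noisy = torch.rand(1, 3, 64, 64)               # placeholder noisy frame
optimiser = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(500):
    d1, d2 = downsample_pair(noisy)
    optimiser.zero_grad()
    # Symmetric residual loss: subtracting the predicted noise from one
    # sub-image should recover the other.
    loss = F.mse_loss(d1 - denoiser(d1), d2) + F.mse_loss(d2 - denoiser(d2), d1)
    loss.backward()
    optimiser.step()

denoised = noisy - denoiser(noisy)             # noise prediction at full resolution
```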

Supervisor:
Please contact Dr Pui Anantrasirichai (n.anantrasirichai@bristol.ac.uk) for any inquiries.

Application

  1. Submit your [Application Form] by 31 January 2025.
  2. Shortlisted candidates will be interviewed by 14 February 2025.
  3. Successful students will be notified by 28 February 2025.
  4. Students will then be provided with an internship acceptance form to confirm the information required by TSS for registration by 14 March 2025.

Payment

Students will be paid the National Living Wage for the duration of the internship (£12.21 per hour in 2025), which equates to approximately £428 for a 35-hour week, before any National Insurance or income tax deductions. Please note that payment is made a month in arrears, meaning students will be paid at the end of each month for the hours worked.

IEEE PCS 2024 Learned Image and Video Coding: Hype or Hope

Professor David Bull has been invited to be a member of the plenary panel “Learned Image and Video Coding: Hype or Hope” at PCS 2024.

The Picture Coding Symposium (PCS) is an international forum devoted to advances in visual data coding. Established in 1969, it has the longest history of any conference in this area. The 37th event in the series, PCS 2024, will convene in Taichung, Taiwan. Taichung is centrally located in the western half of Taiwan. It is known as an art and cultural hub of Taiwan, being home to many national museums, historic attractions, art installations, and design boutiques.

As well as David Bull, the other panelists include:

Prof. Dr.-Ing. Joern Ostermann, member of the Senate of Leibniz Universität Hannover.

Prof. Dr.-Ing. Jens-Rainer Ohm, Chair of the Institute of Communication Engineering at RWTH Aachen University, Germany.

Prof. C.-C. Jay Kuo, Ming Hsieh Chair in Electrical and Computer Engineering-Systems at the University of Southern California.

Dr. Yu-Wen Huang, Deputy Director of the video coding research team at MediaTek.

Dr. Elena Alshina, Chief Video Scientist, Audiovisual Technology Lab Director, and Media Codec and Standardization Lab Director at Huawei Technologies.

MyWorld/VI-Lab team awarded 1st prize in an international competition on visual quality assessment

We are pleased to share that our team BVI_VQA has been awarded 1st place in the Video Perception track of the 6th Challenge on Learned Image Compression.

 
CLIC 2024, the 6th Challenge on Learned Image Compression, aimed to advance the field of image and video compression using machine learning and computer vision. It was also an opportunity to evaluate and compare end-to-end trained approaches against classical approaches.
 
This challenge was jointly organized by researchers from companies such as Google, Netflix, Apple, Microsoft and Amazon.
 
Our team ranked first on the Video Perception leaderboard.
Certificate for video perception

Immersive Futures Lab at SXSW 2024

The Immersive Futures Lab at SXSW 2024
10-12 March, 5th Floor, Fairmont Hotel, Austin TX

South by Southwest (SXSW) is an annual gathering of parallel film, interactive media, and music festivals and conferences that takes place in mid-March in Austin, Texas, United States.

This year, MyWorld delegates exhibited work as part of the Immersive Futures Lab at the festival. The SXSW showcase included live demos and prototypes with representatives from two XR development programmes: MyWorld (UK) and Xn Québec (Canada).

The Lab at SXSW 2024 brought cutting-edge projects supported by MyWorld (UK) and Xn Québec (Canada), giving their creators a platform to put exciting prototypes into the hands of new audiences and tell the stories behind their work.

The MyWorld delegation included the following projects:


Studio 5

Game Conscious Dialogue

Inside Mental Health

Lux Aeterna

MyWorld Scholarship: Deep Video Coding

About the Project

Video technology is now pervasive, with mobile video, UHDTV, video conferencing, and surveillance all underpinned by efficient signal representations. As one of the most important research topics in video processing, compression is crucial in encoding high quality videos for transmission over band-limited channels.

The last three decades have seen impressive performance improvements in standardised video coding algorithms. The latest standard, VVC, and the new royalty-free codec, AOM AV1, are expected to achieve 30-50% gains in coding performance over HEVC. However, this is far from satisfactory considering the large amount of video data consumed every day.

Inspired by the recent breakthrough in artificial intelligence, in particular deep learning techniques developed for video processing applications, this PhD project will investigate novel deep learning-based video coding tools, network architectures and perceptual loss functions for modern codecs.
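
As a flavour of what “perceptual loss functions” can mean in practice, the hedged sketch below combines a pixel-fidelity term with distances between frozen VGG-19 features, a common ingredient in perceptually optimised learned coding tools. It is illustrative only: the class name, layer choice and weights are assumptions, not the project's defined methodology.

```python
# Illustrative sketch only: a perceptual loss mixing pixel fidelity with
# distances between frozen VGG-19 features.
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class PerceptualLoss(nn.Module):
    def __init__(self, feature_layer=16, pixel_weight=1.0, feature_weight=0.01):
        super().__init__()
        vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)            # the feature extractor stays frozen
        self.vgg = vgg
        self.pixel_weight = pixel_weight
        self.feature_weight = feature_weight

    def forward(self, reconstructed, original):
        # In practice both inputs would be ImageNet-normalised RGB in [0, 1].
        pixel = nn.functional.mse_loss(reconstructed, original)
        feats = nn.functional.mse_loss(self.vgg(reconstructed), self.vgg(original))
        return self.pixel_weight * pixel + self.feature_weight * feats

# Example: score a placeholder decoded frame against its source.
loss_fn = PerceptualLoss()
decoded, source = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
print(loss_fn(decoded, source).item())
```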

This project is funded by the MyWorld UKRI Strength in Places Programme.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: Background knowledge in one or more of the following:

Video compression

Artificial intelligence / Machine Learning / Deep Learning

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded): https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions at sceem-pgr-admissions@bristol.ac.uk.

Funding Notes

The studentship provides a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23) and covers tuition fees at the UK student rate. Funding is subject to eligibility status and confirmation of award.


To be treated as a home student, candidates must meet one of these criteria:

  • be a UK national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

MyWorld PhD Scholarship: Volumetric Video Compression

About the Project

Among all video content, one of the areas that has grown most significantly over recent years is content based on augmented and virtual reality (AR and VR) technologies. These technologies have the potential for major further growth, and developments in displays, interactive equipment, mobile networks, edge computing, and compression are likely to facilitate this in the coming years.

A key new format that underpins the development of these technologies is volumetric video, with commonly used representations including point clouds, multi-view + depth and equirectangular projections. Various volumetric video codecs have been developed to compress these formats for transmission or storage. To present volumetric video content, the compressed data is decoded and post-processed using a synthesiser/renderer, which enables 3DoF/6DoF viewing on AR or VR devices.

In this context, this 3.5-year PhD project will focus on two essential stages within this workflow: volumetric video compression and post-processing. Inspired by recent advances in deep video compression and rendering, we will research novel AI-based production workflows for volumetric video content to significantly improve both coding efficiency and the perceptual quality of the final rendered content.
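
To illustrate one very early step in such a pipeline, the minimal NumPy sketch below voxelises a point cloud, i.e. quantises point positions onto a regular grid before any octree or attribute coding would take place. The names and parameters are assumptions for illustration, not part of the project specification.

```python
# Minimal sketch, illustrative only: uniform voxelisation of a point cloud,
# the kind of quantisation step that precedes octree/attribute coding in
# point cloud compression pipelines.
import numpy as np

def voxelise(points, voxel_size=0.02):
    # points: (N, 3) float array of xyz positions.
    # Returns unique integer voxel indices (M, 3), with M <= N.
    indices = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(indices, axis=0)

def devoxelise(voxels, voxel_size=0.02):
    # Reconstruct points at voxel centres; the residual is the quantisation error.
    return (voxels.astype(np.float64) + 0.5) * voxel_size

points = np.random.rand(10000, 3)              # placeholder point cloud
voxels = voxelise(points, voxel_size=0.02)
reconstructed = devoxelise(voxels, voxel_size=0.02)
print(points.shape[0], "points ->", voxels.shape[0], "occupied voxels")
```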

This project is funded by the MyWorld UKRI Strength in Places Programme at the University of Bristol. It fits well within one of the core research areas outlined in the MyWorld programme: video production and communications for immersive content. The student working on this project will gain experience of immersive video production workflows, from capture and contribution to live editorial production and delivery at scale to a growing variety of XR-capable devices.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: Background knowledge in one or more of the following:

Video compression

3D Computer vision

Artificial intelligence / Machine Learning / Deep Learning

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded): https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions at sceem-pgr-admissions@bristol.ac.uk.

Funding Notes

The studentship provides a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23), covers tuition fees at the UK student rate, and includes an industrial top-up. Funding is subject to eligibility status and confirmation of award.
To be treated as a home student, candidates must meet one of these criteria:

  • be a UK national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.