MyWorld: Visual Computing and Visual Communications Research Internships 2025

About

We are excited to announce that two funded summer internships will be available in the summer of 2025, supervised by academics at the Visual Information Lab, University of Bristol. Each intern will work full-time for seven weeks on cutting-edge research in image and video processing, with support from senior researchers in the group.

These internship projects are supported by MyWorld, a creative technology programme in the UK’s West of England region, funded by £30 million from UK Research and Innovation’s (UKRI) Strength in Places Fund (SIPF).

Student Eligibility and Assessment

To be eligible for a summer internship, students must meet the following criteria:

  • Be a full-time student at the University of Bristol.
  • Be in their second or penultimate year of study (not in their first or final year).
  • Be able to work in person at the University of Bristol during the internship period.
  • Have a strong interest in postgraduate research, particularly in image and video technology.

In line with the University’s commitment to promoting equity and diversity, we particularly welcome and encourage applications from students whose ethnicity, gender, and/or background are currently underrepresented in our postgraduate community.

Students will be assessed on:

  • Academic record
  • Interest in postgraduate research

Project 1

Title: Implicit video compression based on generative models

Description:
This project will leverage various generative models to efficiently represent and compress standard and immersive video signals. Unlike traditional compression techniques, which rely on explicit encoding and decoding processes, this type of approach learns a compact latent representation of the video content and then reconstructs high-quality video frames from that compressed representation. The aim is to achieve better compression ratios while maintaining high visual fidelity, making the approach particularly promising for applications in video streaming, storage, and real-time communication.
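
As a toy illustration of the implicit (INR-style) idea, and emphatically not the HiNeRV or PNVC architectures cited below, the sketch overfits a small coordinate network to one video so that the trained weights themselves become the video representation; in a real codec these weights would then be pruned, quantised and entropy-coded. All model sizes and training settings here are illustrative assumptions.

```python
# Toy implicit video representation: a coordinate MLP is overfitted to one
# clip, so its weights (after quantisation/entropy coding) act as the bitstream.
# Illustrative only; not the HiNeRV/PNVC architectures cited in Related works.
import torch
import torch.nn as nn

class VideoINR(nn.Module):
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [3] + [hidden] * layers + [3]          # (x, y, t) -> (R, G, B)
        mods = []
        for i in range(len(dims) - 1):
            mods.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                mods.append(nn.GELU())
        self.net = nn.Sequential(*mods)

    def forward(self, coords):                        # coords in [-1, 1]^3
        return torch.sigmoid(self.net(coords))        # RGB in [0, 1]

def fit(video, steps=2000, lr=1e-3, batch=8192):
    """Overfit the INR to one clip. video: (T, H, W, 3) floats in [0, 1]."""
    T, H, W, _ = video.shape
    t, y, x = torch.meshgrid(torch.linspace(-1, 1, T),
                             torch.linspace(-1, 1, H),
                             torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([x, y, t], dim=-1).reshape(-1, 3)
    target = video.reshape(-1, 3)
    model = VideoINR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, coords.shape[0], (batch,))  # random pixel batch
        loss = nn.functional.mse_loss(model(coords[idx]), target[idx])
        opt.zero_grad(); loss.backward(); opt.step()
    return model   # the model weights now *are* the compressed video
```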

Related works:
[1] H. M. Kwan, et al., “HiNeRV: Video Compression with Hierarchical Encoding-based Neural Representation,” NeurIPS 2023.
[2] G. Gao, et al., “PNVC: Towards Practical INR-based Video Compression,” arXiv:2409.00953, 2024.
[3] A. Blattmann, et al., “Align Your Latents: High-Resolution Video Synthesis with Latent Diffusion Models,” CVPR 2023.

Supervisor:
Please contact Dr Aaron Zhang (fan.zhang@bristol.ac.uk) for any inquiries.

Project 2

Title: Zero-shot learning for video denoising

Description:
This project aims to develop video denoising methods based on zero-shot learning, eliminating the need for conventional noisy-clean training pairs. By leveraging deep learning models that can generalise without task-specific examples, the project seeks to build an innovative denoising framework that effectively improves video quality under a variety of conditions. This approach not only promises significant advances in video processing technology but also opens up applications in real-time broadcasting, surveillance, and content creation, where optimal video clarity is essential.
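
To give a flavour of the zero-shot setting, here is a minimal sketch in the spirit of Zero-Shot Noise2Noise [1]: the noisy frame is split into two half-resolution views with fixed 2×2 kernels, and a tiny network is trained so that denoising one view predicts the other, using no clean data at all. The network size and training settings are illustrative assumptions, and the residual consistency loss used in [1] is omitted for brevity.

```python
# Minimal zero-shot single-frame denoiser in the spirit of ZS-N2N [1].
# Two half-resolution noisy views of the same frame supervise each other.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pair_downsample(img):
    """img: (1, C, H, W). Returns two half-resolution noisy views."""
    c = img.shape[1]
    k1 = torch.tensor([[[[0.0, 0.5], [0.5, 0.0]]]]).repeat(c, 1, 1, 1)
    k2 = torch.tensor([[[[0.5, 0.0], [0.0, 0.5]]]]).repeat(c, 1, 1, 1)
    return (F.conv2d(img, k1, stride=2, groups=c),
            F.conv2d(img, k2, stride=2, groups=c))

class TinyDenoiser(nn.Module):
    def __init__(self, c=3, width=48):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, c, 1))

    def forward(self, x):
        return x - self.body(x)          # predict the noise, then remove it

def denoise(noisy, steps=1000, lr=1e-3):
    """noisy: (1, C, H, W) tensor in [0, 1]; returns the denoised frame."""
    model = TinyDenoiser(noisy.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    d1, d2 = pair_downsample(noisy)
    for _ in range(steps):
        loss = F.mse_loss(model(d1), d2) + F.mse_loss(model(d2), d1)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return model(noisy)              # fully convolutional: any resolution
```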

Related works:
[1] Y. Mansour and R. Heckel, “Zero-Shot Noise2Noise: Efficient Image Denoising without any Data,” CVPR 2023.
[2] Y. Shi, et al., “ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images,” CVPR 2024.

Supervisor:
Please contact Dr Pui Anantrasirichai (n.anantrasirichai@bristol.ac.uk) for any inquiries.

Application

  1. Submit your [Application Form] by 31 January 2025.
  2. Shortlisted candidates will be interviewed by 14 February 2025.
  3. Successful students will be notified by 28 February 2025.
  4. Successful students will be asked to complete an internship acceptance form, confirming the information required by TSS for registration, by 14 March 2025.

Payment

Students will be paid the national living wage for the duration of the internship (£12.21 per hour in 2025), which equates to approximately £427 per 35-hour week (£12.21 × 35 = £427.35) before any National Insurance or income tax deductions. Please note that payment is made a month in arrears, meaning students will be paid for the hours worked at the end of each month.

Westminster Policy Forum

We are pleased to announce that Professor Dave Bull has been invited to speak at the Westminster Policy Forum event, Next Steps for the Western Gateway.

The Westminster Social Policy Forum & Policy Forum for Wales keynote seminar, “Priorities for economic development and investment in the South Wales and Western England region”, will take place on 21 June.

This event will evaluate the most recent advancements and future strategies for the South Wales and Western England area, offering a platform to explore crucial topics during a period of increased policy attention in the run-up to the General Election. Attendees will review the progress made so far and set out priorities for accomplishing the goals of the region’s cross-border partnership: drawing more private investment into Western England and South Wales, achieving sustainable economic growth, and enhancing coordination and cooperation among local authorities within the region.

Key focuses will include economic development and investment priorities in the South Wales and Western England area. The conference will also explore ways to advance innovation in the region and determine the next steps towards positioning the area as a frontrunner in achieving net zero goals and promoting low carbon energy. The agenda will address the necessary policies and the roles of key stakeholders going forward to facilitate increased financial investments in the region.

Dave Bull, Director of MyWorld, will be speaking alongside Joanna Jenkinson, Director of GW4, on the theme of supporting future regional growth. They will discuss supporting career opportunities, employment and capabilities across the region; examining plans for the UK’s first pan-national Cyber Super Cluster, building upon the region’s digital innovation strengths in AI, 5G, computing and robotics; establishing a coordinated approach and long-term funding for skills development; and strengthening collaboration within the third sector.

IBC 2022 Innovation Award win for UK 5G Edge-XR project

The University of Bristol’s Visual Information Laboratory, part of the Bristol Vision Institute, is a member of a consortium led by BT Sport that has won the IBC 2022 Innovation Award in the Content Everywhere category. The consortium beat outstanding shortlisted entries from the PGA TOUR and Sky Sports. The work resulted from the Innovate UK 5G Edge-XR project, which can be viewed on YouTube.

The project is a collaboration with local start-up Condense Reality, whose volumetric video capture rig captures the 3D essence of people in real time and allows them to be digitally transported to a different location. It enables users to watch a holographic view of the live action, replaying the footage via a game engine from any angle, in the comfort of their own home.

The experience relies on the speed and low latency of 5G connectivity, combined with edge-of-network computing, to move some of the processing load associated with volumetric rendering away from user devices, enabling experiences that have never been seen before.

The University of Bristol’s work in the project, associated with advanced video compression and rendering of volumetric content and led by Andrew Calway, David Bull, Aaron Zhang and Angeliki Katsenou, is now being further developed through the MyWorld programme, in collaboration with BT and Condense Reality.

Further information

  • Condense
  • Salsa Sound
  • The Grid Factory
  • University of Bristol
  • Dance East

MyWorld PhD Scholarship: Low Light Fusion and Autofocus for Advanced Wildlife Coverage

About the Project

Natural history filmmaking presents many challenges. For example, filming in low light or using modalities such as infra-red can result in noisy, low-resolution images, or images with poor contrast range and colour. In low light the camera sensor typically operates at high ISO levels and with a wide aperture, giving an extremely shallow depth of field.

This project will enable production workflows to push the boundaries of what is possible in new image acquisition and processing methods for telling stories of the natural world. The project will look at image-based approaches to understanding and explaining the natural world, for example by combining multiple imaging modalities such as visible and infra-red. It will investigate autofocus methods for low-light content that use spectral, spatial, or other image processing measures to control the focus action; machine learning methods that estimate focus from blur will also be explored.
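
As one concrete example of a spatial focus measure that such an autofocus loop could build on, the sketch below scores sharpness by the variance of the Laplacian and sweeps candidate lens positions. The camera interface (capture, move_focus) is a hypothetical stand-in for a real API, and in very low light this classical measure becomes noisy, which is exactly where the learned, blur-based estimators mentioned above could help.

```python
# Classical contrast-based autofocus sketch: variance of the Laplacian rises
# as the image sharpens. capture() and move_focus() are hypothetical stand-ins.
import numpy as np
from scipy.ndimage import laplace

def focus_measure(frame):
    """frame: 2-D grayscale array. Higher value = sharper image."""
    return laplace(frame.astype(np.float64)).var()

def autofocus(capture, move_focus, positions):
    """Sweep candidate lens positions and return the sharpest one."""
    scores = []
    for p in positions:
        move_focus(p)                       # set the (hypothetical) lens
        scores.append(focus_measure(capture()))
    return positions[int(np.argmax(scores))]
```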

Launched in April 2021, MyWorld is a brand-new five-year programme, the flagship for the UK’s creative technology sector, and part of a UK-wide exploration into devolved research and development funding (UKRI video). Led by the University of Bristol, MyWorld will position the South West as an international trailblazer in screen-based media. This £46m programme will bring together 30 partners from Bristol and Bath’s creative technologies sector and world-leading academic institutions to create a unique cross-sector consortium. MyWorld will forge dynamic collaborations to progress technological innovation, deliver creative excellence, establish and operate state-of-the-art facilities, offer skills training and drive inward investment, raising the region’s profile on the global stage.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note that acceptance will also depend on evidence of readiness to pursue a research degree.

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels is available on the University of Bristol website.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: A background understanding in one or more of the following:

  • Image processing
  • Artificial intelligence/Machine learning/Deep learning
  • Computational Imaging / Computational Photography

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded):  https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions (sceem-pgr-admissions@bristol.ac.uk).


Funding Notes

The studentship provides a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23) and also covers tuition fees at the UK student rate. Funding is subject to eligibility status and confirmation of award.
To be treated as a home student, candidates must meet one of these criteria:

  • be a UK/EU national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

Analysis of Coral using Deep Learning

Tilo Burghardt, Ainsley Rutterford, Leonardo Bertini, Erica J. Hendy, Kenneth Johnson, Rebecca Summerfield

Animal biometric systems can also be applied to the remains of living beings – such as coral skeletons. These fascinating structures can be scanned via 3D tomography and made available to computer vision scientists via resulting image stacks.

In this project we investigated the efficacy of deep learning architectures such as U-Net on such image stacks, in order to find and measure important features in coral skeletons automatically. One such feature is the colony’s growth bands, which are extracted/approximated by our system and superimposed on a coral slice in the image below. The project provides a first proof of concept that, given sufficiently clear samples, machines can perform comparably to humans in many respects when identifying the growth and calcification rates exposed by skeletal density-banding. This is a first step towards automating banding measurements and related analysis.

Coral skeletal density-banding extracted via Deep Learning
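
For readers unfamiliar with the architecture, a minimal U-Net of the kind used for this sort of slice segmentation is sketched below. It is illustrative only: the project’s actual network depth, channel widths and training configuration may differ (see the code repository linked at the end).

```python
# Minimal two-level U-Net for per-pixel segmentation of tomography slices.
# Illustrative sketch; the project's actual configuration may differ.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.mid = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)                 # 128 = 64 up + 64 skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)                  # 64 = 32 up + 32 skip
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):                          # x: (N, 1, H, W) CT slice
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))        # growth-band probability map
```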

This work was supported by the NERC GW4+ Doctoral Training Partnership and is part of 4D-REEF, a Marie Sklodowska-Curie Innovative Training Network funded by the European Union’s Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No. 813360.

Code Repository at https://github.com/ainsleyrutterford/deep-learning-coral-analysis