MyWorld PhD Scholarship: Low Light Fusion and Autofocus for Advanced Wildlife Coverage

About the Project

Natural History filmmaking presents many challenges. For example, filming in low light or with modalities such as infra-red can produce noisy, low-resolution images, or footage with poor contrast range and colour. In low light the camera sensor also typically operates at high ISO levels and with a wide aperture, resulting in an extremely shallow depth of field.

This project will enable production workflows that push the boundaries of what is possible in new image acquisition and processing methods for telling stories of the natural world. It will look at image-based approaches to understanding and explaining the natural world, for example by combining multiple imaging modalities such as visible and infra-red. It will investigate autofocus methods for low-light content that use spectral, spatial, or other image processing cues to control the focus action. Machine learning methods that estimate focus from blur after training will also be explored.
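To make the idea of image-based focus control concrete, the sketch below computes a classical variance-of-Laplacian sharpness score. This is only an illustration of one well-known spatial focus measure, not the project's method; the function name and toy images are hypothetical. Defocus blur suppresses high spatial frequencies, so a sharper frame produces a larger variance in the Laplacian response, and an autofocus loop can drive the lens toward the maximum of this score.

```python
def laplacian_focus_measure(image):
    """Variance-of-Laplacian sharpness score: higher means sharper.

    `image` is a 2-D list of grayscale intensities. The 3x3 Laplacian
    responds strongly to edges, which blur suppresses, so the variance
    of its response is a common proxy for focus.
    """
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] -
                   4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp step edge scores higher than a blurred ramp of the same size.
sharp = [[0, 0, 100, 100]] * 4
blurred = [[0, 33, 66, 100]] * 4
assert laplacian_focus_measure(sharp) > laplacian_focus_measure(blurred)
```

In practice such a measure would be computed over a focus window for each lens position; the machine-learning extension mentioned above would instead regress the focus correction directly from the observed blur.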

Launched in April 2021, MyWorld is a brand-new five-year programme, the flagship for the UK’s creative technology sector, and is part of a UK-wide exploration into devolved research and development funding (UKRI video). Led by the University of Bristol, MyWorld will position the South West as an international trailblazer in screen-based media. This £46m programme will bring together 30 partners from Bristol and Bath’s creative technologies sector and world-leading academic institutions to create a unique cross-sector consortium. MyWorld will forge dynamic collaborations to progress technological innovation, deliver creative excellence, establish and operate state-of-the-art facilities, offer skills training and drive inward investment, raising the region’s profile on the global stage.

URL for further information: 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: A background understanding in one or more of the following:

  • Image processing
  • Artificial intelligence/Machine learning/Deep learning
  • Computational Imaging / Computational Photography

Application Process

  • All candidates should submit a full CV and covering letter to (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded):
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions.

Funding Notes

The award includes a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23) and also covers tuition fees at the UK student rate. Funding is subject to eligibility status and confirmation of award.
To be treated as a home student, candidates must meet one of these criteria:

  • be a UK/EU national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

Chair in Creative Media Technologies

For further particulars, please visit here.

The University of Bristol is offering the opportunity for outstanding candidates to apply for a strategically important role linked to the newly established £46m MyWorld Strength in Places programme. The Professor in Creative Media Technologies will be expected to contribute to leadership and development of the MyWorld programme, to carry out internationally-leading research and to take an active role in providing high quality and innovative teaching in related areas. 

The research focus of this post is linked to the strategic objectives of MyWorld, including: the development and delivery of new immersive experiences and services, new acquisition formats, mixed and extended realities, advanced production technologies (e.g. virtual production), AI-based workflows, intelligent post-production/VFX techniques, data-driven visual analytics and interactivity, metaverse-related technologies, user experience evaluation, enhanced remote access to live experiences, and energy-efficient media delivery solutions for future (e.g. volumetric) formats. The appointee will be affiliated with the Visual Information Laboratory (VI-Lab) in the Bristol Vision Institute (BVI).

Suitable candidates will be recognised internationally for their contributions in a field of science or technology relevant to the creative industries, with an outstanding research track record including but not limited to the areas listed above. They will have excellent project management and research leadership skills, a strong track record of attracting significant research income and experience of delivering major research projects. They will be innovative and collaborative, with experience of working with partners in the creative sector and across disciplines in academia and industry. We anticipate that candidates will possess a good honours degree in Electrical/Electronic Engineering, Computer Science or a related discipline along with a relevant PhD, or extensive relevant industrial experience.

Deadline 20 April 2022

ICME2020 Grand Challenge: Encoding in the Dark

Image source: El Fuente test sequence, Netflix


The awards will be sponsored by Facebook and Netflix.


Low-light scenes often come with acquisition noise, which not only disturbs viewers but also makes video compression challenging. Such videos are often encountered in cinema, whether as an artistic choice or due to the nature of a scene. Other examples include shots of wildlife (e.g. Mobula rays at night in Blue Planet II), concerts and shows, surveillance camera footage and more. Inspired by the above, we are organising a challenge on encoding low-light captured videos. The challenge aims to identify technology that improves the perceptual quality of compressed low-light videos beyond the state-of-the-art performance of the most recent coding standards and formats, such as HEVC, AV1, VP9 and VVC. It will also offer a good opportunity for experts in both video coding and image enhancement to address this problem. A series of subjective tests will form part of the evaluation; their results can also be used to study the trade-off between artistic intent and viewer preference, for example in mystery films and investigation scenes.

Participants will be requested to deliver, by the given timeline, bitstreams at pre-defined maximum target rates for a given set of sequences, a short report describing their contribution, and a software executable for running the proposed methodology so that the decoded videos can be reconstructed. Participants are also encouraged to submit a paper for publication in the proceedings, and the best performers should be prepared to present a summary of the underlying technology during the ICME session. The organisers will cross-validate the submissions and perform subjective tests to rank participant contributions.
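For context, objective quality metrics such as PSNR are routinely computed alongside subjective tests when comparing compressed videos against their references. The sketch below is a minimal, illustrative PSNR computation on flat pixel lists; it is not the challenge's official evaluation pipeline, and the function name and sample values are hypothetical.

```python
import math

def psnr(reference, decoded, peak=255):
    """Peak signal-to-noise ratio in dB between two same-sized frames,
    each given as a flat list of pixel intensities in [0, peak]."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, decoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(peak ** 2 / mse)

# Example: a tiny "frame" reconstructed with a uniform error of 5 levels.
ref = [100, 120, 140, 160]
dec = [105, 125, 145, 165]
print(round(psnr(ref, dec), 2))  # → 34.15
```

In a real evaluation the metric would be averaged over all frames of each decoded sequence at each target rate, and perceptual metrics (e.g. VMAF) are often reported as well.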

Please find here:


Important Dates:

    • Expression of interest to participate in the Challenge: 29/11/2019 (extended to 10/12/2019)
    • Availability of test sequences for participants: 01/12/2019 (upon registration)
    • Availability of anchor bitstreams and software package: 15/12/2019
    • Submission of encoded material: 13/03/2020 (extended to 27/03/2020)
    • Submission of Grand Challenge Papers: 13/03/2020 (extended to 27/03/2020)

Hosts: The challenge will be organised by Dr. Nantheera Anantrasirichai, Dr. Paul Hill, Dr. Angeliki Katsenou, Ms Alexandra Malyugina, and Dr. Fan Zhang, Visual Information Lab, University of Bristol, UK.