MyWorld Scholarship: Deep Video Coding

About the Project

Video technology is now pervasive, with mobile video, UHDTV, video conferencing, and surveillance all underpinned by efficient signal representations. As one of the most important research topics in video processing, compression is crucial for encoding high-quality video for transmission over band-limited channels.

The last three decades have seen impressive performance improvements in standardised video coding algorithms. The latest standard, VVC, and the new royalty-free codec, AOMedia's AV1, are expected to achieve 30-50% gains in coding performance over HEVC. However, this figure is far from satisfactory considering the large amount of video data consumed every day.

Inspired by the recent breakthrough in artificial intelligence, in particular deep learning techniques developed for video processing applications, this PhD project will investigate novel deep learning-based video coding tools, network architectures and perceptual loss functions for modern codecs.

This project is funded by the MyWorld UKRI Strength in Places Programme.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: A background understanding in one or more of the following:

Video compression

Artificial intelligence / Machine Learning / Deep Learning

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded): https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions: sceem-pgr-admissions@bristol.ac.uk.

Funding Notes

The scholarship provides a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23) and covers tuition fees at the UK student rate. Funding is subject to eligibility status and confirmation of award.


To be treated as a home student, candidates must meet one of these criteria:

  • be a UK national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

MyWorld PhD Scholarship: Volumetric Video Compression

About the Project

Among all video content, one of the areas that has grown most significantly over recent years is content based on augmented and virtual reality (AR and VR) technologies. These have the potential for major growth, and developments in displays, interactive equipment, mobile networks, edge computing, and compression are likely to facilitate this in the coming years.

A key new format underpinning these technologies is referred to as volumetric video, with commonly used representations including point clouds, multi-view + depth, and equirectangular formats. Various volumetric video codecs have been developed to compress these formats for transmission or storage. To present volumetric video content, the compressed data is decoded and post-processed using a synthesiser/renderer, which enables 3DoF/6DoF viewing on AR or VR devices.
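As an illustration of one of the representations mentioned, an equirectangular format stores the full sphere of viewing directions in a flat 2D image by mapping longitude and latitude linearly to pixel coordinates. A minimal sketch (the function name and image dimensions are illustrative, not part of any specific codec):

```python
import math

def sphere_to_equirect(lon, lat, width, height):
    """Map a viewing direction (longitude in [-pi, pi], latitude in
    [-pi/2, pi/2]) to pixel coordinates in an equirectangular image.
    Longitude maps linearly to x and latitude linearly to y."""
    u = (lon + math.pi) / (2 * math.pi) * width
    v = (math.pi / 2 - lat) / math.pi * height
    return u, v

# The forward-facing direction (lon=0, lat=0) lands at the image centre.
u, v = sphere_to_equirect(0.0, 0.0, 4096, 2048)
```

This linear mapping is what makes the format convenient for conventional 2D video codecs, at the cost of heavy oversampling near the poles.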

In this context, this 3.5-year PhD project will focus on two essential stages within this workflow: volumetric video compression and post-processing. Inspired by recent advances in deep video compression and rendering, we will research novel AI-based production workflows for volumetric video content to significantly improve coding efficiency and the perceptual quality of the final rendered content.

This project is funded by the MyWorld UKRI Strength in Places Programme at the University of Bristol. It fits well within one of the core research areas outlined in the MyWorld programme: video production and communications for immersive content. The student working on this project will gain experience of immersive video production workflows, from capture and contribution to live editorial production and delivery at scale to a growing variety of XR-capable devices.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: A background understanding in one or more of the following:

Video compression

3D Computer vision

Artificial intelligence / Machine Learning / Deep Learning

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded): https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions: sceem-pgr-admissions@bristol.ac.uk.

Funding Notes

The scholarship provides a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23), covers tuition fees at the UK student rate, and includes an industrial top-up. Funding is subject to eligibility status and confirmation of award.
To be treated as a home student, candidates must meet one of these criteria:

  • be a UK national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

MyWorld Postdoctoral Research Associate Posts – UKRI Strength in Places Programme

The Role

The newly established MyWorld research programme, led by the University of Bristol, is a flagship five-year, £46m R&D programme run in collaboration with numerous industrial and academic organisations. The MyWorld Creative Technologies Hub is now expanding in line with its mission to grow the West of England's Creative Industries Cluster with major investments in new facilities and staff at all levels.

We are now offering unique opportunities for four Postdoctoral Research Associates in:

  1. AI methods for Video Post-Production
  2. Robot Vision for Creative Technologies
  3. Perceptually Optimised Video Compression (sponsored by our collaborator, Netflix, in Los Gatos, USA).
  4. Visual Communications

Contract and Salary

All four posts are based in the Faculty of Engineering, University of Bristol; the salary range is Grade I (£34,304 – £38,587 per annum) or Grade J (£38,587 – £43,434 per annum).

Application Information

We anticipate that candidates will possess a good honours degree along with a PhD in a related discipline, or extensive relevant industrial/commercial experience. We expect a high standard of spoken and written English and the ability to work effectively both independently and as part of a team.

Please follow the link provided for each post to access the detailed job description and the application system.

MyWorld PhD Scholarships 2022 – UKRI Strength in Places Programme

Introduction

MyWorld is a £46m R&D programme, awarded to the University of Bristol under the leadership of Professor David Bull, with £30m from the UKRI Strength in Places Fund (SIPF) and a further £16m committed from an alliance of more than 30 industry and academic organisations. SIPF is a UK Research and Innovation (UKRI) flagship competitive funding scheme that takes a place-based approach to research and innovation funding with the aim of creating significant local economic growth. It is a major intervention by the UK Government to explore the potential of devolved R&D funding.

There are now a number of opportunities for outstanding candidates to join the MyWorld team as PhD students, with an expected start date of September 2022. Opportunities for innovation and investigation exist across the MyWorld portfolio, including content acquisition and post-production, content delivery and interactivity, and audience understanding.

Role Description

All posts will cover a student stipend at a basic rate of £15,609 per annum (2022 rate), with the possibility of enhancement of up to £3,000 in some cases. Fees for home (UK-based) students are covered in all cases. Several awards cover fees for EU students, and some cover overseas students.

Appointees will be expected to integrate within the MyWorld team, to conduct internationally leading research, and to contribute to the wider objectives and activities of the programme. Many of the awards will involve collaboration with our industry partners and offer the potential of career development through internships as part of the PhD.

Research Focus

The Visual Information Laboratory in Bristol Vision Institute (BVI) and the MyWorld Programme combine to make the University of Bristol a powerhouse for the development of visual media communications. The work of these groups in this area has been supported by world-leading organisations such as Netflix, BBC, BT, NTT and YouTube. The research focus of these PhD studentships will be linked to the strategic objectives of MyWorld, promoting new technology research that underpins the delivery of future experiences and services. Applications are invited in the following areas:

  • Content Acquisition and Post-Production: AI methods in post-production – video denoising, colorisation and enhancement; low-light fusion and autofocus; virtual production technologies; intelligent and automated cinematography (including drone cinematography); camera tracking and SLAM methods in virtual production; building interactive worlds – enabling the metaverse; creating re-usable assets for virtual production.
  • Content Delivery and Interactivity: perceptually optimised video compression; dynamic optimisation of streamed video; energy-efficient video coding; new architectures and tools for emerging AoM standards; machine learning methods for video delivery; perceptual video quality metrics; transcoding methods for user-generated content; volumetric video coding; coding beyond compression; media network optimisation.
  • Audience Understanding: methods for assessing quality of experience and immersion; biometrics, and fusion of these, for audience understanding; motion magnification for user engagement; creation and exploitation of visual field maps.
  • Experimental Productions: enabling the metaverse; building environments for virtual rehearsal; building and evaluating immersive natural history experiences.

Application Procedure and Selection Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: the contact of the research topic that you are applying for) by the deadline.
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded):
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • Candidates will be invited to give a presentation prior to their formal interview, as part of the final selection process.
  • All posts will remain open until filled.

Contact

For an informal discussion about the scholarships, please contact:

Job Description Document

Detailed role description and research topics can be found in the [JD document] and at

Learning-optimal Deep Visual Compression

David Bull, Fan Zhang and Paul Hill

INTRODUCTION

Deep learning systems offer state-of-the-art performance in image analysis, outperforming conventional methods. Such systems offer huge potential across military and commercial domains, including human/target detection and recognition, and spatial localisation/mapping. However, heavy computational requirements limit their exploitation in surveillance applications, particularly airborne ones, where low-power embedded processing and limited bandwidth are common constraints.

Our aim is to explore deep learning performance whilst reducing processing and communication overheads, by developing learning-optimal compression schemes trained in conjunction with detection networks.
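The kind of objective such joint training minimises can be sketched as a rate term plus a weighted downstream-task loss. Below is a minimal toy sketch: the function names, the entropy-based rate proxy, and the weighting `lam` are illustrative assumptions, not the project's actual formulation:

```python
import math
from collections import Counter

def entropy_rate(symbols):
    """Empirical entropy in bits/symbol of quantised latent symbols --
    a simple stand-in for the coded bitrate."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def joint_objective(latent, task_loss, lam=0.1):
    """Rate + lambda * downstream-task (e.g. detection) loss: the
    compressor is optimised for the detector's performance rather than
    for pixel fidelity alone."""
    quantised = [round(x) for x in latent]  # hard quantisation
    return entropy_rate(quantised) + lam * task_loss

# A constant latent costs zero bits, so only the task term remains.
obj = joint_objective([0.0] * 10, task_loss=2.0, lam=0.5)  # -> 1.0
```

In a real learned codec the rate term is a differentiable entropy-model estimate and the task loss comes from a detection network trained end-to-end with the compressor; the toy above only shows how the two terms trade off.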

ACKNOWLEDGEMENT

This work has been funded by the DASA Advanced Vision 2020 Programme.

A Simulation Environment for Drone Cinematography

Fan Zhang, David Hall, Tao Xu, Stephen Boyle and David Bull

INTRODUCTION

Simulations of drone camera platforms based on actual environments have been identified as being useful for shot planning, training and rehearsal for both single and multiple drone operations. This is particularly relevant for live events, where there is only one opportunity to get it right on the day.

In this context, we present a workflow for the simulation of drone operations exploiting realistic background environments constructed within Unreal Engine 4 (UE4). Methods for environmental image capture, 3D reconstruction (photogrammetry) and the creation of foreground assets are presented along with a flexible and user-friendly simulation interface. Given the geographical location of the selected area and the camera parameters employed, the scanning strategy and its associated flight parameters are first determined for image capture. Source imagery can be extracted from virtual globe software or obtained through aerial photography of the scene (e.g. using drones). The latter case is clearly more time consuming but can provide enhanced detail, particularly where coverage of virtual globe software is limited.
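For the flight-parameter step above, a standard quantity linking camera parameters and altitude to capture resolution is the ground sample distance (GSD). A minimal sketch, using the textbook GSD formula with illustrative (not project-specific) camera values:

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """Ground sample distance in cm/pixel: the ground footprint of a
    single pixel for a nadir-pointing camera at the given altitude."""
    return (sensor_width_mm * altitude_m * 100.0) / (
        focal_length_mm * image_width_px)

# Example: a 13.2 mm wide sensor, 8.8 mm lens, 5472 px image width,
# flown at 100 m gives roughly 2.7 cm of ground per pixel.
gsd = ground_sample_distance(13.2, 8.8, 100.0, 5472)
```

Given a target GSD for the photogrammetry software, the same relation can be inverted to choose the flight altitude for the scanning strategy.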

The captured images are then used to generate 3D background environment models employing photogrammetry software. The reconstructed 3D models are then imported into the simulation interface as background environment assets together with appropriate foreground object models as a basis for shot planning and rehearsal. The tool supports both free-flight and parameterisable standard shot types along with programmable scenarios associated with foreground assets and event dynamics. It also supports the exporting of flight plans. Camera shots can also be designed to provide suitable coverage of any landmarks which need to appear in-shot. This simulation tool will contribute to enhanced productivity, improved safety (awareness and mitigations for crowds and buildings), improved confidence of operators and directors and ultimately enhanced quality of viewer experience.

DEMO VIDEOS

Boat.mp4

Cyclist.mp4

REFERENCES

[1] F. Zhang, D. Hall, T. Xu, S. Boyle and D. Bull, “A Simulation environment for drone cinematography”, IBC 2020.

[2] S. Boyle, M. Newton, F. Zhang and D. Bull, “Environment Capture and Simulation for UAV Cinematography Planning and Training”, EUSIPCO, 2019.