The University of Bristol is offering the opportunity for outstanding candidates to apply for a strategically important role linked to the newly established £46m MyWorld Strength in Places programme. The Professor in Creative Media Technologies will be expected to contribute to the leadership and development of the MyWorld programme, to carry out internationally leading research, and to take an active role in providing high-quality and innovative teaching in related areas.
The research focus of this post is linked to the strategic objectives of MyWorld, including: the development and delivery of new immersive experiences and services, new acquisition formats, mixed and extended realities, advanced production technologies (e.g. virtual production), AI-based workflows, intelligent post-production/VFX techniques, data-driven visual analytics and interactivity, metaverse-related technologies, user experience evaluation, enhanced remote access to live experiences, and energy-efficient media delivery solutions for future (e.g. volumetric) formats. The appointee will be affiliated to the Visual Information Laboratory (VI-Lab) in Bristol Vision Institute (BVI).
Suitable candidates will be recognised internationally for their contributions in a field of science or technology relevant to the creative industries, with an outstanding research track record including, but not limited to, the areas listed above. They will have excellent project management and research leadership skills, a strong track record of attracting significant research income, and experience of delivering major research projects. They will be innovative and collaborative, with experience of working with partners in the creative sector and across disciplines in academia and industry. We anticipate that candidates will possess a good honours degree in Electrical/Electronic Engineering, Computer Science or a related discipline along with a relevant PhD, or extensive relevant industrial experience.
The awards will be sponsored by Facebook and Netflix.
Low-light scenes often come with acquisition noise, which not only disturbs viewers but also makes video compression challenging. Such videos are common in cinema, whether as an artistic choice or a consequence of the scene itself. Other examples include wildlife footage (e.g. Mobula rays at night in Blue Planet II), concerts and shows, and surveillance camera footage. Inspired by the above, we are organising a challenge on encoding low-light captured videos. The challenge aims to identify technology that improves the perceptual quality of compressed low-light videos beyond the current state-of-the-art performance of the most recent coding standards, such as HEVC, VP9, AV1 and VVC. It also offers a good opportunity for experts in both video coding and image enhancement to address this problem. A series of subjective tests will form part of the evaluation, and their results can also inform a study of the trade-off between artistic intent and viewer preference, for example in mystery films and dark investigative scenes.
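Acquisition noise in low-light footage is commonly modelled as signal-dependent photon shot noise (Poisson) plus sensor read noise (Gaussian). As a minimal NumPy sketch of that standard model — the parameter values below are purely illustrative and are not taken from the challenge material:

```python
import numpy as np

def add_low_light_noise(frame, photons_per_unit=20.0, read_sigma=2.0, seed=0):
    """Simulate low-light acquisition noise on an 8-bit frame.

    Photon shot noise is Poisson-distributed in the photon domain;
    read noise is additive Gaussian in the digital domain.
    Parameter values are illustrative only.
    """
    rng = np.random.default_rng(seed)
    # Map pixel intensities to an (illustrative) expected photon count.
    photons = frame.astype(np.float64) / 255.0 * photons_per_unit
    # Shot noise: sample photon arrivals, then map back to pixel units.
    shot = rng.poisson(photons) / photons_per_unit * 255.0
    # Read noise: additive zero-mean Gaussian.
    noisy = shot + rng.normal(0.0, read_sigma, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

dark_patch = np.full((8, 8), 32, dtype=np.uint8)  # a dark flat region
noisy_patch = add_low_light_noise(dark_patch)
```

The lower the photon count, the stronger the relative shot noise — which is why dark scenes are disproportionately noisy and hard to compress.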
Participants will be requested to deliver, by the given timeline, bitstreams at pre-defined maximum target rates for a given set of sequences, a short report describing their contribution, and a software executable that runs the proposed method and reconstructs the decoded videos. Participants are also encouraged to submit a paper for publication in the proceedings, and the best performers should be prepared to present a summary of the underlying technology during the ICME session. The organisers will cross-validate the submissions and perform subjective tests to rank participant contributions.
Expression of interest to participate in the Challenge: 10/12/2019 (extended from 29/11/2019)
Availability of test sequences for participants: upon registration (originally 01/12/2019)
Availability of anchor bitstreams and software package: 15/12/2019
Submission of encoded material: 27/03/2020 (extended from 13/03/2020)
Submission of Grand Challenge Papers: 27/03/2020 (extended from 13/03/2020)
Host: The challenge will be organised by Dr. Nantheera Anantrasirichai, Dr. Paul Hill, Dr. Angeliki Katsenou, Ms Alexandra Malyugina, and Dr. Fan Zhang, Visual Information Lab, University of Bristol, UK.