IBC 2022 Innovation Award win for UK 5G Edge-XR project

The University of Bristol’s Visual Information Laboratory, part of the Bristol Vision Institute, is a member of a consortium led by BT Sport that has won the IBC 2022 Innovation Award in the Content Everywhere category. The consortium beat outstanding shortlisted entries from the PGA TOUR and Sky Sports. The work resulted from the Innovate UK 5G Edge-XR project, which can be viewed on YouTube.

The project builds on a collaboration with local start-up Condense Reality, whose volumetric video capture rig captures the 3D essence of people in real time and enables their digital transportation to a different location. It allows users to watch a holographic view of the live action from the comfort of their home, replaying the footage from any angle via a game engine.

The experience relies on the speed and low latency of 5G connectivity, combined with edge-of-network computing, to move some of the processor load associated with volumetric rendering away from users’ devices, enabling experiences that have never been seen before.

The University of Bristol’s contribution to the project, on advanced video compression and rendering of volumetric content, was led by Andrew Calway, David Bull, Aaron Zhang and Angeliki Katsenou, and is now being developed further through the MyWorld programme in collaboration with BT and Condense Reality.

Further information

  • Condense
  • Salsa Sound
  • THE GRID FACTORY
  • University of Bristol
  • DANCE EAST

MyWorld PhD Scholarship: Low Light Fusion and Autofocus for Advanced Wildlife Coverage

About the Project

Natural history filmmaking presents many challenges. For example, filming in low light, or using modalities such as infra-red, can result in noisy, low-resolution images, or in poor contrast range and colours. In addition, low light forces the camera sensor to operate at high ISO levels with a wide aperture, producing an extremely shallow depth of field.

This project will enable production workflows to push the boundaries of what is possible in terms of new image acquisition and processing methods for telling stories of the natural world. The project will look at image-based approaches to understanding and explaining the natural world, for example by combining multiple imaging modalities such as visible and infra-red. It will investigate methods of autofocus for low-light content using spectral, spatial, or other image processing techniques to control the focus action. Machine learning methods that, once trained, estimate focus from blur will also be explored; a sketch of the classical starting point follows below.
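As a concrete illustration of the classical end of this design space, the sketch below implements a contrast-based focus measure (variance of the Laplacian) driving a simple sweep over candidate focus positions. It is a minimal sketch only: the `camera` object and its `set_focus`/`capture` methods are hypothetical placeholders, and a learned focus-from-blur estimator of the kind the project would explore could replace the sweep entirely.

```python
# Minimal autofocus sketch: variance-of-Laplacian sharpness driving a sweep
# over focus positions. The `camera` object and its methods are hypothetical.
import cv2
import numpy as np

def focus_measure(frame: np.ndarray) -> float:
    """Variance of the Laplacian, a standard contrast-based sharpness proxy.
    Low-light frames are noisy, so we denoise lightly first; otherwise sensor
    noise inflates the Laplacian response and misleads the search."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    grey = cv2.GaussianBlur(grey, (3, 3), 0)
    return float(cv2.Laplacian(grey, cv2.CV_64F).var())

def sweep_autofocus(camera, positions) -> int:
    """Step through candidate focus positions and return the sharpest one."""
    scores = []
    for p in positions:
        camera.set_focus(p)        # hypothetical focus-motor call
        scores.append(focus_measure(camera.capture()))
    return int(positions[int(np.argmax(scores))])
```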

Launched in April 2021, MyWorld is a brand-new five-year programme, the flagship for the UK’s creative technology sector, and part of a UK-wide exploration into devolved research and development funding (UKRI video). Led by the University of Bristol, MyWorld will position the South West as an international trailblazer in screen-based media. This £46m programme will bring together 30 partners from Bristol and Bath’s creative technologies sector and world-leading academic institutions to create a unique cross-sector consortium. MyWorld will forge dynamic collaborations to progress technological innovation, deliver creative excellence, establish and operate state-of-the-art facilities, offer skills training and drive inward investment, raising the region’s profile on the global stage.

URL for further information: http://www.myworld-creates.com/ 

Candidate Requirements

Applicants must hold/achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered on an exceptional basis, provided they hold a first-class undergraduate degree. Please note, acceptance will also depend on evidence of readiness to pursue a research degree. 

If English is not your first language, you need to meet this profile level:

Profile E

Further information about English language requirements and profile levels.

Basic skills and knowledge required

Essential: Excellent analytical skills and experimental acumen.

Desirable: A background understanding in one or more of the following:

  • Image processing
  • Artificial intelligence/Machine learning/Deep learning
  • Computational Imaging / Computational Photography

Application Process

  • All candidates should submit a full CV and covering letter to myworldrecruitment@myworld-creates.com (FAO: Professor David R. Bull).
  • Formal applications for PhD are not essential at this stage, but can be submitted via the University of Bristol homepage (clearly marked as MyWorld funded):  https://www.bristol.ac.uk/study/postgraduate/apply/
  • A Selection Panel will be established to review all applications and to conduct interviews of short-listed candidates.
  • This post remains open until filled.

For questions about eligibility and the application process, please contact SCEEM Postgraduate Research Admissions at sceem-pgr-admissions@bristol.ac.uk.


Funding Notes

Funding covers a stipend at the UKRI minimum level (£16,062 p.a. in 2022/23) and tuition fees at the UK student rate. Funding is subject to eligibility status and confirmation of award.
To be treated as a home student, candidates must meet one of these criteria:

  • be a UK/EU national (meeting residency requirements)
  • have settled status
  • have pre-settled status (meeting residency requirements)
  • have indefinite leave to remain or enter.

Analysis of Coral using Deep Learning

Tilo Burghardt, Ainsley Rutterford, Leonardo Bertini, Erica J. Hendy, Kenneth Johnson, Rebecca Summerfield

Animal biometric systems can also be applied to the remains of living beings – such as coral skeletons. These fascinating structures can be scanned via 3D tomography and made available to computer vision scientists via resulting image stacks.

In this project we investigated the efficacy of deep learning architectures such as U-Net on such image stacks, in order to find and measure important features in coral skeletons automatically. One such feature is the colony’s growth bands, which our system extracts and approximates; these are superimposed on a coral slice in the image below. The project provides a first proof of concept that, given sufficiently clear samples, machines can perform comparably to humans when identifying the growth and calcification rates exposed by skeletal density-banding. This is a first step towards automating banding measurements and related analysis.
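For readers unfamiliar with the architecture, the sketch below shows a deliberately tiny U-Net-style segmenter applied slice by slice to a tomography stack. It is an illustrative toy under assumed shapes and layer sizes, not the code from the repository linked below; see that repository for the actual implementation.

```python
# Toy one-level U-Net applied slice-wise to a CT stack (PyTorch). Layer sizes,
# the random `stack` tensor, and the single skip connection are assumptions
# for illustration; the real system is in the repository linked below.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Encoder, bottleneck and decoder joined by one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)           # per-pixel growth-band logit

    def forward(self, x):
        e = self.enc(x)                            # full-resolution features
        m = self.mid(self.pool(e))                 # half-resolution bottleneck
        u = self.up(m)                             # back to full resolution
        return self.head(self.dec(torch.cat([e, u], dim=1)))

# A tomography stack is just a batch of greyscale slices: (depth, 1, H, W).
stack = torch.rand(8, 1, 256, 256)                 # placeholder CT slices
with torch.no_grad():
    band_probs = torch.sigmoid(TinyUNet()(stack))  # per-slice band masks
```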

Coral skeletal density-banding extracted via Deep Learning

This work was supported by the NERC GW4+ Doctoral Training Partnership and is part of 4D-REEF, a Marie Sklodowska-Curie Innovative Training Network funded by the European Union’s Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No. 813360.

Code Repository at https://github.com/ainsleyrutterford/deep-learning-coral-analysis

Aerial Animal Biometrics

Tilo Burghardt, Will Andrew, Colin Greatwood

Traditionally, animal biometric systems represent and detect the phenotypic appearance of species, individuals, behaviours and morphological traits via passive camera settings, be these camera traps or other fixed camera installations.

In this line of work we implemented, for the first time, a full biometric pipeline onboard an autonomous UAV, giving the system the autonomous agency to adapt its acquisition strategy to demanding settings such as individual identification in freely moving herds of cattle.

In particular, we built a computationally enhanced M100 UAV platform with an on-board deep learning inference system for integrated computer vision and navigation, able to autonomously find and visually identify, by coat pattern, individual Holstein-Friesian cattle in freely moving herds. We evaluated the performance of its components offline, and online via real-world field tests of autonomous low-altitude flight in a farm environment. The proof-of-concept system is a successful step towards autonomous biometric identification of individual animals from the air, in open pasture environments and inside farms, for tagless AI support in farming and ecology.
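Structurally, the identification stage reduces to detect-then-match: locate cattle torsos in each aerial frame, embed the coat-pattern crop, and compare against a gallery of known individuals. The sketch below shows that shape under stated assumptions; the `detector` and `embedder` callables and the acceptance threshold are placeholders, not the published system’s components.

```python
# Detect-then-identify sketch of the onboard pipeline's structure. The
# `detector` and `embedder` callables and the acceptance threshold are
# placeholders; the published system's actual networks differ.
import numpy as np

def identify_in_frames(frames, detector, embedder, gallery, thresh=0.7):
    """gallery maps cow_id -> unit-norm coat-pattern embedding."""
    names = list(gallery)
    g = np.stack([gallery[n] for n in names])      # (num_known, dim)
    matches = []
    for frame in frames:
        for x0, y0, x1, y1 in detector(frame):     # torso bounding boxes
            e = embedder(frame[y0:y1, x0:x1])      # unit-norm embedding
            sims = g @ e                           # cosine similarities
            best = int(np.argmax(sims))
            matches.append(names[best] if sims[best] >= thresh else "unknown")
    return matches
```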

Caption: Overview of the Autonomous Cattle ID Drone System.
Caption: Video Summary of Cattle ID Drone System

This work was conducted in collaboration with Farscope CDT, VILab and BVS.

Related Publications

W Andrew, C Greatwood, T Burghardt. Aerial Animal Biometrics: Individual Friesian Cattle Recovery and Visual Identification via an Autonomous UAV with Onboard Deep Inference. 32nd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 237-243, November 2019. (DOI:10.1109/IROS40897.2019.8968555), (Arxiv PDF)


W Andrew, C Greatwood, T Burghardt. Deep Learning for Exploration and Recovery of Uncharted and Dynamic Targets from UAV-like Vision. 31st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1124-1131, October 2018. (DOI:10.1109/IROS.2018.8593751), (IEEE Version), (Dataset GTRF2018), (Video Summary)


W Andrew, C Greatwood, T Burghardt. Visual Localisation and Individual Identification of Holstein Friesian Cattle via Deep Learning. Visual Wildlife Monitoring (VWM) Workshop at IEEE International Conference of Computer Vision (ICCVW), pp. 2850-2859, October 2017. (DOI:10.1109/ICCVW.2017.336), (Dataset FriesianCattle2017), (Dataset AerialCattle2017), (CVF Version)


HS Kuehl, T Burghardt. Animal Biometrics: Quantifying and Detecting Phenotypic Appearance. Trends in Ecology and Evolution, Vol 28, No 7, pp. 432-441, July 2013.
(DOI:10.1016/j.tree.2013.02.013)

Great Ape Detection and Behaviour Recognition

Tilo Burghardt, X Yang, F Sakib, M Mirmehdi

The problem of visually identifying the presence and locations of animal species filmed in natural habitats is of central importance for automating the interpretation of large-scale camera trap imagery. This is particularly challenging in scenarios where lighting is difficult, backgrounds are non-static, and major occlusions, image noise and animal camouflage effects occur; filming great apes via camera traps in jungle environments constitutes one such setting. Finding animals under these conditions and classifying their behaviours are important tasks in order to exploit the filmed material for conservation or biological modelling.

Together with researchers from various institutions, including the Max Planck Institute for Evolutionary Anthropology, we developed deep learning systems both for detecting great apes in challenging imagery and for identifying the animal behaviours exhibited in these camera trap clips once apes have been recognised.
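The published detectors blend spatial and temporal features with attention; the sketch below shows only the pipeline shape shared by the two stages: detect apes per frame, accumulate fixed-size crops into a short tubelet, and classify the behaviour of that tubelet. Every callable and the behaviour label set here are assumptions for illustration.

```python
# Pipeline-shape sketch only: per-frame detection feeding a clip-level
# behaviour classifier. `detector`, `behaviour_net` and the label set are
# placeholders; the real systems use attention-based spatiotemporal blending.
import cv2
import numpy as np

BEHAVIOURS = ["walking", "standing", "sitting", "climbing"]  # illustrative

def classify_behaviour(clip, detector, behaviour_net, tubelet_len=16):
    crops = []
    for frame in clip:
        boxes = detector(frame)       # may be empty: occlusion, camouflage
        if boxes:
            x0, y0, x1, y1 = boxes[0]  # follow the first detection
            crops.append(cv2.resize(frame[y0:y1, x0:x1], (224, 224)))
        if len(crops) == tubelet_len:
            break
    if len(crops) < tubelet_len:
        return None                    # too few detections to decide
    logits = behaviour_net(np.stack(crops))   # (T, 224, 224, 3) tubelet
    return BEHAVIOURS[int(np.argmax(logits))]
```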


Captions: (top) System Overview for CamTrap Detector. (middle and bottom) Behaviour Recognition Examples, note that the PanAfrican Programme owns the video copyrights.

Acknowledgements: Copyright of all images and videos resides with the Pan African Programme at the MPI, and we thank them for allowing us to use their data to publish our technical engineering work. We would like to thank the entire team of the Pan African Programme: ‘The Cultured Chimpanzee’ and its collaborators for allowing the use of their data. Please contact the copyright holder, the Pan African Programme, at http://panafrican.eva.mpg.de to obtain the videos used. In particular, we thank: H Kuehl, C Boesch, M Arandjelovic, and P Dieguez. We would also like to thank: K Zuberbuehler, K Corogenes, E Normand, V Vergnes, A Meier, J Lapuente, D Dowd, S Jones, V Leinert, E Wessling, H Eshuis, K Langergraber, S Angedakin, S Marrocoli, K Dierks, T C Hicks, J Hart, K Lee, and M Murai.
Thanks also to the team at https://www.chimpandsee.org. The work that allowed for the collection of the dataset was funded by the Max Planck Society, the Max Planck Society Innovation Fund, and Heinz L. Krekeler.
In this respect we would also like to thank: the Ministère de la Recherche Scientifique and the Ministère des Eaux et Forêts in Côte d’Ivoire; the Institut Congolais pour la Conservation de la Nature and the Ministère de la Recherche Scientifique in DR Congo; the Forestry Development Authority in Liberia; the Direction des Eaux, Forêts, Chasses et de la Conservation des Sols, Senegal; and the Uganda National Council for Science and Technology, the Uganda Wildlife Authority, and the National Forestry Authority in Uganda.

Related Publications

F Sakib, T Burghardt. Visual Recognition of Great Ape Behaviours in the Wild. In press. Proc. 25th International Conference on Pattern Recognition (ICPR) Workshop on Visual Observation and Analysis of Vertebrate And Insect Behavior (VAIB), January 2021. (Arxiv PDF)

X Yang, M Mirmehdi, T Burghardt. Great Ape Detection in Challenging Jungle Camera Trap Footage via Attention-Based Spatial and Temporal Feature Blending. Computer Vision for Wildlife Conservation (CVWC) Workshop at IEEE International Conference of Computer Vision (ICCVW), pp. 255-262, October 2019. (DOI:10.1109/ICCVW.2019.00034), (CVF Version), (Arxiv PDF), (Dataset PanAfrican2019 Video), (Dataset PanAfrican2019 Annotations and Code)

Fin Identification of Great White Sharks

Tilo Burghardt, Ben Hughes

Recognising individuals repeatedly over time is a basic requirement for field-based ecology and related marine sciences. In scenarios where photographic capture is feasible and animals are visually unique, biometric computer vision offers a non-invasive identification paradigm.

In this line of work we developed the first fully automated biometric identification system for individual animals based on visual body contours, and applied the techniques to great white shark identification. The work was selected as one of the top ten BMVC’15 papers and was subsequently published in IJCV. It was carried out in collaboration with the NGO Save Our Seas Foundation (SOSF), which employed Ben Hughes to extend and apply the work; the system is now being exploited at large scale by SOSF.
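To make the contour-based idea concrete, here is a generic sketch: segment the fin, resample its boundary to a fixed length, normalise for position and scale, and match by nearest neighbour. The published system’s descriptor and matching differ substantially; this is an assumption-laden illustration only.

```python
# Generic contour-ID sketch in the spirit of the fin work: fixed-length,
# position- and scale-normalised boundary descriptors matched by nearest
# neighbour. The paper's actual descriptor and matcher differ.
import cv2
import numpy as np

def contour_descriptor(mask: np.ndarray, n_points: int = 128) -> np.ndarray:
    """Resample the largest contour of a binary fin mask to a fixed length."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    c = c[np.linspace(0, len(c) - 1, n_points).astype(int)]
    c -= c.mean(axis=0)                        # translation invariance
    return c / (np.linalg.norm(c) + 1e-9)      # scale invariance

def identify(query_mask: np.ndarray, gallery: dict) -> str:
    """Return the gallery ID whose fin contour is closest to the query."""
    q = contour_descriptor(query_mask)
    return min(gallery,
               key=lambda k: np.linalg.norm(q - contour_descriptor(gallery[k])))
```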

Related Publications

B Hughes, T Burghardt. Automated Visual Fin Identification of Individual Great White Sharks. International Journal of Computer Vision (IJCV), Vol 122, No 3, pp. 542-557, May 2017. (DOI:10.1007/s11263-016-0961-y), (Dataset FinsScholl2456)


B Hughes, T Burghardt. Automated Identification of Individual Great White Sharks from Unrestricted Fin Imagery. 26th British Machine Vision Conference (BMVC), pp. 92.1-92.14, ISBN 1-901725-53-7, BMVA Press, September 2015. (DOI:10.5244/C.29.92), (Dataset FinsScholl2456)


B Hughes, T Burghardt. Affinity Matting for Pixel-accurate Fin Shape Recovery from Great White Shark Imagery. Machine Vision of Animals and their Behaviour (MVAB), Workshop at BMVC, pp. 8.1-8.8. BMVA Press, September 2015. (DOI:10.5244/CW.29.MVAB.8), (Dataset FinsScholl2456)