Friesian Cattle Identification

Tilo Burghardt, Will Andrew, Jing Gao, Neill Campbell, Andrew Dowsey, S Hannuna, Colin Greatwood

Holstein Friesian cattle are the highest milk-yielding bovine breed; they are economically important and especially prevalent within the UK. Identification and traceability of these cattle are not only demanded by export markets and consumers, but are also legally mandated in many countries.

This line of work has shown that robust identification of individual Holstein Friesian cattle can be performed automatically and non-intrusively using computer vision pipelines built on deep neural networks. In essence, these systems biometrically interpret each animal's unique black-and-white coat pattern to identify it robustly; identification can take place, for instance, via fixed in-barn cameras or via drones in the field.
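As a rough illustration of such a pipeline, the sketch below (Python/PyTorch) matches a CNN embedding of a detected animal against an enrolled herd gallery using cosine similarity. The backbone, preprocessing and gallery structure are illustrative assumptions, not the published architecture.

# Hedged sketch: identify an individual Friesian from its coat pattern by
# comparing a CNN embedding of the detected animal against an enrolled herd.
# The detector output (a crop), embedding backbone and gallery layout are
# illustrative stand-ins, not the published pipeline.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

# Backbone repurposed as an embedding extractor (final classifier removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(crop):
    """Map an RGB crop of a cow's torso/back to a unit-length embedding."""
    x = preprocess(crop).unsqueeze(0)
    return F.normalize(backbone(x), dim=1)          # shape (1, 512)

@torch.no_grad()
def identify(crop, herd_gallery):
    """Return the enrolled ID whose stored embedding is most similar to the query."""
    query = embed(crop)
    names = list(herd_gallery.keys())
    gallery = torch.cat([herd_gallery[n] for n in names])   # (N, 512)
    sims = (query @ gallery.T).squeeze(0)            # cosine similarities
    return names[int(sims.argmax())], float(sims.max())

In practice the gallery would hold one or more averaged embeddings per enrolled animal, and a similarity threshold would reject animals not in the herd.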

This work is being conducted with the Farscope CDT, VILab and BVS.

Example Training Set of a Small Herd

Related Publications

W Andrew, C Greatwood, T Burghardt. Aerial Animal Biometrics: Individual Friesian Cattle Recovery and Visual Identification via an Autonomous UAV with Onboard Deep Inference. 32nd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 237-243, November 2019. (DOI:10.1109/IROS40897.2019.8968555), (Arxiv PDF), (CVF Extended Abstract at WACVW2020)


W Andrew, C Greatwood, T Burghardt. Deep Learning for Exploration and Recovery of Uncharted and Dynamic Targets from UAV-like Vision. 31st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1124-1131, October 2018. (DOI:10.1109/IROS.2018.8593751), (IEEE Version), (Dataset GTRF2018), (Video Summary)


W Andrew, C Greatwood, T Burghardt. Visual Localisation and Individual Identification of Holstein Friesian Cattle via Deep Learning. Visual Wildlife Monitoring (VWM) Workshop at the IEEE International Conference on Computer Vision (ICCVW), pp. 2850-2859, October 2017. (DOI:10.1109/ICCVW.2017.336), (Dataset FriesianCattle2017), (Dataset AerialCattle2017), (CVF Version)


W Andrew, S Hannuna, N Campbell, T Burghardt. Automatic Individual Holstein Friesian Cattle Identification via Selective Local Coat Pattern Matching in RGB-D Imagery. IEEE International Conference on Image Processing (ICIP), pp. 484-488, ISBN: 978-1-4673-9961-6, September 2016. (DOI:10.1109/ICIP.2016.7532404), (Dataset FriesianCattle2015)

Great Ape Facial Recognition and Identification

Tilo Burghardt, O Brookes, CA Brust, M Groenenberg, C Kaeding, HS Kuehl, M Manguette, J Denzler, AS Crunchant, M Egerer, A Loos, K Zuberbuehler, K Corogenes, V Leinert, L Kulik

Accurate monitoring tools are needed to evaluate the status of great ape populations and the effectiveness of conservation interventions. The interpretation of field photography and footage from inexpensive autonomous cameras can provide detailed information about species presence, abundance, behaviour, welfare and population dynamics.

Caption: The figure depicts row-by-row left-to-right the gorillas: Ayana, Kukuena, Kala, Touni, Afia, Kera from Bristol Zoo. The large image on the far right is of Jock. All gorillas have unique facial features.

Together with researchers from institutions including the Max Planck Institute for Evolutionary Anthropology, the University of Jena and Bristol Zoo Gardens, we have co-developed computer vision and deep learning systems for detecting great ape faces in imagery and for identifying individual animals from their unique facial features. These techniques can be applied in the wild using camera traps or manual photography, or in captive settings for studying welfare and behaviour. We are also working with the Gorilla Game Lab to apply this technology.
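The sketch below illustrates the closed-set identification step in simplified form: fine-tuning an off-the-shelf image classifier over cropped faces of known individuals. The backbone, class list and training loop are illustrative assumptions, not the published systems.

# Hedged sketch: closed-set identification of known zoo gorillas from cropped
# face images by fine-tuning an image classifier. The class names and training
# setup are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

individuals = ["Jock", "Ayana", "Kukuena", "Kala", "Touni", "Afia", "Kera"]

model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, len(individuals))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(face_batch, label_batch):
    """One gradient step on a batch of detected face crops and identity labels."""
    optimizer.zero_grad()
    loss = criterion(model(face_batch), label_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

A face detector would first localise and crop each face; the classifier then assigns one of the enrolled identities to the crop.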

Caption: Recognition example of Kera at Bristol Zoo.

Related Publications

O Brookes, T Burghardt. A Dataset and Application for Facial Recognition of Individual Gorillas in Zoo Environments. In press. Proc. 25th International Conference on Pattern Recognition (ICPR) Workshop on Visual Observation and Analysis of Vertebrate And Insect Behavior (VAIB), January 2021. (Arxiv PDF)


CA Brust, T Burghardt, M Groenenberg, C Kaeding, HS Kuehl, M Manguette, J Denzler. Towards Automated Visual Monitoring of Individual Gorillas in the Wild. Visual Wildlife Monitoring (VWM) Workshop at the IEEE International Conference on Computer Vision (ICCVW), pp. 2820-2830, October 2017. (DOI:10.1109/ICCVW.2017.333), (Dataset Gorilla2017), (CVF Version)


AS Crunchant, M Egerer, A Loos, T Burghardt, K Zuberbuehler, K Corogenes, V Leinert, L Kulik, HS Kuehl. Automated Face Detection for Occurrence and Occupancy Estimation in Chimpanzees. American Journal of Primatology. Vol 79, Issue 3, ISSN: 1098-2345. March 2017. (DOI 10.1002/ajp.22627)


HS Kuehl, T Burghardt. Animal Biometrics: Quantifying and Detecting Phenotypic Appearance. Trends in Ecology and Evolution, Vol 28, No 7, pp. 432-441, July 2013. (DOI:10.1016/j.tree.2013.02.013)

VI Lab researchers working with BT to enhance the experience of live events

5G Edge XR AR Dance

Press release, 8 October 2020

Computer vision experts from the University of Bristol are part of a new consortium, led by BT, that is driving the technology that will revolutionise the way we consume live events, from sports such as MotoGP and boxing to dance classes.

The 5G Edge-XR project, one of seven projects funded by the Department for Digital, Culture, Media & Sport (DCMS) as part of its 5G Create programme, aims to demonstrate exciting new ways that live sport and arts can be delivered remotely using immersive Virtual and Augmented Reality (VR/AR) technology combined with the new 5G network and advanced edge computing.

The 5G Edge-XR consortium, which is led by BT, also includes The GRID Factory, Condense Reality, Salsa Sound and Dance East. The project started in September 2020 and will run until March 2022, with a budget of over £4M, of which £1.5M comes from DCMS.

The University of Bristol team is based in the Visual Information Lab (VI Lab) and will be working primarily with Condense Reality (CR). The Bristol-based SME, whose CTO and CSO are both Bristol graduates, has developed a state-of-the-art volumetric capture system capable of generating live 3D models for AR applications. This brings the prospect of viewing live sports and dance classes in 3D in your own home, as though you were there in person.

The Bristol team is led by Professors Andrew Calway and David Bull, who will bring their expertise in computer vision and video coding to enhance the system developed by CR. They will be working with researchers from the BT Labs in Adastral Park, Suffolk, which is recognised for its global leadership in 5G research and standards development.

“This is a very exciting opportunity for the lab and our students, enabling us to engage in important research and knowledge and skills transfer to a local company, whilst at the same time being part of a national programme to showcase what is possible using AR and the new 5G technology,” said Professor Calway.

Professor Tim Whitley, BT’s MD of Applied Research, said: “The approaches we’re exploring with these teams in Bristol can transform how we experience sport, music, drama and education. With access to live cultural events and sport being limited by the ongoing pandemic, this project seems more relevant and urgent than ever.”

Nick Fellingham, CEO and Co-Founder of CR, added: “The 5G Edge-XR project is a real boost for us and working with the University will help us to push our technology to new levels, giving us that important edge in the market.”

End

Learning-optimal Deep Visual Compression

David Bull, Fan Zhang and Paul Hill

INTRODUCTION

Deep learning systems offer state-of-the-art performance in image analysis, outperforming conventional methods. Such systems offer huge potential across military and commercial domains, including human/target detection and recognition, and spatial localisation and mapping. However, their heavy computational requirements limit their exploitation in surveillance applications, particularly airborne ones, where low-power embedded processing and limited bandwidth are common constraints.

Our aim is to preserve deep learning performance whilst reducing processing and communication overheads, by developing learning-optimal compression schemes trained in conjunction with detection networks.
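A minimal sketch of such a task-aware objective is shown below, assuming a learned codec that reports its bitrate and a detector exposing a loss function. The interfaces and trade-off weights are hypothetical placeholders, not the scheme under development.

# Hedged sketch: train a learned image codec jointly with a detection network
# so that the compressed bitstream preserves what the detector needs.
# `codec` and `detector` are illustrative interfaces, not a specific library.
import torch

def task_aware_loss(codec, detector, images, targets,
                    lambda_rate=0.01, lambda_task=1.0):
    """Rate-distortion-detection loss for one training batch."""
    reconstructed, bits_per_pixel = codec(images)            # analysis/synthesis + entropy model
    distortion = torch.mean((reconstructed - images) ** 2)   # pixel-fidelity term
    task_loss = detector.loss(reconstructed, targets)        # detection loss on decoded frames
    return distortion + lambda_rate * bits_per_pixel + lambda_task * task_loss

Increasing lambda_rate favours lower bitrates; increasing lambda_task biases the codec towards retaining detection-relevant detail at the expense of pixel fidelity.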

ACKNOWLEDGEMENT

This work has been funded by the DASA Advanced Vision 2020 programme.


A Simulation Environment for Drone Cinematography

Fan Zhang, David Hall, Tao Xu, Stephen Boyle and David Bull

INTRODUCTION

Simulations of drone camera platforms based on actual environments have been identified as being useful for shot planning, training and rehearsal for both single and multiple drone operations. This is particularly relevant for live events, where there is only one opportunity to get it right on the day.

In this context, we present a workflow for the simulation of drone operations exploiting realistic background environments constructed within Unreal Engine 4 (UE4). Methods for environmental image capture, 3D reconstruction (photogrammetry) and the creation of foreground assets are presented along with a flexible and user-friendly simulation interface. Given the geographical location of the selected area and the camera parameters employed, the scanning strategy and its associated flight parameters are first determined for image capture. Source imagery can be extracted from virtual globe software or obtained through aerial photography of the scene (e.g. using drones). The latter case is clearly more time consuming but can provide enhanced detail, particularly where coverage of virtual globe software is limited.
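To illustrate this first step, the sketch below derives a simple nadir survey plan (altitude, photo spacing and flight-line spacing) from the camera parameters, a target ground sampling distance and the desired image overlaps. The formulae are standard photogrammetric rules of thumb, and the parameter values are illustrative rather than those used in the project.

# Hedged sketch: derive a lawnmower scanning plan for photogrammetric capture
# from camera parameters and a target ground sampling distance (GSD).
def flight_plan(sensor_width_mm, focal_length_mm, image_width_px, image_height_px,
                target_gsd_m, front_overlap=0.8, side_overlap=0.7):
    """Return flight altitude (m), photo spacing (m) and flight-line spacing (m)."""
    # Altitude that achieves the requested GSD (metres per pixel).
    altitude_m = target_gsd_m * focal_length_mm * image_width_px / sensor_width_mm
    footprint_w = target_gsd_m * image_width_px     # ground footprint across track
    footprint_h = target_gsd_m * image_height_px    # ground footprint along track
    photo_spacing = footprint_h * (1.0 - front_overlap)   # distance between exposures
    line_spacing = footprint_w * (1.0 - side_overlap)     # distance between flight lines
    return altitude_m, photo_spacing, line_spacing

# Example: 13.2 mm sensor, 8.8 mm lens, 5472x3648 images, 2 cm/pixel GSD
# gives roughly a 73 m flight altitude.
print(flight_plan(13.2, 8.8, 5472, 3648, 0.02))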

The captured images are then used to generate 3D background environment models employing photogrammetry software. The reconstructed 3D models are then imported into the simulation interface as background environment assets, together with appropriate foreground object models, as a basis for shot planning and rehearsal. The tool supports both free-flight and parameterisable standard shot types, along with programmable scenarios associated with foreground assets and event dynamics. It also supports the exporting of flight plans. Camera shots can also be designed to provide suitable coverage of any landmarks which need to appear in-shot. This simulation tool will contribute to enhanced productivity, improved safety (awareness and mitigations for crowds and buildings), improved confidence of operators and directors, and ultimately enhanced quality of viewer experience.
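As an example of a parameterisable standard shot, the sketch below samples an orbit around a landmark as a list of waypoints that could be exported as a flight plan. The waypoint format is an illustrative assumption, not the tool's actual export schema.

# Hedged sketch: one parameterisable "orbit" shot, sampled as waypoints with
# the camera kept facing the landmark at the circle's centre.
import math

def orbit_shot(centre_xy, radius_m, altitude_m, duration_s, sample_hz=1.0):
    """Yield (x, y, z, yaw_deg) waypoints circling a landmark."""
    n = int(duration_s * sample_hz)
    for i in range(n):
        angle = 2.0 * math.pi * i / n
        x = centre_xy[0] + radius_m * math.cos(angle)
        y = centre_xy[1] + radius_m * math.sin(angle)
        yaw = math.degrees(angle + math.pi)   # point the camera back at the centre
        yield (x, y, altitude_m, yaw % 360.0)

waypoints = list(orbit_shot(centre_xy=(0.0, 0.0), radius_m=30.0,
                            altitude_m=20.0, duration_s=60.0))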

DEMO VIDEOS

Boat.mp4

Cyclist.mp4

REFERENCES

[1] F. Zhang, D. Hall, T. Xu, S. Boyle and D. Bull, “A Simulation environment for drone cinematography”, IBC 2020.

[2] S. Boyle, M. Newton, F. Zhang and D. Bull, “Environment Capture and Simulation for UAV Cinematography Planning and Training”, EUSIPCO, 2019.

BVI-SR: A Study of Subjective Video Quality at Various Spatial Resolutions

Alex Mackin, Mariana Afonso, Fan Zhang, and David Bull

ABSTRACT

BVI-SR contains 24 unique video sequences at a range of spatial resolutions up to UHD-1 (3840×2160). These sequences were used as the basis for a large-scale subjective experiment exploring the relationship between visual quality and spatial resolution when using three distinct spatial adaptation filters (including a CNN-based super-resolution method). The results demonstrate that, while spatial resolution has a significant impact on mean opinion scores (MOS), there is no significant reduction in visual quality between UHD-1 and HD resolutions when the super-resolution method is used. A selection of image quality metrics was benchmarked against the subjective evaluations, and the analysis indicates that VIF offers the best performance.
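For context, benchmarking a metric against the subjective data amounts to correlating its predictions with the MOS values, as in the sketch below; the numbers shown are placeholders, not the BVI-SR results.

# Hedged sketch: correlate an objective quality metric (e.g. VIF) with mean
# opinion scores. The arrays are illustrative placeholders only.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([62.1, 55.3, 71.8, 48.0, 80.2])             # mean opinion scores per sequence
metric_scores = np.array([0.61, 0.54, 0.72, 0.47, 0.83])   # metric values per sequence

plcc, _ = pearsonr(metric_scores, mos)    # linear correlation (prediction accuracy)
srocc, _ = spearmanr(metric_scores, mos)  # rank correlation (prediction monotonicity)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")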

SOURCE SEQUENCES

DATABASE DOWNLOAD

[DOWNLOAD] subjective data, instructions and related files.

[DOWNLOAD] all videos from University of Bristol Research Data Storage Facility.

[DOWNLOAD] all videos from MS OneDrive. Please fill in a simple registration form to get access. The MS OneDrive verification code will be sent within two days of our receiving the form. Please note that the code may arrive in your spam folder.


If this content is used in a research publication, please give credit to the University of Bristol by referencing the following papers:

[1] A. Mackin, M. Afonso, F. Zhang and D. Bull, “A study of subjective video quality at various spatial resolutions”, IEEE ICIP, 2018.

[2] A. Mackin, M. Afonso, F. Zhang and D. Bull, “BVI-SR Database”, 2020.