Analysis of Coral using Deep Learning

Tilo Burghardt, Ainsley Rutterford, Leonardo Bertini, Erica J. Hendy, Kenneth Johnson, Rebecca Summerfield

Animal biometric systems can also be applied to the remains of living beings, such as coral skeletons. These fascinating structures can be scanned via 3D tomography, and the resulting image stacks made available to computer vision scientists.

In this project we investigated the efficacy of deep learning architectures such as U-Net for automatically finding and measuring important features in such image stacks. One such feature is the colony's growth bands, which our system extracts and approximates; they are shown superimposed on a coral slice in the image below. The project provides a first proof of concept that, given sufficiently clear samples, machines can perform comparably to humans in identifying the growth and calcification rates exposed by skeletal density-banding. This is a first step towards automating banding measurements and related analysis.
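To make the measurement step concrete, a minimal sketch of estimating growth-band spacing from a segmented slice follows. This is not the project's actual pipeline: the synthetic slice, the threshold-free onset detection, and the helper name `band_spacing` are illustrative assumptions; in practice the binary mask would come from the U-Net's output on a real tomography slice.

```python
import numpy as np

def band_spacing(mask_column):
    """Estimate mean spacing (in pixels) between density bands
    along one column of a binary segmentation mask."""
    # Indices where the mask switches from background (0) to band (1)
    onsets = np.flatnonzero(np.diff(mask_column.astype(int)) == 1)
    if len(onsets) < 2:
        return None
    return float(np.mean(np.diff(onsets)))

# Synthetic stand-in for one segmented tomography slice:
# a horizontal growth band every 10 pixels.
slice_mask = np.zeros((100, 100), dtype=np.uint8)
slice_mask[::10] = 1  # hypothetical segmented growth bands

spacing = band_spacing(slice_mask[:, 0])
print(spacing)  # mean vertical distance between band onsets, in pixels
```

Band spacing measured this way, combined with density values from the CT scan, is the kind of quantity from which growth and calcification rates can then be derived.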

Coral skeletal density-banding extracted via Deep Learning

This work was supported by the NERC GW4+ Doctoral Training Partnership and is part of 4D-REEF, a Marie Skłodowska-Curie Innovative Training Network funded by the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No. 813360.

Code Repository at https://github.com/ainsleyrutterford/deep-learning-coral-analysis

Aerial Animal Biometrics

Tilo Burghardt, Will Andrew, Colin Greatwood

Traditionally, animal biometric systems represent and detect the phenotypic appearance of species, individuals, behaviours, and morphological traits via passive camera settings, be these camera traps or other fixed camera installations.

In this line of work we implemented, for the first time, a full biometric pipeline onboard an autonomous UAV, giving the system the autonomous agency needed to adjust acquisition to demanding settings such as identifying individuals in freely moving herds of cattle.

In particular, we built a computationally enhanced M100 UAV platform with an on-board deep learning inference system for integrated computer vision and navigation, able to autonomously find and visually identify individual Holstein-Friesian cattle in freely moving herds by their coat patterns. We evaluated the performance of its components offline, and also online via real-world field tests of autonomous low-altitude flight in a farm environment. The proof-of-concept system is a successful step towards autonomous biometric identification of individual animals from the air, in open pasture environments and inside farms, for tagless AI support in farming and ecology.
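The identification stage can be thought of as matching a detected animal's coat-pattern embedding against a gallery of known individuals. The following is a hedged sketch of that matching idea only: the gallery, the 128-dimensional embeddings, the cosine-similarity matcher, and the names `identify`/`cow_3` are all illustrative assumptions, not the system's actual on-board implementation.

```python
import numpy as np

def identify(query, gallery, threshold=0.8):
    """Return the best-matching individual ID for a coat-pattern
    embedding, or None if no gallery entry is similar enough."""
    best_id, best_sim = None, threshold
    for animal_id, ref in gallery.items():
        # Cosine similarity between the query and reference embedding
        sim = float(np.dot(query, ref) /
                    (np.linalg.norm(query) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = animal_id, sim
    return best_id

rng = np.random.default_rng(0)
# Hypothetical gallery: one reference embedding per known individual
gallery = {f"cow_{i}": rng.normal(size=128) for i in range(5)}

# A detection crop whose embedding is a noisy copy of cow_3's pattern
query = gallery["cow_3"] + 0.1 * rng.normal(size=128)
print(identify(query, gallery))
```

The threshold lets the system report "unknown" rather than force a match, which matters when new animals may enter the herd.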

Caption: Overview of the Autonomous Cattle ID Drone System.
Caption: Video Summary of Cattle ID Drone System

This work was conducted in collaboration with Farscope CDT, VILab and BVS.

Related Publications

W Andrew, C Greatwood, T Burghardt. Aerial Animal Biometrics: Individual Friesian Cattle Recovery and Visual Identification via an Autonomous UAV with Onboard Deep Inference. 32nd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 237-243, November 2019. (DOI:10.1109/IROS40897.2019.8968555), (Arxiv PDF)


W Andrew, C Greatwood, T Burghardt. Deep Learning for Exploration and Recovery of Uncharted and Dynamic Targets from UAV-like Vision. 31st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1124-1131, October 2018. (DOI:10.1109/IROS.2018.8593751), (IEEE Version), (Dataset GTRF2018), (Video Summary)


W Andrew, C Greatwood, T Burghardt. Visual Localisation and Individual Identification of Holstein Friesian Cattle via Deep Learning. Visual Wildlife Monitoring (VWM) Workshop at IEEE International Conference of Computer Vision (ICCVW), pp. 2850-2859, October 2017. (DOI:10.1109/ICCVW.2017.336), (Dataset FriesianCattle2017), (Dataset AerialCattle2017), (CVF Version)


HS Kuehl, T Burghardt. Animal Biometrics: Quantifying and Detecting Phenotypic Appearance. Trends in Ecology and Evolution, Vol 28, No 7, pp. 432-441, July 2013.
(DOI:10.1016/j.tree.2013.02.013)

Great Ape Detection and Behaviour Recognition

Tilo Burghardt, X Yang, F Sakib, M Mirmehdi

The problem of visually identifying the presence and locations of animal species filmed in natural habitats is of central importance for automating the interpretation of large-scale camera trap imagery. This is particularly challenging in scenarios where lighting is difficult, backgrounds are non-static, and major occlusions, image noise, and animal camouflage effects occur: filming great apes via camera traps in jungle environments constitutes one such setting. Finding animals under these conditions and classifying their behaviours are important tasks for exploiting the filmed material for conservation or biological modelling.

Together with researchers from various institutions, including the Max Planck Institute for Evolutionary Anthropology, we developed deep learning systems for detecting great apes in challenging imagery and for identifying the behaviours they exhibit in these camera trap clips once apes have been recognised.
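The detector's temporal feature blending (see the ICCVW 2019 paper below) can be sketched roughly as attention-weighted averaging of per-frame features around a key frame. This toy NumPy version is our illustrative assumption, not the published architecture: the shapes, the dot-product attention, and the name `blend_temporal` are invented for exposition.

```python
import numpy as np

def blend_temporal(features, key_index):
    """Blend per-frame feature vectors into one descriptor for the
    key frame, weighting each frame by its softmaxed similarity
    to the key frame. features: (T, D) array of per-frame features."""
    key = features[key_index]
    # Attention logits: scaled dot-product similarity to the key frame
    logits = features @ key / np.sqrt(features.shape[1])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Weighted sum over time yields the blended descriptor
    return weights @ features

T, D = 5, 16
rng = np.random.default_rng(1)
feats = rng.normal(size=(T, D))
blended = blend_temporal(feats, key_index=2)
print(blended.shape)  # (16,)
```

Blending information from neighbouring frames in this spirit is what lets a detector cope with the occlusions, noise, and camouflage that defeat single-frame detection in jungle footage.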


Captions: (top) System overview for the camera trap detector. (middle and bottom) Behaviour recognition examples; the Pan African Programme owns the video copyrights.

Acknowledgements: Copyright of all images and videos resides with the Pan African Programme at the MPI. We thank the entire team of the Pan African Programme: ‘The Cultured Chimpanzee’ and its collaborators for allowing the use of their data in publishing our technical engineering work. Please contact the copyright holder, the Pan African Programme, at http://panafrican.eva.mpg.de to obtain the videos used. In particular, we thank: H Kuehl, C Boesch, M Arandjelovic, and P Dieguez. We would also like to thank: K Zuberbuehler, K Corogenes, E Normand, V Vergnes, A Meier, J Lapuente, D Dowd, S Jones, V Leinert, E Wessling, H Eshuis, K Langergraber, S Angedakin, S Marrocoli, K Dierks, T C Hicks, J Hart, K Lee, and M Murai.
Thanks also to the team at https://www.chimpandsee.org. The work that allowed for the collection of the dataset was funded by the Max Planck Society, the Max Planck Society Innovation Fund, and Heinz L. Krekeler.
In this respect we would also like to thank: the Ministère de la Recherche Scientifique and the Ministère des Eaux et Forêts in Côte d’Ivoire; the Institut Congolais pour la Conservation de la Nature and the Ministère de la Recherche Scientifique in DR Congo; the Forestry Development Authority in Liberia; the Direction des Eaux, Forêts, Chasses et de la Conservation des Sols in Senegal; and the Uganda National Council for Science and Technology, the Uganda Wildlife Authority, and the National Forestry Authority in Uganda.

Related Publications

F Sakib, T Burghardt. Visual Recognition of Great Ape Behaviours in the Wild. Proc. 25th International Conference on Pattern Recognition (ICPR) Workshop on Visual Observation and Analysis of Vertebrate And Insect Behavior (VAIB), January 2021, in press. (Arxiv PDF)

X Yang, M Mirmehdi, T Burghardt. Great Ape Detection in Challenging Jungle Camera Trap Footage via Attention-Based Spatial and Temporal Feature Blending. Computer Vision for Wildlife Conservation (CVWC) Workshop at IEEE International Conference of Computer Vision (ICCVW), pp. 255-262, October 2019. (DOI:10.1109/ICCVW.2019.00034), (CVF Version), (Arxiv PDF), (Dataset PanAfrican2019 Video), (Dataset PanAfrican2019 Annotations and Code)