The Visual Information Laboratory is part of the interdisciplinary Bristol Vision Institute (BVI), the home of vision science research in Bristol. Since its creation in 2008, BVI has stimulated research interaction and collaboration across science, engineering, the arts and medicine, with the aim of addressing grand challenges in vision research.
BVI runs a regular seminar series, which takes place during term time on alternate Fridays between 4.00 and 6.00pm. BVI Seminars feature both internal and external speakers with expertise in the field of vision. All talks are pitched to have broad appeal and are free to attend, and all members of VI Lab are encouraged to come along.
We will be re-launching the seminar series in Autumn 2022 and are aiming to host seminars both online and in-person.
The current programme of seminars can be found by visiting the BVI website.
- BVISS: Action localization without spatiotemporal supervision - 17.10.2017
Dr Cees Snoek – Faculty of Science, University of Amsterdam
Abstract
Understanding what activity is happening where and when in video content is crucial for video computing, communication and intelligence. In the literature, the common tactic for action localization is to learn a deep classifier on hard-to-obtain spatiotemporal annotations and to apply it at test time to an exhaustive set of spatiotemporal candidate locations. Annotating the spatiotemporal extent of an action in training video is not only cumbersome, tedious, and error-prone; it also does not scale beyond a handful of action categories. In this presentation, I will highlight recent work from my team at the University of Amsterdam addressing the challenging problem of action localization in video without the need for spatiotemporal supervision. We consider three possible solution paths: 1) the first relies on intuitive user interaction with points, 2) the second infers the relevant spatiotemporal location from an action class label, and finally, 3) the third derives a spatiotemporal action location from off-the-shelf object detectors and text corpora only. I will discuss the benefits and drawbacks of these three solutions on common action localization datasets, compare with alternatives that depend on spatiotemporal supervision, and highlight the potential for future work.
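As a concrete illustration of the third path, here is a minimal sketch of zero-supervision action localization: candidate tubes are scored purely by how semantically close their detected objects are to the action name, using word embeddings learned from a text corpus. The data layout, the cosine scoring rule and all function names are illustrative assumptions, not the speaker's exact formulation.

```python
# Hypothetical sketch: score spatiotemporal candidate tubes for an action
# using only off-the-shelf object detections and corpus word embeddings.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def score_tube(detections, action_words, embed):
    """detections: (object_label, confidence) pairs found inside one tube;
    action_words: words naming the action, e.g. ["riding", "horse"];
    embed: word -> vector mapping learned from a text corpus."""
    action_vec = np.mean([embed[w] for w in action_words], axis=0)
    # Weight each object's semantic affinity to the action by how
    # confidently the detector saw it inside the tube.
    return sum(conf * cosine(embed[label], action_vec)
               for label, conf in detections)

def localize(candidate_tubes, action_words, embed):
    # Return the index of the tube whose objects best match the action.
    scores = [score_tube(t, action_words, embed) for t in candidate_tubes]
    return int(np.argmax(scores))
```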
Biography
Cees Snoek received the M.Sc. degree in business information systems in 2000 and the Ph.D. degree in computer science in 2005, both from the University of Amsterdam, The Netherlands. He is currently director of the QUVA Lab, the joint research lab of Qualcomm and the University of Amsterdam on deep learning and computer vision. He is also a principal engineer/manager at Qualcomm Research Netherlands and an associate professor at the University of Amsterdam. His research interests focus on video and image recognition. He is the recipient of a Veni Talent Award, a Fulbright Junior Scholarship, a Vidi Talent Award, and The Netherlands Prize for Computer Science Research.
http://www.uva.nl/en/profile/s/n/c.g.m.snoek/c.g.m.snoek.html
- BVISS: Learning to synthesize signals and images - 17.10.2017
Dr Sotirios Tsaftaris – School of Engineering, Edinburgh University
Abstract: An increasing population and climate change put pressure on several societally important domains. Health costs are increasing, and at the same time feeding the world is becoming a challenge. Imaging (and sensing) is central to furthering our understanding of biology, not only in its diagnostic capacity but also in phenotyping variation. This creates the need for several analysis tasks (detection, segmentation, classification, etc.) on the basis of static or dynamic imaging data. Evaluating and designing algorithms that address these tasks relies heavily on real annotated data of sufficient quality and quantity. Synthetically generating data with ground truth can be a useful alternative. In this seminar I will motivate this using two application domains: medical imaging and plant phenotyping. I will present solutions that learn, in a data-driven fashion, data distributions and mappings that generate or synthesize data using dictionaries or deep neural networks. Our approaches use structured learning and multiple modalities to learn representations with desirable invariance (and covariance) properties. Problems of cross-modal synthesis in MRI and CT are presented, as well as the ability to conditionally generate images of plants with a specific topological arrangement.
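To make the dictionary-based route concrete, the toy sketch below (an illustration under an assumed ridge-regression coding scheme, not the speaker's published method) shows cross-modal synthesis with coupled dictionaries: registered patch pairs from two modalities share one coefficient vector, so codes inferred from a source-modality patch can reconstruct the corresponding target-modality patch.

```python
# Hypothetical sketch: cross-modal synthesis with coupled dictionaries.
import numpy as np

rng = np.random.default_rng(0)

def train_coupled(X_src, X_tgt, n_atoms=32, lam=0.1, iters=20):
    """X_src, X_tgt: (n_patches, dim) matrices of registered patch pairs."""
    D_src = rng.standard_normal((n_atoms, X_src.shape[1]))
    D_tgt = rng.standard_normal((n_atoms, X_tgt.shape[1]))
    for _ in range(iters):
        # Shared codes from the concatenated modalities (ridge regression).
        D = np.hstack([D_src, D_tgt])
        X = np.hstack([X_src, X_tgt])
        A = X @ D.T @ np.linalg.inv(D @ D.T + lam * np.eye(n_atoms))
        # Dictionary updates given the shared codes.
        G = np.linalg.inv(A.T @ A + lam * np.eye(n_atoms))
        D_src = G @ A.T @ X_src
        D_tgt = G @ A.T @ X_tgt
    return D_src, D_tgt

def synthesize(x_src, D_src, D_tgt, lam=0.1):
    # Code the source patch on the source dictionary, then decode with
    # the target dictionary to hallucinate the other modality.
    n = D_src.shape[0]
    a = np.linalg.solve(D_src @ D_src.T + lam * np.eye(n), D_src @ x_src)
    return D_tgt.T @ a
```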
Bio: Dr. Sotirios A. Tsaftaris obtained his PhD and MSc degrees in Electrical Engineering and Computer Science (EECS) from Northwestern University, USA, in 2006 and 2003 respectively. He obtained his Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki, Greece. Currently, he is a Chancellor's Fellow (Senior Lecturer grade) in the School of Engineering at the University of Edinburgh (UK). He is also a Turing Fellow with the Alan Turing Institute.
From 2006 to 2011, he was a research assistant professor with the Departments of EECS and Radiology, Northwestern University, USA. From 2011 to 2015, he was with the IMT Institute for Advanced Studies, Lucca, serving as Director of the Pattern Recognition and Image Analysis Unit.
He is an Associate Editor for the IEEE Journal of Biomedical and Health Informatics and for Digital Signal Processing – Journal (Elsevier). He has organized specialized workshops at ECCV (2014), BMVC (2015), ICCV (2017) and MICCAI (2016,2017), and served as Area Chair for IEEE ICCV (2017) and VCIP (2015). He has also served as guest editor (Machine Vision and Applications; IEEE Transactions on Medical Imaging; and Digital Signal Processing – Software X).
He has twice received the Magna Cum Laude Award from the International Society for Magnetic Resonance in Medicine (ISMRM), in 2012 and 2014, and was a finalist for the Early Career Award from the Society for Cardiovascular Magnetic Resonance (SCMR) in 2011.
He has authored more than 100 journal and conference papers particularly in interdisciplinary fields and his work is (or has been) supported by the National Institutes of Health (USA), EPSRC & BBSRC (UK), the European Union, the Italian Government, and several non-profits and industrial partners.
His research interests are in machine learning, image analysis (medical image computing), image processing, and distributed computing.
Dr. Tsaftaris is a Murphy, Onassis, and Marie Curie Fellow. He is also a member of the IEEE, ISMRM, SCMR, and IAPR.
Additional information:
http://tsaftaris.com or https://www.eng.ed.ac.uk/about/people/dr-sotirios-tsaftaris
- BVISS: Augmenting vision, the easy and the hard way - 09.10.2017
Dr Stephen Hicks – Oxford University – Research Fellow in Neuroscience and Visual Prosthetics, Nuffield Department of Clinical Neurosciences
Mobile computing, augmented reality, deep learning: consumer-grade devices are coming of age with a dazzling array of technologies and potential. While tech giants search for killer apps, there are sectors of society who have well-defined needs that could be met with aspects of these technologies. In many high-profile cases, people with sensory or motor deficits have pioneered the use of mobile augmenting technologies that the rest of us are only just becoming aware of. Bionic limbs, cochlear implants and retinal prosthetics have moved from the highly experimental into the FDA-approved. The goal of my work has been to develop low-cost and non-invasive vision enhancement systems that not only provide functional benefits to those with poor sight, but are also good-looking enough to break through a social barrier often raised against enabling technologies. In my talk I will give an overview of relevant vision enhancement technologies and my group's work developing and validating smart glasses that not only boost an image, but can also provide autonomous and semi-intelligent descriptions of the world using machine learning.
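As a rough idea of what "boosting an image" for residual sight can involve, here is a minimal unsharp-masking sketch; it is an assumption about the general kind of processing such glasses might perform, not OxSight's algorithm.

```python
# Hypothetical sketch: exaggerate local contrast so residual sight can use it.
import numpy as np
from scipy import ndimage

def boost_image(img_grey, edge_gain=2.0):
    """img_grey: 2-D array in [0, 1]. Returns an edge-enhanced image."""
    blurred = ndimage.gaussian_filter(img_grey, sigma=2)
    detail = img_grey - blurred              # unsharp-mask detail layer
    return np.clip(img_grey + edge_gain * detail, 0.0, 1.0)
```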
Biography
Stephen wears two hats that look quite similar. He is a Research Lecturer in neuroscience and visual prosthetics at the University of Oxford and runs a small team developing and testing wearable displays to boost vision for people with severe visual impairments. He is also a Founding Director and Technical Lead at a London-based startup called OxSight, where he manages a small team developing commercially feasible smart glasses to boost the vision and quality of life of blind and partially sighted people. Stephen takes a multidisciplinary approach that combines machine learning and computer vision with novel cameras and displays to form images that are easy to see and understand for people with poor vision. He was an Enterprise Fellow of the Royal Academy of Engineering, received early career awards such as the Royal Society Brian Mercer Award for Innovation, and led the team that won the 2014 Google Global Impact Challenge. He resides in London and desperately holds onto his Australian accent.
- BVI Seminar, Filipe Cristino, Bangor University - 15.06.2017
Seminar title, abstract and biography to be announced
- BVI Seminar: Attentional selection of colour is determined by both cone-based and hue-based representations - 07.06.2017
Jasna Martinovic, University of Aberdeen
What is the nature of representations that sustain attention to colour? In other words, is attention to colour predominantly determined by the low-level, cone-opponent chromatic mechanisms established at subcortical processing stages, or by the multiple narrowly-tuned higher-level chromatic mechanisms established in the cortex? These questions remain unresolved in spite of decades of research. In an attempt to address this problem, we conducted a series of electroencephalographic (EEG) studies that examined cone-opponent and hue-based contributions to colour selection. We used a feature-based attention paradigm, in which spatially overlapping, flickering random dot kinematograms (RDKs) of different colours are presented and participants are asked to selectively attend to a colour in order to detect brief, coherent motion intervals, ignoring any such events in the unattended colours. Each flickering colour drives a separate steady-state visual evoked potential (SSVEP), a response whose amplitude increases when that colour is attended. In our studies, behavioural performance and SSVEPs are thus taken as indicators of selective attention. The first study demonstrated that at least some of the observed cone-opponent attentional effects can be explained by asymmetric summation of signals from different cone-opponent channels with luminance at early cortical sites (V1-V3). This indicates that there might be cone-mechanism-specific optimal contrast ranges for combining colour and luminance signals. The second study demonstrated that hue-based contributions can also be observed concurrently with the aforementioned low-level, cone-opponent effects. Proximity and linear separability of targets and distractors in a hue-based colour space was shown to be another determinant of effective colour selection. In conclusion, attention to colour should be examined across the full range of chromoluminance space, task space and dependent measure space. Current evidence indicates that multiple representations contribute to selection of colour, and that depending on the stimulus attributes, task demands, and the attributes of the applied measures, it is possible to observe a spectrum of effects ranging from purely cone-opponent to largely hue-based.
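For readers unfamiliar with frequency tagging, the sketch below shows how SSVEP amplitudes can in principle be read out: each colour's dots flicker at their own rate, and the EEG amplitude at that frequency indexes attention to that colour. The channel, analysis window and tag frequencies are illustrative assumptions, not the parameters of these studies.

```python
# Minimal sketch: recover frequency-tagged SSVEP amplitudes with an FFT.
import numpy as np

def ssvep_amplitudes(eeg, fs, tag_freqs):
    """eeg: 1-D signal from an occipital channel; fs: sampling rate (Hz);
    tag_freqs: flicker frequencies, e.g. {'red': 10.0, 'green': 12.0}."""
    n = len(eeg)
    # Windowed amplitude spectrum, scaled to signal amplitude.
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    # Amplitude at the bin nearest each tagging frequency.
    return {name: spectrum[np.argmin(np.abs(freqs - f))]
            for name, f in tag_freqs.items()}

# Comparing amplitudes between attend-red and attend-green blocks would
# then index the attentional gain on each colour's SSVEP.
```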
Biography:
Dr Jasna Martinovic is a senior lecturer at the University of Aberdeen. She completed her PhD at the University of Leipzig, followed by postdoctoral work at the University of Liverpool. Her research investigates how colour and luminance signals feed into mid- and higher-level stages of visual perception, as well as how they are sampled by visual attention.
- BVI Seminar: Eye Movements in Low and Normally Sighted Vision - 03.05.2017
Brian Sullivan, University of Bristol, School of Experimental Psychology
I will present two studies examining human eye movements and discuss my role at the University of Bristol. The first study concerns patients with central vision loss, who often adopt a preferred retinal locus (PRL), a region in peripheral vision used for fixation as an alternative to the damaged fovea. A common clinical approach to assessing the PRL is to record monocular fixation behavior on a small stimulus using a scanning laser ophthalmoscope. Using a combination of visual field tests and eye tracking, we tested how well this 'fixational PRL' generalizes to PRL use during a pointing task. Our results suggest that measures of the fixational PRL do not sufficiently capture the PRL variation exhibited while pointing, which can inform patient therapy and future research. In the second study, eye movements from eight participants were recorded with a mobile eye tracker. Participants performed five everyday tasks: making a sandwich, transcribing a document, walking in an office and a city street, and playing catch with a flying disc. Using only saccadic direction and amplitude time series data, we trained a hidden Markov model for each task and were then able to classify unlabeled data, as sketched below. Lastly, I will briefly describe my role in the GLANCE Project at the University of Bristol. We are an interdisciplinary group in the departments of Experimental Psychology and Computer Science, sponsored by the EPSRC to make a wearable assistive device that would monitor behavior in tasks and present short video clips to guide behavior in real time.
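The classification scheme of the second study can be illustrated with a short sketch: one hidden Markov model per task, fitted to saccade amplitude and direction sequences, with unlabeled data assigned to the model that gives it the highest likelihood. The sketch assumes the hmmlearn package and a 2-D feature layout; the state count and covariance structure are guesses.

```python
# Hypothetical sketch: classify everyday tasks from saccade statistics
# by fitting one Gaussian HMM per task and comparing likelihoods.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_task_models(task_sequences, n_states=4):
    """task_sequences: {task_name: (n_saccades, 2) array with rows of
    [saccade_amplitude, saccade_direction] recorded during that task}."""
    models = {}
    for task, X in task_sequences.items():
        m = GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X)
        models[task] = m
    return models

def classify(models, X):
    # Assign the unlabeled saccade sequence to the most likely task model.
    return max(models, key=lambda task: models[task].score(X))
```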
- BVI Seminar: Exogenous visual attention and the primary visual cortex - 10.03.2017
Zhaoping Li, University College London
I will present a theory that the primary visual cortex creates a saliency map to guide attention exogenously, and show how this explains and predicts experimental data in physiology and in visual behavior. Implications of this theory will be discussed.
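One widely cited component of this theory is a max rule: the saliency of a location is the maximum response across V1 cells tuned to different features at that location, so a single odd-one-out feature can make a location pop out. A toy sketch under that reading (a simplification, not the full model):

```python
# Toy sketch of the V1 saliency max rule.
import numpy as np

def v1_saliency(feature_maps):
    """feature_maps: (n_features, H, W) responses of cells tuned to
    different orientations/colours, after contextual surround effects.
    Returns an (H, W) saliency map guiding exogenous attention."""
    return np.max(feature_maps, axis=0)
```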
Biography: I obtained my B.S. in Physics in 1984 from Fudan University, Shanghai, and my Ph.D. in Physics in 1989 from the California Institute of Technology. I was a postdoctoral researcher at Fermi National Laboratory in Batavia, Illinois, at the Institute for Advanced Study in Princeton, New Jersey, and at Rockefeller University in New York. I have been a faculty member in Computer Science at the Hong Kong University of Science and Technology, and was a visiting scientist at various academic institutions. In 1998, I helped to found the Gatsby Computational Neuroscience Unit at University College London. Currently, I am a Professor of Computational Neuroscience in the Department of Computer Science at University College London. My research experience over the years ranges from high energy physics to neurophysiology and marine biology, with most of my work devoted to understanding brain function in vision, olfaction, and nonlinear neural dynamics. In the late 90s and early 2000s, I proposed a theory (which is being extensively tested) that the primary visual cortex in the primate brain creates a saliency map to automatically attract visual attention to salient visual locations. I am the author of Understanding Vision: Theory, Models, and Data, Oxford University Press, 2014.
- BVI Seminar: Diverted by dazzle: testing the ‘motion dazzle’ hypothesis - 10.03.2017
Anna Hughes, University College London
Abstract: ‘Motion dazzle’ is the hypothesis that certain types of patterns, such as high contrast stripes and zigzags, can cause misperceptions of the speed and direction of moving targets. Motion dazzle is relevant both to ecological questions, including why striped patterning may have evolved in animals such as zebras, and to camouflage design for human purposes. My work addresses the question of whether motion dazzle exists, and what mechanisms may underlie it if it does. I use an interdisciplinary approach, combining techniques from psychophysics and behavioural ecology in testing human subjects. I present data showing that targets with striped markings are among the hardest for humans to ‘capture’ in a touch screen task, but that these effects may depend upon factors such as target orientation and whether multiple targets are present. I also present psychophysical data showing that subjects make small direction judgement errors that depend upon the orientation of the striped pattern on a target relative to the direction of motion, suggesting a possible mechanism for motion dazzle effects. Finally, I discuss future directions for my research, including a new project involving large scale data collection using citizen science methods.
Biography: Anna carried out her PhD at the University of Cambridge under the supervision of Dr David Tolhurst and Dr Martin Stevens, focusing on visual motion perception and how this is affected by camouflage patterns. She now works as a Teaching Fellow in Visual Perception at University College London.
- BVI Seminar: Visual concealment as foreign policy: camouflage as signaling friends and foes - 10.03.2017
László Tálas, Camo Lab, University of Bristol
Abstract: Why do armies operating in the same environment (e.g. temperate woodland) wear markedly different dress? The primary function of military camouflage is generally understood to be concealment; however, the vast diversity of camouflage patterns (over 600 in the past century) suggests additional design factors. One hypothesis is that camouflage patterns can also act as signals of alliance, aiding soldiers in distinguishing friend from foe. On the other hand, newly independent states can assert their identity by issuing distinctive camouflage. In both cases designs must remain constrained enough to function as adequate concealment. The aim of this talk is to demonstrate how a phylogenetic model can be useful for testing these hypotheses. In order to quantify similarity between patterns, I used methods from computer vision to compare their texture and colour. Countries' camouflage patterns can be represented as phylogenies, since temporal information (e.g. when patterns were issued) is readily available. Combining computer vision-derived metrics and phylogenetic analysis, I show how certain “design drifts” can be detected throughout the history of camouflage uniforms.
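As an illustration of the kind of computer-vision metric involved (a simplified stand-in, not the speaker's pipeline), camouflage patterns can be compared through colour histograms and multi-scale texture statistics, with the resulting distances fed into a phylogenetic analysis:

```python
# Hypothetical sketch: compare camouflage patterns via colour and texture.
import numpy as np
from scipy import ndimage

def pattern_features(img, n_bins=8):
    """img: (H, W, 3) RGB array with values in [0, 1]. Returns a feature
    vector combining a colour histogram with multi-scale texture energy."""
    colour = np.histogramdd(img.reshape(-1, 3),
                            bins=(n_bins,) * 3, range=[(0, 1)] * 3)[0].ravel()
    colour = colour / colour.sum()
    grey = img.mean(axis=2)
    # Texture energy: intensity spread at several Gaussian blur scales.
    texture = [ndimage.gaussian_filter(grey, s).std() for s in (1, 2, 4, 8)]
    return np.concatenate([colour, texture])

def pattern_distance(img_a, img_b):
    # Smaller distances suggest more similar camouflage designs.
    return float(np.linalg.norm(pattern_features(img_a) - pattern_features(img_b)))
```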
Biography: László completed his PhD at the University of Bristol, with his research focusing on the cultural evolution of camouflage patterns, supervised by Innes Cuthill, Dave Bull, and Gavin Thomas. He now works as a postdoctoral research associate in the Camo Lab at the University of Bristol.
- BVI Seminar: Psychophysical probes into spatial vision: you ain’t seen nothin’ yet - 10.03.2017
Tim Meese, School of Life and Health Sciences, Aston University.
Everyone knows what cosmologists do: they gaze out into the sky to see the secrets of what’s out there, matching observations with theory to understand how the universe came about. Visual psychophysicists are motivated by a similar sense of wonder; but the universe they want to understand lives inside the biological computer of the human brain. By using carefully designed visual stimuli in controlled laboratory environments, it turns out that this non-invasive research method can provide an incisive view of the visual mechanisms that lie within us, whether we are conscious of them or not! I shall show how psychophysics has been used to see into the very early stages of the visual processing stream, where the ‘atoms’ of vision are very different from the rich visual experience that they deliver when we open our eyes, and where competing theories have been pitched against one another, helping us to understand how we see.
Bio: Having started out as a British Telecommunications engineer, Tim switched direction in the late 80s when he pursued a degree in Computer Science and Psychology at the University of Newcastle-upon-Tyne. He then did a PhD in the Psychology Department at the University of Bristol, held a postdoc position at the University of Birmingham, then moved to Aston University in 1996, where he is now Professor of Vision Science. His duties there include teaching Optometry students that we see with our brains, not our eyes. He has held funding from the EPSRC, BBSRC, Leverhulme Trust and the Wellcome Trust, is the Vice-Chairman of the Applied Vision Association, a chief editor of the sister journals Perception and i-Perception, the director of the Centre for Vision and Hearing Research and the director of the Aston Laboratory for Immersive Virtual Environments (ALIVE). Amongst other things, this new venture aims to use virtual reality to help bridge the gap between psychophysics and social psychology: visual perception provides the stage, and social perception the players upon it.
- BVI Seminar: New technologies for improving the representation of human vision - 20.01.2017
Robert Pepperell, Cardiff Metropolitan University
Abstract: What is the best way to represent the three-dimensional world we see on a two-dimensional surface? For several hundred years there was basically one answer to this question: use linear perspective. Linear perspective is a method of mapping rays of light onto an image plane in a way that corresponds to well-understood laws of geometry and optics. It now underpins nearly all existing imaging technology. But even the artists who first developed linear perspective understood that it had severe limitations when it comes to representing the wide-angled, binocular vision most humans enjoy. As has been pointed out many times, linear perspectival images viewed under everyday conditions do not replicate our visual experience of the world very well. Over the last two centuries various alternative methods have been proposed, normally by artists, which are claimed to better represent human visual perception. These are usually based on subjective judgments about how things appear to human observers rather than the objective behaviour of light. Such methods, while interesting historically, were of limited utility to image-makers as they were even more complicated to apply than the rules of linear perspective. However, the current state of computer graphics allows us to manipulate image data in powerful ways, and to design systems that incorporate some of the subjective properties of vision that it was not previously practical to model. In this talk I will outline the approach my lab is taking to designing new technologies for representing human vision that improve on what can currently be achieved with existing linear perspective-based technology. This will include some discussion of the art-historical background to our work, the scientific data we have gathered, and demonstrations of some of the technological solutions we have developed.
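For reference, linear perspective in its simplest form is the pinhole mapping (x, y) = f(X/Z, Y/Z). The sketch below shows why wide-angle linear-perspective images stretch toward the edges; the numbers are purely illustrative.

```python
# Pinhole (linear perspective) projection of a 3-D point onto the image plane.
import numpy as np

def project_pinhole(point, focal_length=1.0):
    """point: (X, Y, Z) in camera coordinates with Z > 0 toward the scene.
    Returns image-plane coordinates (x, y) = f * (X / Z, Y / Z)."""
    X, Y, Z = point
    return focal_length * X / Z, focal_length * Y / Z

# A point 80 degrees off-axis lands far out on the image plane, which is
# why wide-angle linear-perspective images look stretched at the edges.
print(project_pinhole((np.tan(np.radians(80)), 0.0, 1.0)))  # about (5.67, 0.0)
```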
Biography: Robert Pepperell PhD is Professor of Fine Art at Cardiff School of Art and leader of Fovolab at Cardiff Metropolitan University. As an artist he has exhibited at Ars Electronica, the Barbican Gallery, Glasgow Gallery of Modern Art, the ICA, and the Millennium Dome. As an academic he has published widely in the fields of art history, philosophy of mind, artificial intelligence, neuroscience, and perceptual psychology.
- BVI Seminar: Marked Point Processes for Object Detection and Tracking in High Resolution Images: Applications to Remote Sensing and Biology - 24.11.2016
Josiane Zerubia, INRIA, France
Abstract: In this talk, we combine methods from probability theory and stochastic geometry to put forward new solutions to the multiple object detection and tracking problem in high resolution remotely sensed image sequences. First, we present a spatial marked point process model to detect a pre-defined class of objects based on their visual and geometric characteristics. Then, we extend this model to the temporal domain and create a framework based on spatio-temporal marked point process models to jointly detect and track multiple objects in image sequences. We propose the use of simple parametric shapes to describe the appearance of these objects. We build new, dedicated energy-based models consisting of several terms that take into account both the image evidence and physical constraints such as object dynamics, track persistence and mutual exclusion. We construct a suitable optimization scheme that allows us to find strong local minima of the proposed highly non-convex energy. As the simulation of such models comes with a high computational cost, we turn our attention to recent filter implementations for multiple object tracking, which are known to be less computationally expensive. We propose a hybrid sampler combining the Kalman filter with standard Reversible Jump MCMC. High performance computing techniques are also used to increase the computational efficiency of our method. We provide an analysis of the proposed framework, which yields very good detection and tracking performance at the price of increased model complexity. Tests have been conducted on both high resolution satellite and microscopy image sequences.
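A heavily simplified sketch of the energy-based idea follows: objects are circles with a radius mark, the energy combines an image data term with a mutual-exclusion penalty, and a plain Metropolis birth/death sampler stands in for the full Reversible Jump MCMC and Kalman machinery described in the talk. All terms and constants are illustrative assumptions.

```python
# Toy sketch: marked point process detection via a birth/death sampler.
import numpy as np

rng = np.random.default_rng(1)

def energy(config, data_term):
    """config: list of (x, y, r) circles; data_term(obj) is negative where
    the image supports an object. Pairwise term penalises mutual exclusion."""
    e = sum(data_term(o) for o in config)
    for i, (x1, y1, r1) in enumerate(config):
        for x2, y2, r2 in config[i + 1:]:
            if np.hypot(x1 - x2, y1 - y2) < r1 + r2:  # overlapping circles
                e += 10.0
    return e

def birth_death(data_term, bounds, n_iter=5000, T=1.0):
    config = []
    for _ in range(n_iter):
        proposal = list(config)
        if not config or rng.random() < 0.5:  # birth: add a random object
            proposal.append((rng.uniform(0, bounds[0]),
                             rng.uniform(0, bounds[1]), rng.uniform(2, 10)))
        else:                                 # death: remove a random object
            proposal.pop(rng.integers(len(config)))
        dE = energy(proposal, data_term) - energy(config, data_term)
        if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
            config = proposal
    return config
```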
Biography: Josiane Zerubia has been a permanent research scientist at INRIA since 1989 and director of research since July 1995. She was head of the PASTIS remote sensing laboratory (INRIA Sophia-Antipolis) from mid-1995 to 1997 and of the Ariana research group (INRIA/CNRS/University of Nice), which worked on inverse problems in remote sensing and biological imaging, from 1998 to 2011. Since January 2012, she has been head of the Ayin research group (INRIA-SAM), dedicated to models of spatio-temporal structure for high resolution image processing with a focus on remote sensing and skincare imaging. She has been a professor at SUPAERO (ISAE) in Toulouse since 1999. Before that, she was with the Signal and Image Processing Institute of the University of Southern California (USC) in Los Angeles as a postdoc. She also worked as a researcher for the LASSY (University of Nice/CNRS) from 1984 to 1988 and in the Research Laboratory of Hewlett Packard in France and in Palo Alto (CA) from 1982 to 1984. She received the MSc degree from the Department of Electrical Engineering at ENSIEG, Grenoble, France in 1981, and the Doctor of Engineering degree, her PhD and her 'Habilitation' in 1986, 1988 and 1994 respectively, all from the University of Nice Sophia-Antipolis, France. She is a Fellow of the IEEE (2003- ) and an IEEE SP Society Distinguished Lecturer (2016-2017). She was a member of the IEEE IMDSP TC (SP Society) from 1997 to 2003, of the IEEE BISP TC (SP Society) from 2004 to 2012 and of the IVMSP TC (SP Society) from 2008 to 2013. She was associate editor of IEEE Trans. on IP from 1998 to 2002, area editor of IEEE Trans. on IP from 2003 to 2006, guest co-editor of a special issue of IEEE Trans. on PAMI in 2003, member of the editorial board of IJCV from 2004 to March 2013 and member-at-large of the Board of Governors of the IEEE SP Society from 2002 to 2004. She has also been a member of the editorial board of the French Society for Photogrammetry and Remote Sensing (SFPT) since 1998 and of Foundations and Trends in Signal Processing since 2007, and member-at-large of the Board of Governors of the SFPT since September 2014. She has been associate editor of the online resource Earthzine (IEEE CEO and GEOSS) since 2006. She was co-chair of two workshops on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR'01, Sophia Antipolis, France, and EMMCVPR'03, Lisbon, Portugal), co-chair of a workshop on Image Processing and Related Mathematical Fields (IPRM'02, Moscow, Russia), technical program chair of a workshop on Photogrammetry and Remote Sensing for Urban Areas (Marne La Vallée, France, 2003), co-chair of the special sessions at IEEE ICASSP 2006 (Toulouse, France) and IEEE ISBI 2008 (Paris, France), publicity chair of IEEE ICIP 2011 (Brussels, Belgium), tutorial co-chair of IEEE ICIP 2014 (Paris, France), general co-chair of the EarthVision workshop at IEEE CVPR 2015 (Boston, USA) and a member of the organizing committee and plenary talk co-chair of IEEE-EURASIP EUSIPCO 2015 (Nice, France). She also organized and chaired an international workshop on Stochastic Geometry and Big Data at Sophia Antipolis, France, in November 2015. She is part of the organizing committees of both the GRETSI 2017 symposium (Juan les Pins, France) and the ISPRS 2020 congress (Nice, France).
Her main research interest is in image processing using probabilistic models. She also works on parameter estimation, statistical learning and optimization techniques.
- BVI Seminar: The effectiveness of camouflage; predator learning and new modelling approaches - 31.10.2016
Jolyon Troscianko, Exeter University
Abstract: Evading detection is crucial for the survival of many animals, and a number of different means of achieving camouflage have been discovered. I will discuss my recent work investigating whether some types of camouflage are more easily learnt than others. If predators learn to find one type of prey more efficiently than others over successive encounters, this could create frequency-dependent selection, and help to explain the diversity of camouflage strategies we see in nature. In addition, I will discuss some recent work on objectively modelling the conspicuousness of camouflaged prey. Such models are frequently used to measure levels of camouflage or visual difference, but their efficacy has rarely been tested and compared against alternatives. I found that a novel method for quantifying edge disruption was the best predictor of capture times; this technique could in turn help us to finally form a definition of disruptive camouflage, demonstrating the need to combine neurophysiology with behavioural and modelling approaches to understand how camouflage works.
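As an illustration of what quantifying edge disruption can look like (an assumption in the spirit of the talk, not the speaker's published metric), one can compare edge energy along the target's true outline against edge energy inside its body; effective disruptive camouflage should weaken the former relative to the latter:

```python
# Hypothetical sketch: a simple edge-disruption score for a known target.
import numpy as np
from scipy import ndimage

def edge_disruption(img_grey, outline_mask, interior_mask):
    """img_grey: 2-D image; outline_mask: booleans on the target boundary;
    interior_mask: booleans inside the target body."""
    gmag = np.hypot(ndimage.sobel(img_grey, axis=0),
                    ndimage.sobel(img_grey, axis=1))
    # High interior edge energy with a weak outline suggests the pattern
    # creates false edges that disrupt the real boundary.
    return gmag[interior_mask].mean() / (gmag[outline_mask].mean() + 1e-9)
```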
Biography: I have a background in behavioural ecology and sensory ecology, asking questions such as how an animal's cognition and appearance to other animals affect how it interacts with its environment, and how this in turn affects its evolution (see my publications). I'm currently a postdoctoral research associate, working on a BBSRC grant to Martin Stevens and John Skelhorn, investigating how easily predators can learn and switch between different camouflage strategies. I'm testing this using a series of touch-screen experiments on humans and laboratory chickens, training them to find computer-generated “moths” that are concealed with different types of camouflage. For more information visit: http://www.jolyon.co.uk/myresearch/
- BVI Seminar: Why do animals look and behave the way they do? - 11.05.2016
Karin Kjernsmo, CamoLab, School of Biological Sciences – University of Bristol
Abstract: The ongoing evolutionary arms race between predators that try to outsmart their prey, and prey that in turn try to avoid being eaten, has resulted in an impressive variety of morphological and behavioural adaptations. The numerous ways animals can use their colour patterns to avoid being eaten provided Wallace and Darwin with important examples for illustrating and defending their ideas of natural selection and adaptation. Eyespots are a prominent example of such colour patterns. Consisting of roughly concentric rings of contrasting colours, they have received their name because to humans they often resemble the vertebrate eye. Eyespots are common in many terrestrial taxa such as insects (particularly in the order Lepidoptera), birds and reptiles, and they are also widespread in many aquatic taxa such as molluscs, flatworms and fishes. Because of their salience and taxonomically wide occurrence, eyespots have intrigued naturalists and biologists for more than a century, but disentangling their adaptive and functional significance has proven to be quite a challenge. In this talk, I will present the results of our recent studies investigating the anti-predator utility of eyespots. The adaptive value and function of eyespots have received much attention and have been studied particularly in terrestrial systems, but far less is known about them in aquatic systems. This is despite the fact that all types of protective coloration found in terrestrial environments also exist in the aquatic environment and probably initially evolved there. In order to study the functions and adaptive value of eyespots in aquatic environments, I have developed an “aquatic novel world”, in which I use naïve, visually hunting fish as predators, and artificial prey. As you will see, eyespots provide numerous ways for prey to manipulate predator behaviour and thereby decrease predation risk.
Biography: My education began in Sweden, where I obtained BSc and MSc degrees in Zoology at the University of Gothenburg. I then moved to Turku, Finland, where I completed my PhD project entitled “Anti-predator Adaptations in Aquatic Environments” (supervised by Dr. Sami Merilaita, Åbo Akademi Univ. and Prof. Jörgen I. Johnson, Univ. of Gothenburg) at Åbo Akademi University. In late July 2015, I moved from Turku to Bristol and started working in the Camo Lab as a Research Associate, on a 3-year BBSRC-funded project investigating whether iridescence can be deceptive (led by Prof. Innes Cuthill, Dr. Nick Scott-Samuel & Dr. Heather Whitney).
- Cuttlefish vision in a 3-D world - 22.04.2016
Prof. Daniel Osorio, School of Life Sciences, University of Sussex
Abstract: Cuttlefish are molluscs that have evolved excellent vision independently of vertebrates and can change their appearance to conceal themselves from predatory fish. The flexibility and subtlety of this adaptive camouflage gives remarkable insight into vision in a non-human species, and I will talk today about how cuttlefish respond to patterns of light and shadow to select their coloration pattern.
Biography: Daniel Osorio is at the University of Sussex, where he has studied colour vision and camouflage in a wide range of animals, but never humans.