University of Bristol

VI-Lab PhD Research Opportunities

Funded Opportunities

VI-Lab currently has fully funded PhD opportunities in the following areas:

_________________________________________________________________________

Title: Development of super resolution and three-dimensional intra-coronary intra-vascular ultrasound imaging

Supervisors: Professor Alin Achim and Dr Kalpa De Silva (Bristol Heart Institute)

Funding: CSC Scholarships / self-funded

Deadline: Open until filled

Start date: Flexible

Brief description: Coronary artery disease (CAD) remains one of the most prevalent causes of mortality worldwide. Percutaneous coronary intervention (PCI) has become the most common revascularization modality for treating obstructive CAD, superseding surgical bypass grafting. Historically, PCI has been guided by angiographic imaging of the coronary vessel alone; the intrinsic limitations of this two-dimensional X-ray technique introduce errors in determining vessel size and lesion complexity within complex, dynamic, three-dimensional coronary structures. This has led to increased utilisation of intra-vascular ultrasound (IVUS) to guide and allow more precise stent implantation.

Routine use of IVUS-guided PCI has been shown to improve both procedural and long-term outcomes. However, clinically applicable IVUS imaging is currently limited to 20-60 Hz acquisition in a single, longitudinal plane only. This project aims to develop super-resolution intra-coronary IVUS imaging and three-dimensional reconstruction capabilities, addressing current limitations in both temporal and spatial resolution, with potential translation to future clinical practice. From an algorithmic development perspective, the project will develop tools that fall broadly under the computational imaging umbrella, with an emphasis on sparsity-driven methods.
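As a flavour of the sparsity-driven methods involved, below is a minimal sketch of ISTA (iterative shrinkage-thresholding), a standard solver for linear inverse problems of the kind that arise in super-resolution; the forward operator A, the regularisation weight and the toy data are illustrative placeholders, not specifics of this project.

    import numpy as np

    def ista(A, y, lam=0.1, step=None, n_iter=200):
        """Minimal ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.

        A plays the role of a forward operator (e.g. blur plus
        downsampling in a super-resolution model); x is assumed sparse.
        """
        if step is None:
            # Step size 1/L, where L is the Lipschitz constant of the gradient.
            step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)        # gradient of the data-fidelity term
            z = x - step * grad             # gradient descent step
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        return x

    # Toy usage: recover a sparse vector from noisy random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[[10, 50, 120]] = [1.0, -2.0, 1.5]
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = ista(A, y, lam=0.05)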

_________________________________________________________________________

Title: Machine learning and Computational Imaging for disease classification in intra-vascular ultrasound (IVUS) imagery

Supervisors: Professor Alin Achim and Dr Kalpa De Silva (Bristol Heart Institute)

Funding: CSC Scholarships / self-funded

Deadline: Open until filled

Start date: Flexible

Brief description: Coronary artery disease (CAD) remains one of the most prevalent causes of mortality worldwide. Percutaneous coronary intervention (PCI) has become the most common revascularization modality for treating obstructive CAD, superseding surgical bypass grafting. PCI is nowadays routinely performed using IVUS to achieve better stent implantation, and routine use of IVUS-guided PCI has been shown to improve both procedural and long-term outcomes. Stent implantation can, however, sometimes lead to the formation of blood clots, which is referred to as stent thrombosis.

This project will therefore develop machine learning and computational imaging methods for the detection of non-optimal stent placement, as well as for the detection of CAD in pre-clinical patients. Investigations into a number of inverse problems, possibly by means of non-convex optimization algorithms in conjunction with deep learning, will form a major part of this work.

_________________________________________________________________________

Title: Posing Image Fusion as an Inverse Problem: Application to Biomedical Image Computing

Supervisors: Professor Alin Achim and Dr Lindsay Nicholson (Bristol Medical School)

Funding: CSC Scholarships / self-funded

Deadline: Open until filled

Start date: Flexible

Brief description: Image fusion is the process of combining two or more images into a single image that retains the important features from each input. Several image fusion techniques exist, but only a few address the problem in a probabilistic framework. This project will develop a model-based approach to image fusion by posing the problem as an inverse one. The method should be implemented in a transform domain, either using a fixed dictionary or by designing one using convolutional sparse coding approaches, and should also take into account the true statistics of the source images.

Depending on the candidate’s interests, the application envisaged can be either in ophthalmology for the fusion of optical coherence tomography (OCT) and fluorescence imaging, or in urology for the classification of prostate disease by fusing ultrasound and magnetic resonance images.
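For illustration, here is a minimal sketch of fusion in a fixed transform domain (assuming the PyWavelets package and two registered, same-size greyscale sources): approximation bands are averaged and, at each detail position, the coefficient of larger magnitude is kept. This is a classic baseline rule, not the model-based, statistics-aware approach the project would develop.

    import numpy as np
    import pywt  # PyWavelets, assumed available

    def fuse_max_abs(img_a, img_b, wavelet="db2", level=3):
        """Baseline wavelet-domain fusion of two registered images."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [0.5 * (ca[0] + cb[0])]                 # average approximations
        for da, db in zip(ca[1:], cb[1:]):              # (cH, cV, cD) triples
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)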

_______________________________________________________________________

Title: Automatic B-line Quantification in Lung Ultrasound Images using Machine Learning

Supervisors: Professor Alin Achim and Professor Moin Saleem (Bristol Medical School)

Funding: CSC Scholarships / self-funded

Deadline: Open until filled

Start date: Flexible

Brief description: In this project we will consider the detection and identification of “B-lines” (bright lines that originate at the pleural line and extend to the edge of the sonographic window) in children receiving dialysis. Children with end-stage kidney disease treated with dialysis have significant cardiovascular complications, which are the leading cause of death in this group of children. Total body fluid overload is a major factor contributing to cardiovascular complications. Assessment of fluid overload remains challenging, and currently relies on the treating physician’s assessment and judgement, which is variable. Evidence from adult practice suggests that lung ultrasound can be used as a sensitive measure of fluid overload in adult dialysis patients. Specifically, the presence or absence of B-lines on thoracic sonography can be used as a marker of interstitial fluid.

In this project, advanced methods for simultaneously enhancing ultrasound image quality and for extracting linear features will be developed. A complete image processing and analysis workflow will be built which will take real ultrasound images of patients’ lungs, enhance their quality (de-speckling, deconvolution) to facilitate information extraction, detect and classify B-lines, and finally score them by automatically assessing their severity. From an algorithmic development perspective, this project will develop tools that fall generically under the computational imaging umbrella, solving a number of ill-posed inverse problems. Convolutional sparse coding and deep learning will also form an important part of this research.
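As a toy illustration of the detection stage, the sketch below despeckles a rectified B-mode image with a median filter and collapses each column to its mean brightness, so that a roughly vertical B-line becomes a peak in a one-dimensional profile; the thresholds and geometry are placeholder assumptions, and a real pipeline would handle the fan geometry of clinical scans.

    import numpy as np
    from scipy.ndimage import median_filter   # simple despeckling stand-in
    from scipy.signal import find_peaks

    def bline_candidates(img, prominence=None):
        """Rough B-line candidate detector for a rectangular B-mode image
        in which B-lines appear as bright, roughly vertical streaks."""
        smooth = median_filter(img.astype(float), size=5)  # suppress speckle
        profile = smooth.mean(axis=0)          # bright streak -> 1-D peak
        if prominence is None:
            prominence = profile.std()         # crude, data-driven threshold
        cols, _ = find_peaks(profile, prominence=prominence)
        return cols                            # candidate B-line columns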

_________________________________________________________________________

Title: Video understanding with sub-threshold processors for energy efficient IoT

Supervisors: Dr Jose Nunez-Yanez and Professor David Bull

Funding: CSC Scholarships / self-funded

Deadline: Open until filled

Start date: Flexible

Brief description: Near- and sub-threshold processors run from supply voltages as low as 0.25 V, significantly lower than the transistor’s threshold voltage. This gives them much better energy efficiency, and the technology could be a breakthrough in IoT applications such as smart homes, e-health and autonomous cars, enabling sophisticated processing using energy harvested from the environment.

Recent research has shown that a ‘near-threshold’ circuit operating at 0.5 V can reduce dynamic power consumption by up to 13 times, whilst a ‘sub-threshold’ circuit operating at 0.3 V can reduce dynamic power by up to 36 times. In this research we will investigate the suitability of state-of-the-art sub-threshold processors from companies such as Ambiq and Eta Compute. These processors use Arm instruction sets but differ in the selected instruction set architecture, the sub-threshold technology and the presence of signal processing accelerators.
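The quoted factors are consistent with the quadratic dependence of dynamic power on supply voltage (P_dyn = a*C*V^2*f at fixed activity, capacitance and frequency), if one assumes a nominal supply of about 1.8 V; that reference voltage is our assumption, not a figure from the text.

    # Dynamic power scales as V^2 at fixed capacitance, frequency and activity.
    v_nominal = 1.8                  # assumed nominal supply (not from the text)
    print((v_nominal / 0.5) ** 2)    # ~12.96 -> "up to 13 times" at 0.5 V
    print((v_nominal / 0.3) ** 2)    # 36.0   -> "up to 36 times" at 0.3 V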

The application area will be video and signal analysis based on recurrent and convolutional neural networks with fully binarized weights and activations. Binarized neural networks have been shown to be a good match for ultra-low-power embedded systems, and recent research has shown that they can perform within 1% of the accuracy of full floating-point precision networks by increasing the depth and the number of filters per layer. How to train, optimize and map these networks to a sub-threshold processor will be investigated, together with novel ways to predict the energy required for a computation before it takes place and to compute when the energy supply is insufficient to guarantee complete correctness.
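To make the binarized arithmetic concrete, here is a minimal sketch of the core operation: with weights and activations constrained to +1/-1, a multiply-accumulate collapses to XNOR plus popcount. The encoding below is generic and illustrative, not tied to any particular processor’s instruction set.

    import numpy as np

    def binarize(x):
        """Map real values to {+1, -1} via the sign function."""
        return np.where(x >= 0, 1, -1).astype(np.int8)

    def binary_dot(w, a):
        """Dot product of two {+1, -1} vectors via XNOR/popcount logic:
        matching entries contribute +1 and differing entries -1, so
        dot = n - 2 * popcount(xor)."""
        disagreements = np.count_nonzero((w > 0) ^ (a > 0))
        return w.size - 2 * disagreements

    rng = np.random.default_rng(1)
    w = binarize(rng.standard_normal(64))
    a = binarize(rng.standard_normal(64))
    assert binary_dot(w, a) == int(np.dot(w.astype(int), a.astype(int)))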

_________________________________________________________________________

Title: Optimised acquisition and coding for immersive formats based on visual scene analysis

Supervisor: Professor David Bull

Funding: EPSRC iCASE award with BBC Research and Development

Deadline: Open until filled

Start Date: October 2018 (latest)

Description: There is a hunger for new, more immersive video content (UHDTV, 360° video, etc.) from users, producers and network operators. Efforts in this respect have focused on extending the video parameter space with greater dynamic range, spatial resolution, temporal resolution, colour gamut, interactivity and display size/format. There is, however, very limited understanding of the interactions between these parameters and their relationship to content statistics, visual immersion and delivery methods. The way we represent these immersive video formats is thus key to ensuring that content is delivered at an appropriate quality, one which retains the intended immersive properties of the format while remaining compatible with the bandwidth and variable nature of the transmission channel. Major research innovations are needed to solve this problem.

The research challenges to be addressed are based on the hypothesis that, by exploiting the perceptual properties of the Human Visual System and its content-dependent performance, we can obtain step changes in visual engagement while also managing bit rate. We must therefore: i) understand the perceptual relationships between video parameters and content type; and ii) develop new visual content representations that adapt to content statistics and their immersive requirements. The solution will centre on exploiting machine learning methods to classify scene content and relate it to the extended video parameter space.

_________________________________________________________________________

Title: Understanding and Measuring Visual Immersion

Supervisors: Professor Iain Gilchrist and Professor David Bull

Funding: EPSRC iCASE award with BBC Research and Development

Deadline: Open until filled

Start Date: October 2018

Description: Immersion is a psychological phenomenon that plays a large part in determining our enjoyment when viewing mediated content. New media formats for cinema and broadcast, such as High Dynamic Range (HDR), High Frame Rate (HFR) video, wider colour gamuts, 360° content and VR, promise powerful new ways to deliver artistic performance effectively and excitingly. These technologies introduce new mediation processes between artistic performance and audience appreciation, so the central question is the effect this technology has on immersion. In this project we will, for the first time, enable the development and deployment of non-invasive measures of individual or collective immersion. This will not only help us to understand the immersive properties of the narrative but also provide a dynamic means of informing editorial decision making and assessing the incremental value of technology over narrative. We will investigate personalised immersion measures based on psychophysics and physiological measurements, and develop instrumentation for measuring collective immersion, using this to evaluate both conventional and new formats.

_________________________________________________________________________

Title: Automated Volumetrics for Stroke using Deep Learning

Supervisors: Professor Majid Mirmehdi and Dr Phil Clatworthy

Funding: Available for a successful applicant

Deadline: January 15th for Chinese applicants; January 22nd for all other candidates

Start Date: September 2018, but negotiable for a later start

Description: Stroke is a devastating illness in which clinical decisions are often time critical. Ischaemic stroke is the death of a part of the brain due to blockage of the artery supplying that area. There are many circumstances in which rapid and reliable measurement of the volume of an ischaemic stroke is likely to be extremely useful in making clinical decisions, such as determining the relative risk and benefit of urgent treatments like intravenous thrombolysis (“clot-busting”) and intra-arterial thrombectomy (mechanical clot removal). This project will involve developing deep learning methods with the objective of making key measurements on CT and MRI scans of the brain, allowing time-critical decisions to be made with more confidence.
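Once a lesion has been segmented, the volumetric measurement itself is straightforward; a minimal sketch, assuming a binary lesion mask (for instance, the output of a segmentation network) and the scan’s voxel spacing in millimetres:

    import numpy as np

    def lesion_volume_ml(mask, spacing_mm):
        """Volume of a segmented region from a binary 3-D mask.

        mask       : boolean array, True inside the segmented lesion.
        spacing_mm : (dz, dy, dx) voxel spacing in mm, from the scan header.
        """
        voxel_volume_mm3 = float(np.prod(spacing_mm))
        return np.count_nonzero(mask) * voxel_volume_mm3 / 1000.0  # 1 ml = 1000 mm^3

    # Toy example: a 10 x 10 x 10 block of 1 mm voxels is exactly 1 ml.
    mask = np.zeros((64, 64, 64), dtype=bool)
    mask[20:30, 20:30, 20:30] = True
    print(lesion_volume_ml(mask, (1.0, 1.0, 1.0)))  # 1.0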

_________________________________________________________________________

Title: PhD in Computer Vision

Supervisor: Professor Majid Mirmehdi

Funding: Various sources such as DTP/UOB/CSC, or self-funded

Deadline: January 15th for CSC applicants; January 22nd for all non-Chinese applicants

Start Date: October 2018

Description: I am open to supervising students who are interested in Computer Vision (including the application of Machine Learning techniques). I can propose various projects but am also happy to hear from candidates who wish to propose their own ideas. General areas of interest include Human Motion and Action Analysis, Scene Understanding, Healthcare Monitoring, Autonomous Vehicles, Vision for Robots, and Medical Image Analysis.

_________________________________________________________________________

Title: Low-latency machine learning with neural networks in multi-modal imaging

Supervisors: Dr Jose Nunez-Yanez and Professor David Bull

Funding: CSC Scholarships / self-funded

Deadline: Open until filled

Start date: Flexible

Description: This project aims to investigate a low-latency camera and hardware set-up able to capture high-resolution video, extract regions of interest, and perform data enhancement and fusion across different spectral bands. The outputs of this image pre-processing will then drive an FPGA-based deep convolutional neural network performing training and inference in real time. The FPGA (Field Programmable Gate Array) accelerator will use very low precision arithmetic and dataflow techniques to support rates in the range of thousands of frames per second. Image modalities considered will include visible light, infrared, ultrasound and others. Applications of this low-latency technology include medical diagnosis, autonomous navigation and haptic feedback, among others. Many of these applications require response times in the order of milliseconds, which is why high-performance hardware acceleration is a requirement.
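As a small illustration of what ‘very low precision arithmetic’ means in practice, the sketch below quantizes floating-point weights to signed fixed-point integers of a chosen bit width; the symmetric per-tensor scaling is a generic scheme, not the specific number format an FPGA design would adopt.

    import numpy as np

    def quantize_symmetric(w, bits=4):
        """Quantize real weights to signed integers of the given width,
        scaling so the largest magnitude maps to 2**(bits-1) - 1.
        Returns the integer codes and the scale needed to dequantize."""
        qmax = 2 ** (bits - 1) - 1
        scale = np.max(np.abs(w)) / qmax
        q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
        return q, scale

    w = np.array([0.31, -0.8, 0.05, 0.62])
    q, s = quantize_symmetric(w, bits=4)
    w_hat = q * s    # dequantized approximation of the original weights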

_________________________________________________________________________

Unfunded Research Areas of Interest

VI-Lab academic staff will accept PhD applications from self-funded students or scholarship applicants in the following areas. Please note that for overseas scholarships, all applications must be completed by December 31, 2017.

_________________________________________________________________________

Title: Mitigating the effects of atmospheric distortions on surveillance video

Supervisors: Professor David Bull and Professor Alin Achim

Brief description: The influence of atmospheric turbulence on acquired surveillance imagery makes image interpretation and scene analysis extremely difficult, and reduces the effectiveness of conventional approaches to detecting, classifying or tracking targets in the scene. This project will address the issue using supervised machine learning: the turbulent distortion, camera motion and target trajectories will be modelled using deep recurrent convolutional neural networks. These trained networks will improve image and video quality and will also support real-time applications.

_________________________________________________________________________

Title: Perceptual image and video denoising

Supervisors: Professor David Bull, Professor Alin Achim and Professor Iain Gilchrist

Brief Description: Noise is a primary limiting factor in imaging systems, influencing both perceived visual quality and task-related performance. It impacts applications from video streaming and autonomous locomotion to medical and scientific measurement. Whether caused by sensor limitations, environmental influences or transmission loss, noise mitigation is essential: a surveillance threat obscured by noise, or a missed medical anomaly, could be a matter of life and death. Understanding human perception of noise and our ability to ‘see through it’ is thus key to developing optimised denoising methods that will transform machine and human performance and enable more compelling and informative visual content.

_________________________________________________________________________

Title: Visual aids for the visually impaired

Supervisors: Professor David Bull, Professor Iain Gilchrist and Dr J. Burn

Brief description: This research will investigate how human perception and low-level features drive decisions on foot placement and path selection during traversal of complex terrain. The outcome of this analysis will be used to develop a novel framework for assisting visually impaired locomotion, encompassing both short-range tasks (footstep prediction and classification for safe mobility) and long-range tasks (scene understanding and detection for path planning and awareness). Inspired by the human vision system, the framework will work in real time with smart glasses and be applied directly to people with visual impairment to improve their well-being. The project will build on the supervisors’ previous results in feature analysis, eye tracking and terrain classification for robotics.

_________________________________________________________________________

Title: Partial reference visual quality metrics

Supervisor: Professor David Bull

Brief description: It is frequently important to judge the quality of a compressed video when no reference is available (for example, after transmission over a distorting channel). In such cases the quality must be assessed from the received content alone, possibly in conjunction with a small amount of side information. This project will investigate efficient and robust means of achieving this.
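To make the idea concrete, here is a minimal sketch of a reduced-reference scheme: the sender transmits a tiny per-block variance signature as side information, and the receiver compares it with the same signature computed from the decoded frame. Both the feature and the distance are illustrative placeholders, far simpler than the metrics this project would design.

    import numpy as np

    def block_signature(frame, block=16):
        """Tiny side-information feature: per-block pixel variance."""
        h, w = frame.shape
        h, w = h - h % block, w - w % block        # crop to whole blocks
        tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
        return tiles.var(axis=(1, 3))

    def rr_quality_score(sig_ref, frame_received, block=16):
        """Lower is better: mean absolute deviation between the sender's
        signature (sent over a side channel) and the receiver's."""
        sig_rec = block_signature(frame_received, block)
        return float(np.mean(np.abs(sig_ref - sig_rec)))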

_________________________________________________________________________

Title: Cognitive video compression

Supervisors: Professor David Bull and Dr Roland Baddeley

Brief description: There is currently significant activity linked to the development of new standards for the representation and coding of higher spatio-temporal resolutions, HDR content and 360-degree immersive formats. These emerging immersive formats (particularly 360° formats) pose a number of key challenges: ‘immersion-breaking’ artefacts due to acquisition, compression and display must be avoided, so there is a high premium on compression performance, particularly for 6DoF 360° formats where raw bit rates can be extreme and delivery demands compression ratios of many thousands to one. New coding and delivery techniques are therefore needed to deliver content at manageable bit rates while ensuring that the immersive properties of the format are preserved.

_________________________________________________________________________

Title: Cell imaging and analytics

Supervisors: Professor Alin Achim, Professor Paul Verkade and Professor David Bull

Brief description: This project will investigate how autofluorescence can be exploited in conjunction with reflected-contrast microscopic imaging in a bioreactor, with application to biopharmaceutical manufacture and regenerative medicine. It will address the image enhancement problems of opacity, blur, turbidity and geometric distortion, and will extract image features that correlate with cell physiological state. It will then investigate means of cell state classification.