Object Modelling From Sparse And Misaligned 3D and 4D Data

Object modelling from 3D and 4D sparse and misaligned data has important applications in medical imaging, where visualising and characterising the shape of, e.g., an organ or tumour is often needed to establish a diagnosis or to plan surgery. Two common issues in medical imaging are the presence of large gaps between the 2D image slices which make up a dataset, and misalignments between these slices due to patient movement between their acquisitions. These gaps and misalignments make the automatic analysis of the data particularly challenging. In particular, they require interpolation and registration in order to recover a complete shape of the object. This work focuses on the integrated registration, segmentation and interpolation of such sparse and misaligned data. We developed a framework which is flexible enough to model objects of various shapes, from data having arbitrary spatial configurations and from a variety of imaging modalities (e.g. CT, MRI).

ISISD: Integrated Segmentation and Interpolation of Sparse Data

We present a new, general purpose, level set framework which can handle sparse data by simultaneously segmenting the data and automatically interpolating across its gaps. In this framework, the level set implicit function is interpolated by Radial Basis Functions (RBFs), and its interface can propagate in a sparse volume, using image information where available and RBF-based interpolation of its speeds in the gaps. Any segmentation criterion may be used, allowing the framework to process any imaging modality. Different modalities can be handled simultaneously, because the method interpolates the level set contour rather than the image intensities. This level set framework is also more robust to image noise, and can segment sparse volumes by inferring the shape of the objects in the gaps.
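As a toy illustration of the RBF mechanism (not the authors' actual 3D/4D implementation), the sketch below interpolates sparse 1D samples of a signed, level-set-like function across a gap. All function names, the sample points, and the Gaussian kernel width are illustrative assumptions.

```python
import math

def gaussian_rbf(r, eps=1.0):
    """Gaussian radial basis function of the distance r."""
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (illustrative only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf(points, values, eps=1.0):
    """Fit RBF weights so the interpolant matches the samples exactly."""
    A = [[gaussian_rbf(abs(p - q), eps) for q in points] for p in points]
    return solve(A, values)

def evaluate(x, points, weights, eps=1.0):
    """Evaluate the RBF interpolant at x."""
    return sum(w * gaussian_rbf(abs(x - p), eps) for p, w in zip(points, weights))

# Sparse 1D samples of a signed function, with a gap over (1, 4);
# the zero crossings play the role of the level set interface.
points = [0.0, 0.5, 1.0, 4.0, 4.5, 5.0]
values = [-1.0, -0.5, 0.0, 0.0, 0.5, 1.0]
w = fit_rbf(points, values, eps=0.5)
# The interpolant reproduces the samples exactly and fills the gap smoothly.
```

In the actual framework the same idea is applied to the level set implicit function over a 3D/4D domain, so the interface can be evaluated (and propagated) between the acquired slices.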


The method is described in:

  • Adeline Paiement, Majid Mirmehdi, Xianghua Xie, Mark Hamilton, Integrated Segmentation and Interpolation of Sparse Data. IEEE Transactions on Image Processing, Vol. 23, Issue 1, pp. 110-125, 2014.

IReSISD: Integrated Registration, Segmentation and Interpolation of Sparse Data

A new registration method, also based on level sets, has been developed and integrated into the RBF-interpolated level set framework. The new framework can thus correct misalignments in the data at the same time as it segments and interpolates them. Integrating the three processes of registration, segmentation and interpolation into the same framework allows them to benefit from each other. Notably, registration exploits the shape information provided by the segmentation stage, making it robust to local minima and to limited intersections between the images of a dataset.
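The interplay between segmentation and registration can be sketched with a toy example: a misaligned slice contour is shifted so that it sits on the zero level set of the current shape estimate. This is a hypothetical grid-search simplification, not the actual level-set registration; `circle_phi`, the contour sampling, and the candidate shifts are invented for illustration.

```python
import math

def circle_phi(x, y, r=2.0):
    """Implicit (level set) function of the current shape estimate: a circle."""
    return math.hypot(x, y) - r

def register_slice(contour_pts, phi, shifts):
    """Pick the in-plane shift that best places the slice contour on the
    zero level set of phi, using the sum of squared phi values as the cost."""
    def cost(dx, dy):
        return sum(phi(x + dx, y + dy) ** 2 for x, y in contour_pts)
    return min(((dx, dy) for dx in shifts for dy in shifts),
               key=lambda s: cost(*s))

# A slice contour sampled on the circle of radius 2, then misaligned by (0.6, -0.4)
true_shift = (0.6, -0.4)
contour = [(2 * math.cos(t) - true_shift[0], 2 * math.sin(t) - true_shift[1])
           for t in (i * 2 * math.pi / 12 for i in range(12))]

shifts = [i / 10 - 1.0 for i in range(21)]  # candidate offsets -1.0 .. 1.0
best = register_slice(contour, circle_phi, shifts)
# best recovers the misalignment, approximately (0.6, -0.4)
```

The point of the toy example is that the cost is driven entirely by the segmented shape, not by image intensities, which is what makes the registration usable across slices with little direct overlap.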


The method is described in:

  • Adeline Paiement, Majid Mirmehdi, Xianghua Xie, Mark Hamilton, Registration and Modeling from Spaced and Misaligned Image Volumes. Submitted to IEEE Transactions on Image Processing.

The tables in the article are reported in the graphs below:







Published Work

  1. Adeline Paiement, Majid Mirmehdi, Xianghua Xie, Mark Hamilton, Integrated Segmentation and Interpolation of Sparse Data. IEEE Transactions on Image Processing, Vol. 23, Issue 1, pp. 110-125, 2014.
  2. Adeline Paiement, Majid Mirmehdi, Xianghua Xie, Mark Hamilton, Simultaneous level set interpolation and segmentation of short- and long-axis MRI. Proceedings of Medical Image Understanding and Analysis (MIUA) 2010, pp. 267–272. July 2010. – PDF, 173 Kbytes.

Download Software

The latest version of the code for ISISD and IReSISD can be downloaded here (Version 1.3).


Exploratory analysis in phMRI

Working in collaboration with the Psychopharmacology Unit, we are investigating and developing exploratory, data-driven methods for the analysis of pharmacological MRI (phMRI). The data produced in these studies can be thought of as 3D movies of the brain, where we seek to discover both the temporal and spatial effects of a drug, especially in cases where the expected neural response is not well established.
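One common exploratory, data-driven decomposition of such 4D data is to extract a dominant shared time course, e.g. the leading principal component of the voxels-by-timepoints matrix. The sketch below is a minimal pure-Python illustration via power iteration, under invented data; it is not the specific method under development.

```python
def dominant_time_course(X, iters=200):
    """Power iteration for the leading right singular vector of the
    voxels x timepoints matrix X: the dominant shared time course."""
    T = len(X[0])
    v = [1.0] * T
    for _ in range(iters):
        # u = X v: project each voxel's time series onto the current estimate
        u = [sum(row[t] * v[t] for t in range(T)) for row in X]
        # v = X^T u, then normalise
        v = [sum(X[i][t] * u[i] for i in range(len(X))) for t in range(T)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    return v

# Synthetic "drug response": half the voxels follow a ramp, half are flat
ramp = [0.0, 1.0, 2.0, 3.0, 4.0]
X = [ramp] * 5 + [[0.0] * 5] * 5
v = dominant_time_course(X)
# v is proportional to the ramp: the shared temporal effect is recovered,
# and the voxel loadings (u) would give its spatial extent
```

This separation into a temporal component and its spatial loadings is what lets an exploratory method surface a drug effect without a pre-specified response model.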



Marie Skłodowska-Curie Actions : PROVISION

Creating a ‘Visually’ Better Tomorrow

PROVISION is a network of leading academic and industrial organisations in Europe, comprising international researchers working on the problems facing today’s video coding technologies. The ultimate goal is to make noteworthy technical advances and further improve the existing state-of-the-art techniques for compressing video material.

The project aims not only to enhance broadcast and on-demand video material, but also to produce a new generation of scientists equipped with the research and soft skills needed by industry, academia and society at large. In line with the principles laid down by the Marie Skłodowska-Curie actions of the European Commission, PROVISION is a great example of an ensemble of researchers with varied geographical and academic backgrounds, all channelling their joint effort towards creating a technologically, or more specifically a ‘visually’, better tomorrow.

PROVISION website, PROVISION Facebook page

In-situ interactive model building

The current system allows online building of 3D wireframe models through a combination of user interaction and automated methods, using a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods, which are either off-line and/or automated but computationally intensive, the aim here is a system with low computational requirements that lets the user define what is relevant (and what is not) at the time the model is being built. The OutlinAR hardware has also been developed; it simply combines a camera with a wide field of view lens and a wheeled computer mouse.
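The idea of using the model under construction to compute camera pose can be sketched as a toy one-degree-of-freedom example: candidate camera translations are scored by the reprojection error of the wireframe’s vertices. This is an illustrative simplification under invented geometry, not the actual OutlinAR pose estimator.

```python
def project(point3d, pose_tx, f=1.0):
    """Pinhole projection after translating the camera along x (toy 1-DoF pose)."""
    x, y, z = point3d
    return (f * (x - pose_tx) / z, f * y / z)

def estimate_tx(model_pts, observed_2d, candidates):
    """Choose the camera translation that minimises the reprojection error
    of the current wireframe model: the model itself drives tracking."""
    def err(tx):
        total = 0.0
        for p, (u, v) in zip(model_pts, observed_2d):
            pu, pv = project(p, tx)
            total += (u - pu) ** 2 + (v - pv) ** 2
        return total
    return min(candidates, key=err)

# Three wireframe vertices and their observed projections for a true shift of 0.3
model = [(0.0, 0.0, 4.0), (1.0, 0.0, 4.0), (0.0, 1.0, 5.0)]
true_tx = 0.3
observed = [project(p, true_tx) for p in model]
candidates = [i / 10 for i in range(-10, 11)]  # candidate shifts -1.0 .. 1.0
# estimate_tx(model, observed, candidates) recovers the pose parameter 0.3
```

A real system would optimise a full 6-DoF pose with a proper solver rather than a grid search, but the same reprojection-error principle applies.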


Situation awareness through hand behaviour analysis

Activity recognition and event classification are of prime relevance to any intelligent system designed to assist on the move. There have been several systems aimed at capturing signals from a wearable computer in order to establish a relationship between what is being perceived now and what should be happening. Assisting people is indeed one of the main championed potentials of wearable sensing, and is therefore of significant research interest.

Our work currently focuses on higher-level activity recognition that processes very low resolution motion images (160×120 pixels) to classify user manipulation activity. For this work, we base our test environment on supervised learning of the user’s behaviour from video sequences. The system observes interaction between the user’s hands and various objects, in various locations of the environment, from a wide-angle shoulder-worn camera. The location and the object being interacted with are deduced indirectly, on the fly, from the manipulation motions. Using this low-level visual information, user activity is classified as one of a set of previously learned classes.
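As a hypothetical sketch of this kind of supervised classification, the example below assigns an activity label by nearest-neighbour matching of a motion-feature vector against learned examples. The feature dimensions and class names are invented for illustration; the real system learns from actual video sequences.

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(feature, training):
    """Nearest-neighbour lookup over labelled motion-feature vectors."""
    return min(training, key=lambda ex: euclidean(feature, ex[1]))[0]

# Invented motion features: (mean flow magnitude, dominant direction, duration in s)
training = [
    ("pouring",  [0.80, 0.10, 2.0]),
    ("stirring", [0.50, 0.90, 5.0]),
    ("idle",     [0.05, 0.00, 0.0]),
]
print(classify([0.55, 0.85, 4.5], training))  # prints "stirring"
```

The interesting property mirrored here is that the label is inferred purely from motion features, which is how location and object identity can be deduced indirectly rather than detected explicitly.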