ICME2020 Grand Challenge: Encoding in the Dark

Source: El Fuente test sequence, Netflix

Sponsors

The awards will be sponsored by Facebook and Netflix.

Low-light scenes often come with acquisition noise, which not only disturbs viewers but also makes video compression challenging. Such videos are frequently encountered in cinema, whether as an artistic choice or as a consequence of the scene itself. Other examples include wildlife footage (e.g. Mobula rays at night in Blue Planet II), concerts and shows, surveillance camera footage, and more. Inspired by all of the above, we are organising a challenge on encoding low-light captured videos. The challenge aims to identify technology that improves the perceptual quality of compressed low-light videos beyond the current state-of-the-art performance of the most recent coding standards, such as HEVC, AV1, VP9 and VVC. It also offers a good opportunity for experts in both video coding and image enhancement to address this problem. A series of subjective tests will form part of the evaluation, and their results can feed into a study of the trade-off between artistic intent and viewer preference, for example in mystery films and dimly lit investigation scenes.

Participants will be requested to deliver bitstreams at pre-defined maximum target rates for a given set of sequences, a short report describing their contribution, and a software executable that runs the proposed methodology and reconstructs the decoded videos, all by the given timeline. Participants are also encouraged to submit a paper for publication in the proceedings, and the best performers should be prepared to present a summary of the underlying technology during the ICME session. The organisers will cross-validate the submissions and perform subjective tests to rank participant contributions.
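As a hedged illustration of what encoding at a maximum target rate involves, the sketch below drives ffmpeg/libx265 from Python to produce an HEVC bitstream with a hard rate cap. The input name, resolution, frame rate and the 1000 kb/s figure are placeholders; the challenge's actual sequences, rate points and anchor settings are defined by the organisers' software package.

```python
# Illustrative capped-rate HEVC encode of a raw YUV 4:2:0 sequence.
# Requires an ffmpeg build with libx265; all file names and the rate
# cap below are placeholders, not the challenge's anchor settings.
import subprocess

def encode_capped(src_yuv, out_bitstream, width, height, fps, max_kbps):
    """Encode src_yuv with the bitrate capped at max_kbps."""
    cmd = [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "yuv420p",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", src_yuv,
        "-c:v", "libx265",
        "-b:v", f"{max_kbps}k",          # average target rate
        "-maxrate", f"{max_kbps}k",      # ceiling on the instantaneous rate
        "-bufsize", f"{2 * max_kbps}k",  # VBV buffer, here twice the cap
        out_bitstream,
    ]
    subprocess.run(cmd, check=True)

# Hypothetical 1080p50 sequence encoded with a 1000 kb/s cap.
encode_capped("lowlight_scene.yuv", "lowlight_scene.hevc", 1920, 1080, 50, 1000)
```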

Important Dates:

    • Expression of interest to participate in the Challenge: 10/12/2019 (extended from 29/11/2019)
    • Availability of test sequences for participants: upon registration (originally 01/12/2019)
    • Availability of anchor bitstreams and software package: 15/12/2019
    • Submission of encoded material: 27/03/2020 (extended from 13/03/2020)
    • Submission of Grand Challenge Papers: 27/03/2020 (extended from 13/03/2020)

Host: The challenge will be organised by Dr. Nantheera Anantrasirichai, Dr. Paul Hill, Dr. Angeliki Katsenou, Ms Alexandra Malyugina, and Dr. Fan Zhang, Visual Information Lab, University of Bristol, UK.

Contact: alex.malyugina@bristol.ac.uk

Computer Assisted Analysis of Retinal OCT Imaging

Texture-preserving image enhancement for Optical Coherence Tomography

This project developed novel image enhancement algorithms for retinal optical coherence tomography (OCT). OCT images contain a large amount of speckle, which makes them grainy and of very low contrast. To make these images valuable for clinical interpretation, our method removes speckle while preserving the useful information contained in each retinal layer. It starts with multi-scale despeckling based on a dual-tree complex wavelet transform (DT-CWT). The OCT image is further enhanced through a smoothing process that uses a novel adaptive-weighted bilateral filter (AWBF), which offers the desirable property of preserving texture within the OCT image layers. The enhanced OCT image is then segmented to extract the inner retinal layers that contain useful information for eye research; our layer segmentation technique is also performed in the DT-CWT domain. Finally, we developed an OCT/fundus image registration algorithm, which is helpful when the two modalities are used together for diagnosis and for information fusion.
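A minimal sketch of the despeckle-then-smooth part of this pipeline is given below, assuming the `dtcwt` and OpenCV Python packages. Plain soft-thresholding of the highpass coefficients and OpenCV's standard bilateral filter stand in for the Cauchy-model shrinkage and the novel AWBF described above; the threshold scale, filter parameters and file names are illustrative.

```python
# Sketch: DT-CWT despeckling followed by bilateral smoothing of an OCT B-scan.
# Soft-thresholding and cv2.bilateralFilter are generic stand-ins for the
# paper's Cauchy-model shrinkage and adaptive-weighted bilateral filter.
import numpy as np
import cv2
import dtcwt

def despeckle_dtcwt(image, nlevels=4, k=1.5):
    """Suppress speckle by shrinking DT-CWT highpass magnitudes."""
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(image.astype(np.float64), nlevels=nlevels)
    for hp in pyramid.highpasses:            # one complex array per level
        mag = np.abs(hp)
        # Robust per-level noise estimate (median absolute deviation).
        sigma = np.median(mag) / 0.6745
        shrink = np.maximum(mag - k * sigma, 0.0) / np.maximum(mag, 1e-12)
        hp *= shrink                         # soft-threshold in place
    return transform.inverse(pyramid)

def enhance_oct(image):
    """Despeckle, then smooth while keeping layer texture."""
    despeckled = despeckle_dtcwt(image).astype(np.float32)
    # Standard bilateral filter; sigmaColor assumes a 0-255 intensity range.
    return cv2.bilateralFilter(despeckled, d=9, sigmaColor=25.0, sigmaSpace=9.0)

if __name__ == "__main__":
    bscan = cv2.imread("oct_bscan.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    enhanced = enhance_oct(bscan)
    cv2.imwrite("oct_enhanced.png", np.clip(enhanced, 0, 255).astype(np.uint8))
```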

The figure below shows B-scans of retinal OCT images at the ONH (top) and the macula (bottom). Left: raw OCT images with grainy texture. Middle: despeckled images using the Cauchy model. Right: enhanced images using the AWBF. [CODE] [PDF]

Texture analysis of ocular imaging for glaucoma disease regression

This project analysed texture in the OCT image layers for the retinal disease glaucoma. An automated texture classification method for glaucoma detection was developed, with classification and feature extraction based on robust principal component analysis (RPCA) of texture descriptors. A multi-modal information fusion technique was also developed, incorporating data from visual field measurements together with OCT and retinal fundus photography. [PDF]
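As a hedged illustration of the robust PCA step, the sketch below implements principal component pursuit via the standard inexact-ALM iteration, splitting a matrix of texture descriptors into a low-rank part plus sparse outliers. The random descriptor matrix, the sparsity weight and the stopping rule are generic textbook choices, not the settings used in the project.

```python
# Sketch: robust PCA (principal component pursuit) via inexact ALM,
# decomposing M ~= L + S with L low-rank and S sparse.
import numpy as np

def robust_pca(M, max_iter=500, tol=1e-7):
    """Return (L, S) such that M is approximately L + S."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))           # standard sparsity weight
    norm_M = np.linalg.norm(M, "fro")
    mu = m * n / (4.0 * np.abs(M).sum())     # common ALM step-size heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # Lagrange multipliers
    for _ in range(max_iter):
        # Singular-value thresholding -> low-rank update.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Elementwise soft-thresholding -> sparse update.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S, "fro") / norm_M < tol:
            break
    return L, S

# Example with stand-in data: rows are texture descriptors from OCT layers.
descriptors = np.random.randn(200, 64)
L, S = robust_pca(descriptors)
```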
