Deep Photometric Stereo

This project addresses the problem of photometric stereo for non-Lambertian surfaces. Existing approaches often adopt simplified reflectance models to make the problem tractable, but this greatly hinders their application to real-world objects. In this project, we propose a deep fully convolutional network, called PS-FCN, that takes as input an arbitrary number of images of a static object captured under different light directions with a fixed camera, and predicts a normal map of the object in a fast feed-forward pass. Unlike a recently proposed learning-based method, PS-FCN does not require a pre-defined set of light directions during training and testing, and can handle multiple images and light directions in an order-agnostic manner. Although we train PS-FCN on synthetic data, it generalizes well to real datasets. We further show that PS-FCN can be easily extended to handle the problem of uncalibrated photometric stereo. Extensive experiments on public real datasets show that PS-FCN outperforms existing approaches in calibrated photometric stereo and achieves promising results in the uncalibrated scenario, clearly demonstrating its effectiveness.
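The order-agnostic behaviour comes from aggregating per-image features with a symmetric operation; PS-FCN uses an element-wise max-pooling fusion layer over features from a shared-weight extractor. A minimal pure-Python sketch (`extract_features` is a hypothetical stand-in for that extractor):

```python
def extract_features(image):
    """Stand-in for the shared-weight feature extractor (hypothetical:
    here it simply returns the image as its own 'feature map')."""
    return image

def fuse(feature_maps):
    """Element-wise max over an arbitrary number of feature maps.
    Max is symmetric, so the result is independent of both the input
    order and the number of images captured."""
    return [max(vals) for vals in zip(*feature_maps)]

# Three toy "images" (flattened feature vectors) under different lights.
imgs = [[0.1, 0.9, 0.3], [0.5, 0.2, 0.8], [0.4, 0.6, 0.1]]
feats = [extract_features(im) for im in imgs]
fused = fuse(feats)                       # element-wise max: [0.5, 0.9, 0.8]
assert fused == fuse(list(reversed(feats)))   # order-agnostic
```

The fused map would then be fed to a normal-regression sub-network; any symmetric pooling (mean, max) preserves the order-agnostic property, and max pooling additionally tends to pick out the most informative observation per feature.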


[1] Guanying Chen, Kai Han, Boxin Shi, Yasuyuki Matsushita, Kwan-Yee K. Wong
Deep Photometric Stereo for Non-Lambertian Surfaces
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. [project page] [code]

[2] Guanying Chen, Kai Han, Boxin Shi, Yasuyuki Matsushita, Kwan-Yee K. Wong
Self-calibrating Deep Photometric Stereo Networks
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Oral presentation [project page] [code]

[3] Guanying Chen, Kai Han, Kwan-Yee K. Wong
PS-FCN: A Flexible Learning Framework for Photometric Stereo
European Conference on Computer Vision (ECCV), 2018. [project page] [code]

Discovering and Learning Novel Visual Categories

We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes. This setting is similar to semi-supervised learning, but significantly harder because there are no labelled examples for the new classes. The challenge, then, is to leverage the information contained in the labelled images to learn a general-purpose clustering model and use the latter to identify the new classes in the unlabelled data. In this work we address this problem by combining three ideas: (1) we suggest that the common approach of bootstrapping an image representation using only the labelled data introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data; (2) we use rank statistics to transfer the model's knowledge of the labelled classes to the problem of clustering the unlabelled images; and (3) we train the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of the data, improving both the supervised classification of the labelled data and the clustering of the unlabelled data. We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.
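The rank-statistics idea can be sketched as follows: two unlabelled images are treated as a positive pair for clustering when the sets of their top-k most activated feature dimensions agree. This is a simplified sketch of the criterion; the feature vectors and the choice of k are purely illustrative:

```python
def topk_index_set(feature, k=3):
    """Indices of the k largest-magnitude dimensions of a feature vector."""
    order = sorted(range(len(feature)), key=lambda i: -abs(feature[i]))
    return set(order[:k])

def same_class(f1, f2, k=3):
    """Rank-statistics pairing rule (sketch): declare two images a
    positive pair when their top-k dimension sets coincide. Comparing
    ranks rather than raw values makes the test robust to differences
    in feature magnitude between the two images."""
    return topk_index_set(f1, k) == topk_index_set(f2, k)

a = [0.9, 0.1, 0.8, 0.7, 0.0]
b = [0.7, 0.2, 0.9, 0.6, 0.1]   # same three dominant dimensions {0, 2, 3}
c = [0.1, 0.9, 0.0, 0.1, 0.8]   # dominated by different dimensions
assert same_class(a, b) and not same_class(a, c)
```

The resulting pseudo-labels over pairs then supervise the clustering head for the unlabelled data, alongside the standard classification loss on the labelled data.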


[1] Kai Han*, Sylvestre-Alvise Rebuffi*, Sebastien Ehrhardt*, Andrea Vedaldi, Andrew Zisserman
Automatically Discovering and Learning New Visual Categories with Ranking Statistics
International Conference on Learning Representations (ICLR), 2020. (* indicates equal contribution.) [project page] [code]

[2] Kai Han, Andrea Vedaldi, Andrew Zisserman
Learning to Discover Novel Visual Categories via Deep Transfer Clustering
International Conference on Computer Vision (ICCV), 2019. [project page] [code]

Learning Dense Visual Correspondences

In this project, we tackle the task of establishing dense visual correspondences between images containing objects of the same category. This is a challenging task due to large intra-class variations and a lack of dense pixel-level annotations. We propose a convolutional neural network architecture, called adaptive neighbourhood consensus network (ANC-Net), that can be trained end-to-end with sparse key-point annotations to handle this challenge. At the core of ANC-Net is our proposed non-isotropic 4D convolution kernel, which forms the building block of the adaptive neighbourhood consensus module for robust matching. We also introduce a simple and efficient multi-scale self-similarity module in ANC-Net to make the learned features robust to intra-class variations. Furthermore, we propose a novel orthogonal loss that enforces the one-to-one matching constraint. We thoroughly evaluate the effectiveness of our method on various benchmarks, where it substantially outperforms state-of-the-art methods.
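Neighbourhood consensus operates on a 4D correlation tensor that records the similarity between every feature location in one image and every location in the other; the (non-isotropic) 4D kernels then filter this tensor to favour spatially consistent matches. A minimal sketch of constructing that tensor, with toy feature maps and plain dot-product similarity:

```python
def correlation_4d(feat_a, feat_b):
    """Build the 4D correlation tensor c[i][j][k][l] = <f_A(i,j), f_B(k,l)>,
    the input on which neighbourhood-consensus filtering operates.
    feat_a, feat_b: H x W grids of feature vectors (nested lists)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return [[[[dot(fa, fb) for fb in row_b] for row_b in feat_b]
             for fa in row_a] for row_a in feat_a]

# Toy 2x2 feature maps with 2-channel features.
A = [[[1, 0], [0, 1]],
     [[1, 1], [0, 0]]]
B = [[[1, 0], [2, 0]],
     [[0, 1], [1, 1]]]
c = correlation_4d(A, B)
assert c[0][0][0][1] == 2   # <A(0,0)=[1,0], B(0,1)=[2,0]> = 2
```

In practice the features are L2-normalised CNN activations and the tensor is filtered by learned 4D convolutions; the sketch only shows the data structure those kernels act on.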


[1] Kai Han, Rafael S. Rezende, Bumsub Ham, Kwan-Yee K. Wong, Minsu Cho, Cordelia Schmid, Jean Ponce
SCNet: Learning Semantic Correspondence
International Conference on Computer Vision (ICCV), 2017. [project page] [code]

[2] Shuda Li*, Kai Han*, Theo W. Costain, Henry Howard-Jenkins, Victor Prisacariu
Correspondence Networks with Adaptive Neighbourhood Consensus
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. (* indicates equal contribution.) [project page] [code]

[3] Xinghui Li, Kai Han, Shuda Li, Victor Prisacariu
Dual-Resolution Correspondence Networks
Conference on Neural Information Processing Systems (NeurIPS), 2020. [project page] [code]

3D Semantic Scene Completion

As a voxel-wise labelling task, semantic scene completion (SSC) aims to simultaneously infer the occupancy and semantic labels of a scene from a single depth and/or RGB image. The key challenge for SSC is how to effectively exploit the 3D context to model objects and stuff with severe variations in shape, layout and visibility. To handle such variations, we propose a novel module, called anisotropic convolution, which offers a flexibility and modelling power unattainable by competing methods such as standard 3D convolution and some of its variants. In contrast to standard 3D convolution, which is limited to a fixed 3D receptive field, our module can model dimensional anisotropy on a per-voxel basis. The basic idea is to enable an anisotropic 3D receptive field by decomposing a 3D convolution into three consecutive 1D convolutions, with the kernel size of each 1D convolution determined adaptively on the fly. By stacking multiple such anisotropic convolution modules, the voxel-wise modelling capability can be further enhanced while keeping the number of model parameters under control. Extensive experiments on two SSC benchmarks, NYU-Depth-v2 and NYUCAD, show the superior performance of the proposed method.
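The adaptive kernel-size selection can be sketched in 1D: each candidate kernel size is applied, and the outputs are blended with normalised selection weights (learned in the paper, fixed here for illustration). A full anisotropic 3D convolution would apply such an adaptive 1D convolution along each of the three axes in turn:

```python
def conv1d(signal, kernel):
    """'Same'-padded 1D convolution with zero padding."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                s += w * signal[idx]
        out.append(s)
    return out

def adaptive_conv1d(signal, kernels, weights):
    """Soft selection among candidate kernel sizes: blend the outputs of
    the candidate 1D convolutions with normalised weights, approximating
    a position-adaptive receptive field (weights fixed here; they are
    predicted per voxel in the paper)."""
    outs = [conv1d(signal, k) for k in kernels]
    return [sum(w * o[i] for w, o in zip(weights, outs))
            for i in range(len(signal))]

sig = [0, 1, 0, 0]
kernels = [[1.0], [0.25, 0.5, 0.25]]      # candidate sizes 1 and 3
out = adaptive_conv1d(sig, kernels, [0.5, 0.5])   # [0.125, 0.75, 0.125, 0.0]
```

Decomposing the 3D kernel into three 1D passes also cuts the parameter count from k³ to roughly 3k per candidate size, which is what keeps the stacked modules lightweight.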


[1] Jie Li, Kai Han, Peng Wang, Yu Liu, Xia Yuan
Anisotropic Convolutional Networks for 3D Semantic Scene Completion
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [PDF] [code]

3D Reconstruction of Transparent Objects

This project addresses the problem of reconstructing the surface shape of transparent objects. The difficulty of this problem originates from the viewpoint-dependent appearance of a transparent object, which quickly makes reconstruction methods tailored for diffuse surfaces fail. In this project, we introduce a fixed viewpoint approach to dense surface reconstruction of transparent objects based on refraction of light. We present a simple setup that allows us to alter the incident light paths before light rays enter the object, by immersing the object partially in a liquid, and develop a method for recovering the object surface by reconstructing and triangulating such incident light paths. Our proposed approach does not need to model the complex interactions of light as it travels through the object, nor does it assume any parametric form for the object shape or the exact number of refractions and reflections taking place along the light paths. It can therefore handle transparent objects with a relatively complex shape and structure, and with an unknown and inhomogeneous refractive index. We also show that for thin transparent objects, our proposed acquisition setup can be further simplified by adopting a single-refraction approximation. Experimental results on both synthetic and real data demonstrate the feasibility and accuracy of our proposed approach.
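Reconstructing and triangulating the incident light paths rests on refraction geometry; the core primitive is Snell's law in vector form (standard optics, not code from the papers):

```python
import math

def refract(d, n, n1, n2):
    """Snell's-law refraction in vector form: returns the unit refracted
    direction, or None on total internal reflection. d and n are unit
    vectors; n points against the incident ray (d . n < 0)."""
    eta = n1 / n2
    cos_i = -sum(a * b for a, b in zip(d, n))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * a + (eta * cos_i - cos_t) * b for a, b in zip(d, n))

# A ray entering water (n = 1.33) from air at 30 degrees bends towards
# the normal, as expected.
d_in = (math.sin(math.radians(30)), -math.cos(math.radians(30)))
d_out = refract(d_in, (0.0, 1.0), 1.0, 1.33)
```

Immersing the object in a liquid of known refractive index makes the ray bending at the liquid interface predictable with exactly this relation, which is what allows the incident light paths to be recovered without modelling light transport inside the object.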


[1] Kai Han, Kwan-Yee K. Wong, Miaomiao Liu
Dense Reconstruction of Transparent Objects by Altering Incident Light Paths Through Refraction
International Journal of Computer Vision (IJCV), 2018. [DOI]

[2] Kai Han, Kwan-Yee K. Wong, Miaomiao Liu
A Fixed Viewpoint Approach for Dense Reconstruction of Transparent Objects
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. [PDF]

Learning Transparent Object Matting

This project addresses the problem of transparent object matting. Existing image matting approaches for transparent objects often require tedious capturing procedures and long processing time, which limit their practical use. In this project, we first formulate transparent object matting as a refractive flow estimation problem. We then propose a deep learning framework, called TOM-Net, for learning the refractive flow. Our framework comprises two parts, namely a multi-scale encoder-decoder network for producing a coarse prediction, and a residual network for refinement. At test time, TOM-Net takes a single image as input, and outputs a matte (consisting of an object mask, an attenuation mask and a refractive flow field) in a fast feed-forward pass. As no off-the-shelf dataset is available for transparent object matting, we create a large-scale synthetic dataset consisting of 158K images of transparent objects rendered in front of images sampled from the Microsoft COCO dataset. We also collect a real dataset consisting of 876 samples using 14 transparent objects and 60 background images. Promising experimental results have been achieved on both synthetic and real data, which clearly demonstrate the effectiveness of our approach.
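The refractive-flow formulation composites each pixel from the background: outside the object mask the background shows through unchanged, while inside it the background is sampled at the flow-displaced location and scaled by the attenuation. A 1-D toy sketch of this image-formation model (the border clamping is an illustrative choice, not part of the formulation):

```python
def composite(background, mask, attenuation, flow):
    """Refractive-flow image formation (sketch): for each pixel p,
    output (1 - m(p)) * B(p) + m(p) * rho(p) * B(p + f(p)), where m is
    the object mask, rho the attenuation and f the refractive flow.
    All inputs are 1-D lists over pixel positions for simplicity."""
    out = []
    n = len(background)
    for p in range(n):
        q = min(max(p + flow[p], 0), n - 1)   # clamp warped coordinate
        obj = attenuation[p] * background[q]
        out.append((1 - mask[p]) * background[p] + mask[p] * obj)
    return out

bg   = [10, 20, 30, 40]
mask = [0, 1, 1, 0]           # object covers the middle two pixels
rho  = [1.0, 0.5, 1.0, 1.0]   # attenuation inside the object
flow = [0, 2, -1, 0]          # refractive displacement in pixels
img = composite(bg, mask, rho, flow)
```

TOM-Net inverts this model: given only the composite image, it predicts the mask, attenuation and flow, which can then re-render the object over any new background.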


[1] Guanying Chen*, Kai Han*, Kwan-Yee K. Wong
Learning Transparent Object Matting
International Journal of Computer Vision (IJCV), 2019. (* indicates equal contribution.)

[2] Guanying Chen*, Kai Han*, Kwan-Yee K. Wong
TOM-Net: Learning Transparent Object Matting from a Single Image
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. (* indicates equal contribution.)
Spotlight presentation

Further information

Papers and code are available here.