3D Semantic Scene Completion

As a voxel-wise labeling task, semantic scene completion (SSC) aims to simultaneously infer the occupancy and semantic labels of a scene from a single depth and/or RGB image. The key challenge of SSC is how to effectively exploit 3D context to model objects and stuff exhibiting severe variations in shape, layout and visibility. To handle such variations, we propose a novel module called anisotropic convolution, which offers flexibility and modeling power unattainable by competing methods such as standard 3D convolution and its variants. In contrast to standard 3D convolution, which is limited to a fixed 3D receptive field, our module models dimensional anisotropy on a per-voxel basis. The basic idea is to enable an anisotropic 3D receptive field by decomposing a 3D convolution into three consecutive 1D convolutions, with the kernel size of each 1D convolution determined adaptively on the fly. By stacking multiple such anisotropic convolution modules, the voxel-wise modeling capability can be further enhanced while keeping the number of model parameters under control. Extensive experiments on two SSC benchmarks, NYU-Depth-v2 and NYUCAD, show the superior performance of the proposed method.
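The decomposition described above can be sketched in NumPy. This is an illustration of the general idea, not the paper's implementation: a dense 3D convolution is replaced by three consecutive 1D convolutions along the three axes, and the kernel sizes may differ per axis (here they are fixed for simplicity; in the paper they are selected adaptively per voxel). The function names and the averaging kernels are hypothetical.

```python
import numpy as np

def conv1d_along_axis(vol, kernel, axis):
    """Convolve a 3D volume with a 1D kernel along one axis ('same' padding)."""
    k = len(kernel)
    pad_width = [(0, 0)] * 3
    pad_width[axis] = (k // 2, k // 2)
    padded = np.pad(vol, pad_width, mode="constant")
    out = np.zeros_like(vol, dtype=float)
    for i, w in enumerate(kernel):
        sl = [slice(None)] * 3
        sl[axis] = slice(i, i + vol.shape[axis])
        out += w * padded[tuple(sl)]
    return out

def anisotropic_conv(vol, kx, ky, kz):
    """Three consecutive 1D convolutions, one per axis; differing kernel
    sizes are what make the combined receptive field anisotropic."""
    vol = conv1d_along_axis(vol, np.ones(kx) / kx, axis=0)
    vol = conv1d_along_axis(vol, np.ones(ky) / ky, axis=1)
    vol = conv1d_along_axis(vol, np.ones(kz) / kz, axis=2)
    return vol

# A 3x5x7 receptive field costs 3+5+7 weights per channel instead of 3*5*7.
vol = np.random.rand(16, 16, 16)
out = anisotropic_conv(vol, 3, 5, 7)
print(out.shape)  # (16, 16, 16)
```

Besides the anisotropy itself, the decomposition keeps the parameter count additive in the kernel sizes rather than multiplicative, which is why stacking many such modules remains affordable.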

Publication

Jie Li, Kai Han, Peng Wang, Yu Liu, Xia Yuan
Anisotropic Convolutional Networks for 3D Semantic Scene Completion
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [PDF] [CODE]

3D Reconstruction of Transparent Objects

This project addresses the problem of reconstructing the surface shape of transparent objects. The difficulty of this problem originates from the viewpoint-dependent appearance of a transparent object, which quickly causes reconstruction methods tailored for diffuse surfaces to fail. In this project, we introduce a fixed-viewpoint approach to dense surface reconstruction of transparent objects based on refraction of light. We present a simple setup that allows us to alter the incident light paths before light rays enter the object, by partially immersing the object in a liquid, and develop a method for recovering the object surface by reconstructing and triangulating such incident light paths. Our approach does not need to model the complex interactions of light as it travels through the object, nor does it assume any parametric form for the object shape or knowledge of the exact number of refractions and reflections taking place along the light paths. It can therefore handle transparent objects with relatively complex shape and structure, and with unknown and inhomogeneous refractive index. We also show that for thin transparent objects, the acquisition setup can be further simplified by adopting a single-refraction approximation. Experimental results on both synthetic and real data demonstrate the feasibility and accuracy of our approach.
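Triangulating a surface point from two reconstructed incident light paths amounts to intersecting two 3D rays; since noisy rays rarely intersect exactly, a standard choice is the midpoint of the shortest segment between them. The sketch below shows this generic midpoint triangulation (an illustration of the triangulation step, not the paper's exact formulation):

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D rays, each given
    by an origin o and a direction d. Fails for (near-)parallel rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimise |(o1 + t1*d1) - (o2 + t2*d2)|^2 over t1, t2 (normal equations).
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1 = o1 + t1 * d1  # closest point on ray 1
    p2 = o2 + t2 * d2  # closest point on ray 2
    return 0.5 * (p1 + p2)

# Two rays that intersect exactly at (1, 1, 0):
o1, d1 = np.array([0., 0., 0.]), np.array([1., 1., 0.])
o2, d2 = np.array([2., 0., 0.]), np.array([-1., 1., 0.])
p = triangulate_rays(o1, d1, o2, d2)
print(p)  # ≈ [1. 1. 0.]
```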

Publications

[1] Kai Han, Kwan-Yee K. Wong, Miaomiao Liu
Dense Reconstruction of Transparent Objects by Altering Incident Light Paths Through Refraction
International Journal of Computer Vision (IJCV), 2018. [DOI]

[2] Kai Han, Kwan-Yee K. Wong, Miaomiao Liu
A Fixed Viewpoint Approach for Dense Reconstruction of Transparent Objects
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. [PDF]

Learning Transparent Object Matting

This project addresses the problem of transparent object matting. Existing image matting approaches for transparent objects often require tedious capturing procedures and long processing time, which limit their practical use. In this project, we first formulate transparent object matting as a refractive flow estimation problem. We then propose a deep learning framework, called TOM-Net, for learning the refractive flow. Our framework comprises two parts, namely a multi-scale encoder-decoder network for producing a coarse prediction, and a residual network for refinement. At test time, TOM-Net takes a single image as input, and outputs a matte (consisting of an object mask, an attenuation mask and a refractive flow field) in a fast feed-forward pass. As no off-the-shelf dataset is available for transparent object matting, we create a large-scale synthetic dataset consisting of 158K images of transparent objects rendered in front of images sampled from the Microsoft COCO dataset. We also collect a real dataset consisting of 876 samples using 14 transparent objects and 60 background images. Promising experimental results have been achieved on both synthetic and real data, which clearly demonstrate the effectiveness of our approach.
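A matte of the kind described above (object mask, attenuation mask, refractive flow field) can be composited over a new background with a simple rule: outside the mask the background is copied, while inside the mask each pixel samples a flow-displaced, attenuated background pixel. The sketch below illustrates this general compositing idea with hypothetical array shapes and nearest-neighbour sampling; it is not TOM-Net's rendering code.

```python
import numpy as np

def composite(background, mask, attenuation, flow):
    """Composite a transparent-object matte over a grayscale background.
    background, mask, attenuation: (H, W); flow: (H, W, 2) displacements
    in pixels, ordered (dy, dx)."""
    H, W = background.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Where the refracted ray samples the background (clamped to the image).
    src_y = np.clip((ys + flow[..., 0]).round().astype(int), 0, H - 1)
    src_x = np.clip((xs + flow[..., 1]).round().astype(int), 0, W - 1)
    refracted = attenuation * background[src_y, src_x]
    return (1 - mask) * background + mask * refracted

# Toy example: a square object with 80% transmission and zero flow.
bg = np.random.rand(8, 8)
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
atten = np.full((8, 8), 0.8)
flow = np.zeros((8, 8, 2))
out = composite(bg, mask, atten, flow)
```

The same rule, run in reverse, is what makes refractive flow a useful training target: given an input image and its background, the network only has to explain where each foreground pixel came from and how much it was dimmed.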

Publications

[1] Guanying Chen*, Kai Han*, Kwan-Yee K. Wong
Learning Transparent Object Matting
International Journal of Computer Vision (IJCV), 2019. (* indicates equal contribution.)

[2] Guanying Chen*, Kai Han*, Kwan-Yee K. Wong
TOM-Net: Learning Transparent Object Matting from a Single Image
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. (* indicates equal contribution.)
Spotlight presentation

Further information

Papers and code are available here.

Mirror Surface Reconstruction

This project addresses the problem of mirror surface reconstruction, and proposes a solution based on observing the reflections of a moving reference plane on the mirror surface. Unlike previous approaches that require tedious calibration, our method can recover the camera intrinsics, the poses of the reference plane, and the mirror surface from the observed reflections of the reference plane under at least three unknown distinct poses. We first show that the 3D poses of the reference plane can be estimated from the reflection correspondences established between the images and the reference plane. We then construct a set of 3D lines from the reflection correspondences, and derive an analytical solution to recover the line projection matrix. We transform the line projection matrix into its equivalent camera projection matrix, and propose a cross-ratio based formulation to optimize the camera projection matrix by minimizing reprojection errors. The mirror surface is then reconstructed based on the optimized cross-ratio constraint. Experimental results on both synthetic and real data are presented, which demonstrate the feasibility and accuracy of our method.
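The cross-ratio based formulation rests on a classical fact: the cross-ratio of four collinear points is invariant under projective transformations, and hence preserved by camera projection. A minimal sketch of this invariance (illustration only; the homography below is an arbitrary example, not data from the paper):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points, computed from
    inter-point distances (valid when the points keep their order)."""
    dist = lambda p, q: np.linalg.norm(q - p)
    return (dist(a, c) * dist(b, d)) / (dist(b, c) * dist(a, d))

def apply_homography(H, p):
    """Map a 2D point through a 3x3 homography (homogeneous coordinates)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Four collinear points and an arbitrary example homography:
pts = [np.array([0., 0.]), np.array([1., 1.]),
       np.array([2., 2.]), np.array([4., 4.])]
H = np.array([[1.0, 0.2, 0.1],
              [0.0, 1.0, 0.0],
              [0.1, 0.0, 1.0]])
mapped = [apply_homography(H, p) for p in pts]
print(cross_ratio(*pts), cross_ratio(*mapped))  # both ≈ 1.5
```

Because the quantity survives projection, constraints written in terms of cross-ratios can be enforced directly on image measurements when optimizing the camera projection matrix.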

Publications

[1] Kai Han, Miaomiao Liu, Dirk Schnieders, Kwan-Yee K. Wong
Fixed Viewpoint Mirror Surface Reconstruction under an Uncalibrated Camera
IEEE Transactions on Image Processing (TIP), 2021.

[2] Kai Han, Kwan-Yee K. Wong, Dirk Schnieders, Miaomiao Liu
Mirror Surface Reconstruction Under an Uncalibrated Camera
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Further information

Papers and code are available here.