Preventing Self-Intersection with Cycle Regularization in Neural Networks for Mesh Reconstruction from a Single RGB Image

Siyu Hu, Xuejin Chen*

University of Science and Technology of China

Computer Aided Geometric Design 2019


Figure 1: The cycle regularization implemented with two networks. (a) Implementation with AtlasNet Groueix et al. (2018): f is the forward 3D surface decoder in the original network, and g is our inverse decoder used to form the regularization term. (b) Implementation with Pixel2Mesh Wang et al. (2018): Pixel2Mesh adopts a coarse-to-fine framework and uses three G-ResNet blocks (f1, f2, f3) to map the mesh to the target shape at three different point densities. Graph-unpooling layers are used for mesh upsampling. We use three point-wise MLPs (g1, g2, g3) as inverse decoders, one for each level of point density, and form a regularization term for each level.



Self-intersection of surfaces is a typical defect that makes a 3D model unsuitable for many applications. Existing neural networks for 3D surface mesh reconstruction face the challenge of integrating self-intersection prevention. In this paper, we propose a trainable cycle regularization for mesh reconstruction networks that prevents self-intersection. It is a general technique that can be easily combined with existing surface mesh generation networks. Our experiments on two state-of-the-art mesh reconstruction networks demonstrate that, with the proposed cycle regularization, self-intersections in the generated meshes are significantly reduced, while shape similarity remains comparable to that of the original networks under the Chamfer distance metric.
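To make the idea concrete, the following is a minimal sketch of a cycle regularization term, assuming it penalizes the round-trip error g(f(u)) − u over sampled 2D parameter points, where f is the forward surface decoder and g the inverse decoder paired with it. The tiny NumPy MLPs, layer sizes, and squared-error form below are illustrative assumptions for this sketch, not the authors' exact architecture or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Build random weights for a small fully connected network (illustrative stand-in)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Apply the MLP with tanh activations on hidden layers."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

f = mlp([2, 64, 3])   # forward decoder: 2D parameter point -> 3D surface point
g = mlp([3, 64, 2])   # inverse decoder: 3D surface point -> 2D parameter point

u = rng.uniform(0.0, 1.0, size=(1024, 2))   # sampled 2D parameter points
points3d = forward(f, u)                    # decoded 3D surface points
u_cycle = forward(g, points3d)              # mapped back to parameter space

# Cycle regularization term: mean squared round-trip error in parameter space.
# Driving this toward zero encourages f to be injective on the sampled domain.
cycle_loss = np.mean(np.sum((u_cycle - u) ** 2, axis=1))
print(f"cycle loss: {cycle_loss:.4f}")
```

In training, this term would be added to the network's reconstruction loss (e.g. Chamfer distance) with a weighting factor, and g would be optimized jointly with f; for a coarse-to-fine network such as Pixel2Mesh, one such term would be formed per point-density level.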



Figure 2: Cycle regularization on AtlasNet. All the visualized cases here are selected from the test set of AtlasNet. We manually adjust the view direction for some meshes to better expose the differences. The red rectangles highlight a case where more details are preserved than with the original network, because injectivity is enforced by our cycle regularization.
Figure 3: Cycle regularization on Pixel2Mesh. These examples are selected from the test set of Pixel2Mesh. We adjust the view direction for some meshes to better expose the differences.
Figure 4: Visualization of the effect of our cycle regularization in the coarse-to-fine framework of Pixel2Mesh. The green rectangle shows a close-up view of a subtle self-intersection in the airplane model.

We would like to thank Dr. Xin Tong and Hao Su for their insightful comments and suggestions. This work was supported by the National Key Research and Development Plan of China under Grant No. 2016YFB1001402, the National Natural Science Foundation under Grant No. 61632006, and the Fundamental Research Funds for the Central Universities under Grant WK3490000003.

Main References:

[1] Wang, N., Zhang, Y., Li, Z., Fu, Y., Liu, W., Jiang, Y.G., 2018. Pixel2Mesh: generating 3D mesh models from single RGB images. In: European Conference on Computer Vision (ECCV).

[2] Groueix, T., Fisher, M., Kim, V.G., Russell, B.C., Aubry, M., 2018. A papier-mâché approach to learning 3D surface generation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

title = "Preventing self-intersection with cycle regularization in neural networks for mesh reconstruction from a single RGB image",
journal = "Computer Aided Geometric Design",
volume = "72",
pages = "84--97",
year = "2019",
issn = "0167-8396",
doi = "",
url = "",
author = "Siyu Hu and Xuejin Chen"

Copyright © 2018 GCL, USTC