DensePoint: Learning Densely Contextual Representation for Efficient Point Cloud Processing
Abstract

Point cloud processing is very challenging, as the diverse shapes formed by irregular points are often indistinguishable at first glance. A thorough grasp of an elusive shape requires sufficiently contextual semantic information, yet few works are devoted to this. Here we propose DensePoint, a general architecture for learning densely contextual representations for point cloud processing. Technically, it extends the regular grid CNN to irregular point configurations by generalizing a convolution operator, which preserves the permutation invariance of points and achieves efficient inductive learning of local patterns. Architecturally, it draws inspiration from the dense connection mode to repeatedly aggregate multi-level and multi-scale semantics in a deep hierarchy. As a result, densely contextual information, along with rich semantics, can be acquired by DensePoint in an organic manner, making it highly effective. Extensive experiments on challenging benchmarks across four tasks, as well as thorough model analysis, verify that DensePoint achieves state-of-the-art performance.
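The two ingredients described above, a permutation-invariant convolution over irregular points and dense feature reuse across levels, can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function names, shapes, and nonlinearity below are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

def point_conv(feats, weight):
    """Hypothetical permutation-invariant point 'convolution': a shared
    linear transform applied independently to every point's features,
    followed by a symmetric max aggregation over the group. Because the
    per-point transform is shared and max is order-independent, shuffling
    the points does not change the result."""
    transformed = feats @ weight      # (n_points, c_out), shared weights
    return transformed.max(axis=0)    # symmetric op -> permutation invariant

def dense_forward(feats, weights):
    """Dense connection mode: each stage consumes the concatenation of the
    input and ALL earlier stage outputs, so multi-level semantics are
    repeatedly aggregated as depth grows."""
    outputs = [feats]
    for w in weights:
        x = np.concatenate(outputs, axis=-1)   # densely reuse every level
        outputs.append(np.tanh(x @ w))         # produce a new semantic level
    return np.concatenate(outputs, axis=-1)    # densely contextual features
```

Note the channel growth typical of dense connectivity: with input channels `c0` and stage widths `c1, c2, ...`, the final representation has `c0 + c1 + c2 + ...` channels, since every level's output is kept and concatenated.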

Publication
In 2019 IEEE International Conference on Computer Vision (ICCV)