Abstract

Point cloud completion aims to estimate the complete shapes of objects from their partial scans. One of the main obstacles preventing current methods from being applied in real-world scenarios is the variety of structural losses in real-scanned objects, which can hardly be fully covered by the training samples. In this paper, we introduce the Patch Sequence Autoregression Network (PSA-Net), a learning-based method that can be trained without partial point clouds in the dataset and is inherently adaptable to input scans with different levels of shape incompleteness: it casts the restoration of unseen object parts as the prediction of missing tokens in local patch embedding sequences, and such prediction can start from any initial state. Specifically, we first introduce a Sequential Patch AutoEncoder that reconstructs complete point clouds from quantized patch feature sequences. Second, we establish a Mixed Patch Autoregression pipeline that can flexibly infer the whole sequence from any number of known tokens at any positions. Third, we propose a Voting-Based Mapping module that lets input points softly vote for their possibly related tokens in the sequence based on their local neighborhoods, which transforms partial point clouds into masked sequences at test time. Quantitative and qualitative evaluations on two synthetic and four real-world datasets demonstrate the competitive performance of our network compared with existing approaches.
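To make the core idea concrete, the following is a minimal sketch (not the authors' code) of predicting missing patch tokens from any subset of known tokens in a quantized patch sequence, which is the formulation the abstract describes. All module names, shapes, and the simple transformer configuration below are assumptions for illustration; the actual Mixed Patch Autoregression pipeline in the paper may differ.

```python
# Illustrative sketch: masked patch-token prediction over a quantized patch sequence.
# Hypothetical names and hyperparameters; not the authors' implementation.
import torch
import torch.nn as nn

class MaskedPatchPredictor(nn.Module):
    """Predicts missing patch tokens from an arbitrary subset of known tokens."""
    def __init__(self, vocab_size=1024, dim=256, num_patches=64, depth=4, heads=8):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size + 1, dim)   # extra index serves as [MASK]
        self.pos_emb = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, vocab_size)
        self.mask_id = vocab_size

    def forward(self, tokens, known_mask):
        # tokens: (B, N) quantized patch indices; known_mask: (B, N) bool,
        # True where the token is observed from the partial scan.
        x = torch.where(known_mask, tokens, torch.full_like(tokens, self.mask_id))
        h = self.encoder(self.token_emb(x) + self.pos_emb)
        return self.head(h)   # (B, N, vocab_size) logits over the patch codebook

# Usage: complete the sequence from any number of known tokens at any positions.
model = MaskedPatchPredictor()
tokens = torch.randint(0, 1024, (2, 64))
known = torch.rand(2, 64) > 0.5            # arbitrary masking ratio and positions
logits = model(tokens, known)
completed = torch.where(known, tokens, logits.argmax(-1))
```

The completed token sequence would then be decoded back to a full point cloud by the patch autoencoder; this sketch only covers the token-prediction step.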

Full Text