Abstract

3D point clouds have received great attention because they are less affected by natural weather conditions such as fog, rain and snow, and they are widely used in fields such as transportation, energy and healthcare. Point cloud classification aims to assign category labels to 3D point cloud data, providing information to decision makers in different fields and enabling the development of solutions; it is therefore significant for autonomous driving, fault diagnosis and medical image analysis. Although the applications of point cloud classification are promising, it still faces many challenges. Owing to characteristics of point clouds such as disorder, sparseness and finiteness, traditional image processing and computer vision methods cannot be directly applied to point cloud data analysis. Directly applying convolutional neural networks cannot effectively extract point cloud features; feature extraction in some models is insufficient, and local and global information is not effectively exploited, which may lead to the loss of important feature information. To address these problems, a multi-feature fusion module combining the local and global features of the point cloud was proposed and, together with the offset attention mechanism, embedded into the network to extract deeper point cloud features. At the same time, a residual structure was introduced to make full use of the shallow extracted features and to prevent their loss caused by excessive network depth. Training and testing were performed on the ModelNet40 and ScanObjectNN classification datasets, together with ablation studies and visualization of part of the experimental data. The experimental results show that the overall classification accuracy of this model on ModelNet40 is 93.6%, which is 4.4, 0.7 and 0.4 percentage points higher than that of the PointNet, LDGCNN and PCT models, respectively.
The overall classification accuracy on ScanObjectNN is 83.7%, which is 5.8 and 5.6 percentage points higher than that of PointNet++ and DGCNN, respectively, demonstrating higher accuracy and robustness. © 2024 South China University of Technology.
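The offset attention mechanism referred to above (introduced by the PCT model) replaces the standard self-attention output with the difference between the input features and the attention output, followed by a residual connection. As a rough illustration only, the following is a minimal single-head numpy sketch of this idea; the weight matrices `wq`, `wk`, `wv` are hypothetical stand-ins for learned projections, and the LBR (Linear-BatchNorm-ReLU) transform applied to the offset in the full module is replaced here by the identity for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def offset_attention(x, wq, wk, wv):
    """Simplified single-head offset attention over per-point features.

    x          : (N, d) features for N points
    wq, wk, wv : (d, d) projection weights (learned in a real network)
    Returns (N, d) features.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # Standard scaled dot-product self-attention over all points.
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)   # (N, N)
    sa = attn @ v                                            # attention output
    # Offset attention: transform the offset (input minus attention output),
    # then add the input back as a residual. LBR omitted (identity) here.
    offset = x - sa
    return offset + x
```

This corresponds to the formulation F_out = LBR(F_in − F_sa) + F_in; the offset term is argued to act like a discrete Laplacian over the point features, which empirically benefits point cloud tasks.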

Full text