Abstract
Multi-modal data fusion of LiDAR (Laser Imaging, Detection, and Ranging) and a binocular camera is important in research on 3D reconstruction. The two sensors have their own advantages and disadvantages, and through data fusion they can complement each other to achieve better reconstruction results. To achieve data fusion, the two kinds of data must first be unified into the same coordinate system, so the calibration of the extrinsic parameters between the LiDAR and the camera is critical to 3D reconstruction. Due to the sparsity of the LiDAR point cloud and its positioning error, it is challenging to accurately extract feature points for constructing accurate point correspondences when calibrating the extrinsic parameters between a LiDAR and a stereo camera. In addition, most calibration methods ignore the fact that LiDAR measures in a spherical coordinate system and directly use the Cartesian coordinate measurements for calibration, which introduces anisotropic coordinate errors and reduces the calibration accuracy. This paper proposes a calibration method that minimizes the isotropic spherical coordinate error. Firstly, a novel calibration object using centroid feature points is proposed to improve the extraction accuracy of feature points. Secondly, the anisotropic LiDAR Cartesian coordinate errors are converted into isotropic spherical coordinate errors, and the extrinsic parameters are solved by directly minimizing the spherical coordinate error. Experiments show that the proposed method has advantages over the anisotropic weighting method: it ensures a globally optimal solution and greatly reduces the number of calibration samples required, at the cost of some accuracy. With an optimal calibration error of 2.75 mm, the proposed method reduces the amount of calibration data by about 54.5% while sacrificing 3.6% accuracy. © 2023 South China University of Technology.
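The abstract's core idea rests on the fact that a LiDAR natively measures range and two angles, so its error is isotropic in spherical coordinates but anisotropic after conversion to Cartesian coordinates. The paper's actual objective function is not given here; the following is only a minimal sketch of the standard Cartesian-to-spherical conversion (range, azimuth, elevation) that such a formulation presupposes, with hypothetical function and variable names:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert a Cartesian LiDAR point (x, y, z) to spherical coordinates.

    Returns (r, azimuth, elevation):
      r         -- range from the sensor origin,
      azimuth   -- angle in the x-y plane from the +x axis, in radians,
      elevation -- angle above the x-y plane, in radians.
    """
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    # atan2 form avoids division by zero when the point lies on the z axis.
    elevation = math.atan2(z, math.hypot(x, y))
    return r, azimuth, elevation

# A residual expressed in these coordinates can be weighted uniformly,
# since the range and angular noise are (approximately) independent and
# direction-invariant, unlike the correlated Cartesian components.
```

For example, a point one meter straight ahead on the x axis maps to range 1 with zero azimuth and elevation, while a point on the y axis has azimuth π/2.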