    Li Xiuhua, Wei Peng, He Jiaxi, Li Minzan, Zhang Muqing, Wen Biaotang. Field plant point cloud registration method based on Kinect V3 depth sensors[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(21): 45-52. DOI: 10.11975/j.issn.1002-6819.2021.21.006


    Field plant point cloud registration method based on Kinect V3 depth sensors

• Abstract: Accurately building a three-dimensional (3D) point cloud of a plant is the prerequisite for high-throughput acquisition of the physical parameters of its parts from point clouds. To achieve 3D point cloud registration of plants in complex field environments, this study proposed an automatic registration method for field plant point clouds based on multiple calibration balls, and verified it on point cloud data of several crops collected from different angles in a simple indoor scene and a complex field scene. The method combines the Random Sample Consensus (RANSAC) algorithm with the concept of point cloud subtraction to automatically extract multiple calibration balls from the down-sampled point cloud, compensating for the fact that RANSAC can extract only a single object at a time. The 3D point sets are then matched automatically using the distances between the ball centers. Finally, singular value decomposition is used to solve the rotation-translation matrix and complete the automatic registration. The registration results for the crops in the different scenes showed that, when the point clouds taken at horizontal 90°, 180°, and 270° and from the vertical direction were registered to the horizontal 0° point cloud, the average axial errors were 5.8-17.4 mm and the average point position errors were 13.1-28.9 mm, comparable to the results of LiDAR360, a commercial software relying on manual registration, while the automation of the registration process was markedly higher and its efficiency improved by 67%. The proposed method achieves high-precision automatic registration of plant point clouds acquired by a low-cost depth camera in complex field environments, offering a low-cost, feasible solution for extracting plant phenotypic parameters in the field.
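The paper does not include source code, so the sketch below is only a minimal numpy illustration of the RANSAC-plus-point-cloud-subtraction idea described above; the function names (fit_sphere, ransac_sphere, extract_ball_centers), the inlier tolerance, the iteration count, and the radius gate are all assumed placeholders rather than the authors' implementation. The point of the subtraction step is that RANSAC yields only one best model per run, so removing each detected ball's inliers forces the next run onto a different ball.

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere through >=4 points.
    From |p - c|^2 = r^2: 2*p.c + (r^2 - |c|^2) = |p|^2, linear in (c, k)."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, cz, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([cx, cy, cz])
    return center, np.sqrt(k + center @ center)

def ransac_sphere(pts, radius, tol=0.01, iters=500, rng=None):
    """Fit spheres to random 4-point samples; keep the model of roughly the
    known ball radius that collects the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_center, best_inliers = None, np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        c, r = fit_sphere(pts[rng.choice(len(pts), 4, replace=False)])
        if not np.isfinite(r) or abs(r - radius) > 0.2 * radius:
            continue  # discard models far from the known ball radius
        inliers = np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol
        if inliers.sum() > best_inliers.sum():
            best_center, best_inliers = c, inliers
    return best_center, best_inliers

def extract_ball_centers(pts, n_balls, radius):
    """Point cloud subtraction: remove each detected ball's inliers so the
    next RANSAC run can only land on a different ball."""
    centers = []
    for _ in range(n_balls):
        center, inliers = ransac_sphere(pts, radius)
        if center is None:
            break
        centers.append(center)
        pts = pts[~inliers]  # subtract the ball just found
    return np.array(centers)
```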

       

Abstract: An accurate three-dimensional (3D) point cloud model is a crucial step toward extracting the physical phenotyping parameters of plants, such as plant height, leaf number, and leaf area. In this study, an automatic point cloud registration method based on multiple calibration balls was proposed to realize 3D modeling of plants against the complex background of the field. A low-cost depth sensor (Kinect V3) was selected to capture the data, and the registration performance was evaluated from multiple viewing angles on several plants in both indoor and in-field scenes. The method consisted of four procedures: point cloud filtering and down-sampling, extraction of multiple calibration balls, correspondence point matching, and calculation and application of the transformation matrix. 1) Pass-through filtering with a few boundary thresholds was used to reduce noise, while bounding box compression was used to down-sample the point clouds. 2) The Random Sample Consensus (RANSAC) algorithm was used to extract the calibration balls; since RANSAC can extract only a single object at a time, a concept of point cloud subtraction was proposed and combined with RANSAC to extract multiple calibration balls automatically, after which the center coordinates of each calibration ball were calculated as the feature points. 3) The distances between any two feature points were calculated, and an automatic matching of correspondence points using this distance information was proposed to realize self-matching of the calibration balls. 4) Singular value decomposition was adopted to solve the transformation (rotation and translation) matrix, with which the registration of two pieces of point clouds was eventually accomplished. The experiments were carried out in two scenes: an indoor scene with a flat, clean surface, and a field scene with uneven, complex conditions. Five plants were chosen as research objects: one sugarcane (indoor), one sorghum (indoor), and three young banana plants (in-field). The Kinect V3 sensor captured the point clouds of each object in five orientations (horizontal 0°, 90°, 180°, and 270°, and one vertical orientation). A commercial point cloud processing software, LiDAR360, was adopted for comparison with manual registration. The object point clouds (horizontal 90°, 180°, 270°, and vertical) of each plant were registered into the coordinate system of the source point cloud (horizontal 0°). Registration performance was evaluated by calculating the axial and point position errors between the transformed and source coordinates of the centers of all calibration balls. The results showed that the point clouds from different orientations were successfully registered in both the indoor and in-field environments, and the generated 3D plant models were clear in shape. Specifically, the registration in the different scenes yielded average axial errors of 5.8-17.4 mm and average point position errors of 13.1-28.9 mm, comparable to manual registration with LiDAR360. In terms of computation time, the automatic method took about 50 s on average from calibration ball extraction to registration, whereas LiDAR360 took about 150 s with manual selection of correspondence points, an efficiency gain of 67%. Consequently, the method can be expected to serve high-precision automatic registration of plant point clouds acquired by low-cost depth cameras in complex field environments, providing a low-cost and feasible solution for 3D modeling and extraction of the phenotyping parameters of field plants.
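Step 1 of the pipeline is conventional enough to sketch. Below is a minimal numpy rendering of a pass-through filter and a down-sampler; the boundary thresholds and cell size are invented placeholders, and "bounding box compression" is interpreted here as collapsing the points inside each cell of a grid over the bounding box to their centroid, which is one plausible reading rather than necessarily the authors' exact operator.

```python
import numpy as np

def pass_through(pts, x=(-1.0, 1.0), y=(-1.0, 1.0), z=(0.3, 2.5)):
    """Pass-through filter: keep only points inside axis-aligned bounds,
    removing the background and most depth noise in one step."""
    m = ((pts[:, 0] >= x[0]) & (pts[:, 0] <= x[1]) &
         (pts[:, 1] >= y[0]) & (pts[:, 1] <= y[1]) &
         (pts[:, 2] >= z[0]) & (pts[:, 2] <= z[1]))
    return pts[m]

def grid_downsample(pts, cell=0.005):
    """Down-sample by replacing all points in each grid cell of the
    bounding box with their centroid."""
    keys = np.floor((pts - pts.min(axis=0)) / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, pts)               # accumulate points per cell
    counts = np.bincount(inv).reshape(-1, 1)
    return sums / counts                     # centroid of each occupied cell
```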

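Step 3 exploits the fact that a rigid motion preserves the distances between ball centers, so the centers can be matched across views by comparing pairwise-distance patterns. The brute-force sketch below (hypothetical function name and tolerance, and it assumes both views yield the same number of centers) illustrates the idea; the paper's search may well be more refined.

```python
import numpy as np
from itertools import permutations

def match_by_distances(src_centers, dst_centers, tol=0.02):
    """Match ball centers across two views. A rigid motion preserves all
    pairwise distances, so the correct pairing is the permutation whose
    permuted distance matrix agrees with the source's. Brute force is
    acceptable because only a handful of balls are in the scene."""
    d_src = np.linalg.norm(src_centers[:, None] - src_centers[None], axis=-1)
    d_dst = np.linalg.norm(dst_centers[:, None] - dst_centers[None], axis=-1)
    best_perm, best_err = None, np.inf
    for perm in permutations(range(len(dst_centers))):
        err = np.abs(d_src - d_dst[np.ix_(perm, perm)]).max()
        if err < best_err:
            best_perm, best_err = perm, err
    if best_err > tol:
        raise ValueError("no distance-consistent pairing within tolerance")
    return np.array(best_perm)  # dst_centers[result[i]] matches src_centers[i]
```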
       
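Step 4 is the classical SVD (Kabsch) solution for the rigid transform between matched point sets, followed by the error metrics the abstract reports. The sketch assumes the axial error is the per-axis absolute difference and the point position error the Euclidean distance between transformed and reference ball centers, consistent with the abstract's wording.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Kabsch/SVD solution of R, t minimizing ||(src @ R.T + t) - dst||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # fix an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def registration_errors(src, dst, R, t):
    """Average per-axis (axial) and Euclidean (point position) errors
    between transformed source centers and reference centers."""
    moved = src @ R.T + t
    axial = np.abs(moved - dst).mean(axis=0)          # mean |dx|, |dy|, |dz|
    point = np.linalg.norm(moved - dst, axis=1).mean()
    return axial, point

# Usage: R, t = rigid_transform_svd(src_centers, dst_centers)
#        registered = side_view_cloud @ R.T + t   # merge into the 0° frame
```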

