Citation: ZHANG Shihao, SHEN Lei, SONG Lijie, et al. Identifying and positioning grape compound buds using RGB-D images[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2023, 39(21): 172-180. DOI: 10.11975/j.issn.1002-6819.202307014

    Identifying and positioning grape compound buds using RGB-D images

    Abstract: To achieve accurate identification and positioning of compound buds against complex backgrounds, this study proposed a visual recognition method for concurrent buds and secondary buds based on a combined far-and-near vision approach. By applying a Euclidean distance algorithm on the visual mapping plane, concurrent buds were detected in the far view, and secondary buds were then identified within the mapped concurrent-bud regions in the near view, improving the recognition accuracy of small targets in the field. Tests on 60 compound bud samples showed an overall average confidence of 0.905 and an average detection time of 18.1 ms. An online detection and positioning method for compound buds in the field was further proposed: the same compound bud was detected and positioned over 5 consecutive frames, with the initial-frame and latest-frame image data deleted and updated in real time until the relative error of the positioning coordinates across 5 consecutive frames fell below the error threshold, at which point the average positioning coordinates of the compound bud were taken. This improved the positioning accuracy of compound buds against natural backgrounds and the anti-interference capability under complex lighting. Test results showed positioning accuracies of ±0.916 and ±0.654 mm for concurrent and secondary buds, respectively; the repeatability and accuracy meet the requirements of secondary-bud removal. The whole system is reliable and effective, providing high-quality technical support for bud-removal robots to position concurrent and secondary buds online, and laying the foundation for automated bud removal in orchards.
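    The far-and-near combination step can be pictured with a small sketch. The following Python snippet is a minimal illustration rather than the paper's implementation: all names (within_mapped_region, filter_secondary_buds, the detection dictionaries, and the per-region radius) are hypothetical, and it assumes far-view concurrent-bud detections have already been projected onto the near-view plane as circular regions. A plain Euclidean distance test then keeps only the secondary-bud detections whose centers fall inside a mapped region.

    import math

    def within_mapped_region(center, region_center, radius):
        """Euclidean-distance test on the visual mapping plane: does a
        near-view detection center fall inside a concurrent-bud region
        mapped from the far view? (region shape assumed circular)"""
        return math.hypot(center[0] - region_center[0],
                          center[1] - region_center[1]) <= radius

    def filter_secondary_buds(secondary_dets, mapped_regions):
        """Keep only secondary-bud detections whose box center lies inside
        at least one mapped concurrent-bud region (hypothetical dict layout:
        boxes as x1/y1/x2/y2, regions as center/radius)."""
        kept = []
        for det in secondary_dets:
            cx = (det["x1"] + det["x2"]) / 2.0
            cy = (det["y1"] + det["y2"]) / 2.0
            if any(within_mapped_region((cx, cy), r["center"], r["radius"])
                   for r in mapped_regions):
                kept.append(det)
        return kept

    Restricting the near-view search to mapped regions in this way is what lets the secondary-bud detector operate on small targets without scanning the whole frame.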

       

  Abstract: Efficient and accurate identification and positioning of compound buds can greatly contribute to orchard robots performing automatic bud-removal operations, thereby improving the efficiency of flower and fruit thinning. This study aims to accurately identify and position small objects in complex backgrounds for grape bud removal in the field. (1) A visual recognition method was proposed for the concurrent bud and secondary bud using a combination of far and near views. Euclidean distance was applied on the visual mapping plane: the concurrent bud was identified in the far view, whereas the secondary bud was identified in the near view within the mapped concurrent-bud area, in order to improve the recognition accuracy of small targets in the field. The original images (2132 images) were randomly divided into a training set (1735 images) and a validation set (397 images) in the ratio of 8:2. Both sets were uniformly cropped to 2944×1656 pixels and then proportionally downscaled by a factor of 2.3 to 1280×720 pixels. Data augmentation was carried out on the concurrent-bud and secondary-bud datasets by random combinations of three transformations: scaling, flipping, and color gamut transformation. Images whose target features were distorted by overly aggressive augmentation were manually excluded from the dataset. The training set was thus enlarged to 15840 images. YOLOv5m and YOLOv5s were selected as the network models for concurrent-bud and secondary-bud detection, respectively. The trained models were 42.1 and 14.4 MB in size, with AP values of 0.702 and 0.773 and F1 scores of 0.685 and 0.765 on the concurrent-bud and secondary-bud test sets, respectively; the average inference time per image was 10.62 and 7.01 ms. Tests on 60 compound bud images showed that the overall average confidence of the method was 0.905, with an average detection time of 18.1 ms per image. (2) An online detection and positioning method was then proposed for the compound bud in the field. Online detection and positioning were introduced in the prediction stage to improve the depth camera's resistance to lighting interference when positioning in a complex environment and to reduce fluctuation and error in the depth information. The same compound bud was detected and positioned online over 5 consecutive frames. The initial-frame and latest-frame image data were deleted and updated in real time until the relative error of the positioning coordinates across 5 consecutive frames fell below the error threshold, at which point the average positioning coordinates of the compound bud were taken. Better performance was thereby achieved in the positioning accuracy of compound buds against natural backgrounds and in the resistance to lighting interference in complex light environments. The test results showed positioning accuracies of ±0.916 and ±0.654 mm for the concurrent and secondary buds, respectively. The repeatability and accuracy fully meet the requirements of secondary-bud removal, indicating a reliable and effective system. These findings provide high-quality technical support for bud-removal robots to position concurrent and secondary buds online for bud-removal operations, enabling automatic bud removal in orchards.
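  The sliding-window logic of the online positioning step can be sketched as follows. This is a minimal Python illustration under stated assumptions, not the authors' code: frame_stream, locate, and the default error threshold of 0.05 are all hypothetical (the abstract does not report the threshold value); the window of 5 frames follows the description above.

    from collections import deque

    def online_position(frame_stream, locate, err_threshold=0.05, window=5):
        """Slide a window of `window` consecutive per-frame (x, y, z)
        estimates of the same compound bud: each new frame replaces the
        oldest one until the relative error across the window drops below
        `err_threshold`, then the mean coordinates are returned."""
        coords = deque(maxlen=window)      # oldest frame dropped automatically
        for frame in frame_stream:
            coords.append(locate(frame))   # hypothetical per-frame RGB-D locator
            if len(coords) < window:
                continue
            # Mean of each coordinate over the window
            means = [sum(c[i] for c in coords) / window for i in range(3)]
            # Largest relative deviation of any frame from the window mean
            rel_err = max(
                abs(c[i] - means[i]) / (abs(means[i]) or 1.0)
                for c in coords
                for i in range(3)
            )
            if rel_err < err_threshold:    # stable over `window` consecutive frames
                return tuple(means)
        return None                        # stream ended before convergence

  Averaging only once the window has stabilized is what damps the frame-to-frame fluctuation of the depth readings under changing illumination, as described above.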

       
