MCBPnet: an efficient and lightweight recognition model for green apricot fruits

• Abstract: To address the limitations imposed on green apricot recognition by complex field environments and constrained device computing resources, this study proposes MCBPnet, a lightweight green apricot recognition model. The lightweight MobileNetV3 network replaces the backbone feature-extraction network of YOLOv8, reducing model complexity; CBAM (convolutional block attention module) is fused into the MobileNetV3 network and a BiFPN (bi-directional feature pyramid network) structure is introduced into the neck network, improving the model's ability to extract and fuse features from green apricot images; and a PConv (partial convolution) structure is adopted in the detection head to improve robustness and detection accuracy. When applied to green apricot detection experiments, MCBPnet achieves a detection speed of 109.890 frames/s, 70.33% higher than that of YOLOv8n, with a computational cost of 6.1 GFLOPs, 75.31% of YOLOv8n's, while its precision (P) and mean average precision (mAP50) reach 0.988 and 0.994. The model thus delivers high detection accuracy while remaining lightweight, enabling efficient, accurate, real-time detection of green apricot fruits and providing technical support for the automated recognition and picking of green apricots.
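The CBAM fusion described above can be made concrete with a minimal PyTorch sketch of a standard CBAM block (channel attention followed by spatial attention). The class names, reduction ratio, and kernel size below are illustrative assumptions rather than the paper's implementation; in MCBPnet such a block would be combined with MobileNetV3's inverted residual structure to form the IRCBAM module mentioned in the English abstract.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Aggregate spatial information with average and max pooling.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool along the channel axis, then learn where to attend.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        # Channel attention first, then spatial attention (standard CBAM order).
        return self.sa(self.ca(x))
```

Applying channel attention before spatial attention follows the original CBAM design; exactly where the block is inserted inside each inverted residual is a detail the abstract does not specify.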

       

Abstract: Intelligent and automated technologies have been widely applied to agricultural production in recent years. The green apricot is one of the most important economic fruits in Asia. However, traditional manual identification cannot meet the demands of large-scale production under complex field conditions and precision management, so fruit recognition and detection technologies are required to improve the accuracy and efficiency of production. In this study, an efficient and lightweight target recognition model, MCBPnet, was developed to automatically detect green apricot fruits using deep learning. First, the CBAM (convolutional block attention module) was integrated into the MobileNetV3 architecture, allowing the model to prioritize the image regions most relevant to fruit detection and to allocate more computational resources to task-related features, which significantly improved recognition precision. The resulting IRCBAM (inverted residual convolutional block attention module) was adopted as the backbone network of MCBPnet. A BiFPN (bi-directional feature pyramid network) was then introduced into the neck network to further enhance performance: the feature pyramid captures multi-scale features, so fruits of varying sizes can be detected within a single frame and a high detection rate is maintained across a wide range of fruit sizes and orientations, including cases where fruits are partially occluded by leaves or other fruits. Finally, the PConv (partial convolution) structure was adopted in the detection head, so that high detection accuracy is achieved even when only a portion of a fruit is visible, improving accuracy and robustness under partial visibility. The results demonstrate that MCBPnet achieves clear advantages across the key performance metrics, with a precision of 0.988 and an mAP50 of 0.994. Its detection speed is 109.890 frames/s, 70.33% higher than that of YOLOv8n, and its computational cost is 6.1 GFLOPs, 75.31% of that of the original model. The lightweight design also allows deployment on edge devices, such as drones or handheld scanners, without energy-intensive computing resources. In summary, MCBPnet attains high performance on all metrics and validates the effectiveness of the integrated techniques for maintaining high accuracy in agricultural automation. Combining advanced deep learning with a lightweight design is expected to improve the efficiency and accuracy of apricot detection during harvesting. The findings provide technical support for the automated recognition and picking of green apricots, and the model can contribute to modern, intelligent, and sustainable crop management, driving the transition toward more efficient agricultural practices.
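For the neck and head components, the sketch below illustrates, under the same hedged assumptions, a BiFPN-style fast normalized fusion of same-shaped feature maps and a PConv layer that convolves only a fraction of the input channels. The module names, the 0.25 channel ratio, and the 3×3 kernel are illustrative choices rather than the paper's actual configuration; building the full bidirectional pyramid and wiring these pieces into a YOLOv8-style neck and head is omitted.

```python
import torch
import torch.nn as nn


class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion of same-shaped feature maps."""

    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        # One learnable, non-negative weight per input feature map.
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features):
        w = torch.relu(self.weights)      # keep weights non-negative
        w = w / (w.sum() + self.eps)      # normalize for stable fusion
        return sum(wi * f for wi, f in zip(w, features))


class PConv(nn.Module):
    """Partial convolution: convolve only a fraction of the channels and
    pass the remaining channels through untouched, which reduces FLOPs."""

    def __init__(self, channels: int, ratio: float = 0.25, kernel_size: int = 3):
        super().__init__()
        self.conv_channels = max(1, int(channels * ratio))
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(
            x, [self.conv_channels, x.size(1) - self.conv_channels], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)
```

As a usage example, a BiFPN node that merges a top-down feature with the lateral input at the same scale could call WeightedFusion(2) on the two resized maps before a subsequent convolution, while PConv would replace a standard convolution inside the detection head.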

       
