ZHANG Mengqi, HAN Qunyong, CHAI Qinqin, et al. A detection method for phalaenopsis seedling bed potted plants based on YOLOv11n-DEE[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2026, 43(1): 1-9. DOI: 10.11975/j.issn.1002-6819.202505195

    A detection method for phalaenopsis seedling bed potted plants based on YOLOv11n-DEE

    • In large-scale commercial phalaenopsis cultivation, seedling counting during the seedling stage is a core link in growth monitoring, resource allocation, and yield prediction. However, traditional manual counting is constrained by subjective human error and high labor intensity, while automated counting faces practical challenges including dense seedling arrangement, mutual leaf occlusion, and irregular growth forms. These factors make the bottleneck of insufficient counting accuracy and low efficiency increasingly prominent, seriously hindering the intelligent and refined development of cultivation management. To address these problems, a precise machine-vision method for detecting and counting potted seedlings on phalaenopsis seedbeds, YOLOv11n-DEE, is proposed based on YOLOv11n, specifically targeting the large counting errors of existing object detection and segmentation methods when seedlings are dense or occluded. The model is upgraded along three dimensions: network structure optimization, detection head design, and loss function improvement. First, a multi-branch module is introduced into the C3k2 module of the backbone and neck networks to construct the C3k2_DBB module. Through a multi-path feature extraction mechanism, this module breaks through the feature-capture limitations of a single convolution kernel; its parallel branch structure fuses features of different scales, enhancing the ability of the backbone and neck networks to detect objects of varying scales. Second, the detection head of YOLOv11n is replaced with an efficient decoupled detection head that incorporates group convolution.
The group convolution divides the input channels into multiple groups that are convolved separately, reducing the computational load and parameter count during training while preserving the integrity of feature extraction, and thus better balancing detection efficiency and accuracy. Third, the constructed dataset contains far more unoccluded seedlings than occluded ones; this serious sample imbalance leads to insufficient learning of the features of occluded seedlings. A loss function with an exponential moving average mechanism is therefore used to optimize model training: by dynamically adjusting the weights assigned to different samples, it strengthens the learning of features of the minority class (occluded seedlings) and effectively avoids the training bias caused by sample imbalance. Finally, a real dataset is constructed from camera images of phalaenopsis seedbed seedlings. Experiments on this dataset show that the proposed YOLOv11n-DEE model achieves a precision of 84.6% and an average precision of 86.6%. Compared with the YOLOv11n baseline, the counting accuracy and mAP@0.5 for dense, overlapping seedlings are improved by 5.0% and 4.0%, respectively; for occluded seedlings in particular, mAP@0.5 is increased by 7.4%. The number of parameters is reduced by 11.0%, and the detection speed is improved by 2.34 frames per second. Compared with DETR, RT-DETR-resnet50, Faster-RCNN, SSD, YOLOv8n, and YOLOv10n, YOLOv11n-DEE outperforms all of these algorithms on precision indicators, while its parameter count is reduced by 83.0%, 92.87%, 89.8%, 84.2%, 23.3%, and 14.4%, respectively, and its computational load is reduced by 46.2, 98.3, 97.5, 60.6, 3.0, and 3.1 GFLOPs.
Regarding detection performance under different lighting conditions, experiments are conducted on images collected from 9 AM to 4 PM, divided into 6 time periods. YOLOv11n-DEE outperforms YOLOv11n in every time period, and both models achieve their highest indicators between 3 PM and 4 PM. Visualization results show that both YOLOv11n and YOLOv11n-DEE can identify unoccluded seedlings, but YOLOv11n-DEE does so with higher confidence; moreover, it can effectively identify occluded seedlings, and seedlings previously misidentified, that YOLOv11n fails to detect. The improved model proposed in this study can therefore provide technical support for the precise identification of potted seedlings on phalaenopsis seedbeds.
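The multi-branch idea behind the C3k2_DBB module can be sketched in miniature. The toy 1-D example below (our illustration, not the authors' code; the real module works on 2-D feature maps with learned kernels) shows parallel branches with different kernel sizes applied to the same input and fused by summation, so the combined response mixes local context with a direct path:

```python
# Hypothetical 1-D illustration of multi-branch feature fusion, loosely
# mirroring the parallel-branch structure described for C3k2_DBB.

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution (zero padding, stride 1)."""
    pad = len(kernel) // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(k * xp[i + j] for j, k in enumerate(kernel))
            for i in range(len(x))]

x = [1.0, 2.0, 3.0, 4.0]
branch_3 = conv1d(x, [0.25, 0.5, 0.25])  # 3-tap branch: local context
branch_1 = conv1d(x, [1.0])              # 1-tap branch: identity-like path
fused = [a + b for a, b in zip(branch_3, branch_1)]  # branch fusion by sum
print(fused)  # -> [2.0, 4.0, 6.0, 6.75]
```

Because the branches are linear, such parallel kernels can in principle be merged into a single equivalent kernel after training, which is how diverse-branch designs keep inference cost low.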
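The parameter saving from the group convolution in the decoupled head follows from simple counting: splitting the channels into g groups divides the weight count of the layer by g. A minimal sketch (illustrative arithmetic only; the layer sizes and group count are assumptions, not values from the paper):

```python
# Parameter count of a standard 2-D convolution vs. the same layer with
# group convolution, as used conceptually in the efficient decoupled head.

def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution: each group maps c_in/groups
    input channels to c_out/groups output channels (bias omitted)."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * (c_out // groups) * k * k * groups

standard = conv_params(256, 256, 3)            # ordinary convolution
grouped = conv_params(256, 256, 3, groups=4)   # 4-way group convolution

print(standard, grouped)  # -> 589824 147456, i.e. 4x fewer weights
```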