Deng Hanbing, Xu Tongyu, Zhou Yuncheng, Miao Teng, Zhang Yubo, Xu Jing, Jin Li, Chen Chunling. Body shape parts recognition of moving cattle based on DRGB[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(5): 166-175. DOI: 10.11975/j.issn.1002-6819.2018.05.022

    Body shape parts recognition of moving cattle based on DRGB

    • Abstract (translated from Chinese): Automatic recognition of the key body parts of moving beef cattle is the key to early detection of abnormal cattle behavior. In this paper, two modalities of cattle images (depth and RGB) were collected with a Kinect. Based on the RGB modality, a randomized nearest neighbor pixel comparison (RNNPC) method was proposed to capture cattle action samples automatically; based on the depth modality, a depth-mean method was proposed to filter the background of the color images while preserving the cattle's body shape, producing DRGB image samples. A recognizer was designed based on Fast R-CNN: eight classification networks modeled on AlexNet were designed and compared for classification accuracy, and the best network was chosen as the recognizer's base network. The recognition part of the network was then retrained on the DRGB samples, yielding a recognizer that meets the accuracy requirement. Experiments show that the valid data rate of RNNPC reaches 94%, the number of candidate regions generated by the SelectiveSearch algorithm on DRGB images is reduced by 90%, and the average classification accuracy of the recognition network reaches 75.88% at a processing rate of 4.32 frames/s, outperforming the original Fast R-CNN; the method can basically realize body part recognition of moving beef cattle.

       

      Abstract: Body shape parts of moving cattle are difficult to acquire and recognize automatically, especially in frames or images with similar color and texture. Abnormal cattle behavior is hard for humans to detect because detection requires continuous observation; the method presented here addresses this problem. In this paper, Microsoft's Kinect vision sensor is used to collect two modalities of information from cattle movement: depth and RGB (together, DRGB). The depth information is acquired by the infrared sensor, and each depth value gives the distance between the object and the sensor; the RGB information is acquired by the ordinary camera. Based on the color modality, a randomized nearest neighbor pixel comparison (RNNPC) method is proposed to capture continuous action frames while skipping static ones. By comparing pixel differences between two adjacent frames, the method judges whether an obvious or subtle action occurs between the two images; if the difference value exceeds a threshold, the camera automatically records the cattle's continuous movements. Based on the depth modality, the mean depth over the dynamic region found by RNNPC is calculated and used to filter invalid pixels, turning them into dark pixels in the continuous frames while preserving the intact shape of the cattle in the original picture (pixels with no change are kept). The filtering results show that the background area of the original image is reduced, which also directly reduces the number of candidate regions produced by the conventional recognition algorithm (SelectiveSearch is used in this paper).
According to the scale of the training samples, the network performance, and the characteristics of the key parts of cattle in continuous frames, the parameters of the deep convolutional neural network are adjusted; an upper limit on iterations is set, and training finishes when that limit is reached. Finally, with the trained network, the key parts of moving cattle can be recognized in the processed continuous frames. The experimental results show that the RNNPC method saves 72% of storage space, and the ratio of valid data in the remaining continuous frames reaches 94%. By filtering invalid pixels, the invalid background information and objects in the original images are almost entirely removed, and the number of candidate regions generated by the conventional object recognition algorithm is reduced by an order of magnitude: compared with the original image, the SelectiveSearch algorithm produces 90% fewer candidate regions on a DRGB image. By adjusting the network parameters, the convergence speed of the deep convolutional neural network on samples with similar color and texture is improved, while the time per training iteration does not increase significantly. The average classification accuracy of the network reaches 75.88% at an image processing rate of 4.32 frames/s, which under the same conditions is better than that of the original Fast R-CNN. With the method described above, body shape parts of moving cattle can be acquired and recognized automatically.
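The two pre-processing steps described above, RNNPC motion detection between adjacent frames and depth-mean background filtering to produce a DRGB sample, can be sketched roughly as follows. The paper does not publish its implementation, so every function name, parameter, and threshold below is an illustrative assumption rather than the authors' actual code:

```python
import numpy as np

def rnnpc_motion_score(prev, curr, n_samples=500, radius=1,
                       diff_thresh=30, rng=None):
    """Randomized nearest-neighbour pixel comparison (sketch).

    Samples random pixel locations and counts how many pixels of the
    current frame differ from every pixel in a small neighbourhood of
    the previous frame by more than diff_thresh. A high score suggests
    genuine motion between the two adjacent RGB frames; thresholding
    the score decides whether to record. All defaults are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = prev.shape[:2]
    ys = rng.integers(radius, h - radius, n_samples)
    xs = rng.integers(radius, w - radius, n_samples)
    moved = 0
    for y, x in zip(ys, xs):
        patch = prev[y - radius:y + radius + 1, x - radius:x + radius + 1]
        # A pixel counts as "moved" only if it differs from ALL pixels
        # in the previous frame's neighbourhood (nearest-neighbour test).
        dists = np.abs(patch.astype(int) - curr[y, x].astype(int)).sum(axis=2)
        if dists.min() > diff_thresh:
            moved += 1
    return moved / n_samples

def depth_mean_filter(rgb, depth, dynamic_mask, band=300):
    """Depth-mean background filtering (sketch).

    Computes the mean depth over the dynamic region found by RNNPC and
    blacks out RGB pixels whose depth lies outside a band around that
    mean, keeping the animal's silhouette; the result is the "DRGB"
    sample. `band` (in raw depth units) is an assumed tolerance.
    """
    mean_d = depth[dynamic_mask].mean()
    keep = np.abs(depth.astype(float) - mean_d) < band
    return np.where(keep[..., None], rgb, 0).astype(rgb.dtype)
```

Because the background pixels become uniformly dark, region-proposal methods such as SelectiveSearch naturally generate far fewer candidate windows on the filtered image, which is the effect the abstract quantifies.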

       
