Xue Yueju, Yang Xiaofan, Zheng Chan, Chen Changxin, Gan Haiming, Li Shimei. Lactating sow high-dangerous body movement recognition from depth videos based on hidden Markov model[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(13): 184-190. DOI: 10.11975/j.issn.1002-6819.2019.13.021

    Lactating sow high-dangerous body movement recognition from depth videos based on hidden Markov model

    • Abstract: The high-dangerous body movements of lactating sows are closely related to the survival rate of piglets and directly reflect the sows' maternal behavioral ability; these movements are closely tied to the frequency and duration of posture changes. In commercial piggeries, illumination variation, heat lamps, sow-piglet adhesion, body deformation, etc., make automatic identification of the posture changes of lactating sows difficult and challenging. This study took the Small-ears Spotted sows raised on the Lejiazhuang farm in Sanshui District, Foshan City, Guangdong Province as the research object, and used depth videos collected by Kinect 2.0 as the data source. We propose a localization and recognition algorithm for sow posture changes in depth videos based on Faster R-CNN and HMM (hidden Markov model). The algorithm consists of five steps: 1) a 5×5 median filter is used to denoise the video frames, and the contrast of the images is then improved by contrast-limited adaptive histogram equalization; 2) an improved Faster R-CNN model detects the most probable posture in each frame to form a posture sequence, and the five detection boxes with the highest probabilities are kept as candidate regions for action tube generation in the third step; 3) a sliding window with a length of 20 frames and a step size of 1 frame detects suspected change segments in the video, and the maximum-score action tube of each suspected change segment is then constructed with the Viterbi algorithm; 4) within the suspected change segments, each frame is segmented by Otsu thresholding and morphological processing, and the heights of the sow's trunk, tail and both sides of the body are computed to form height sequences; 5) the height sequence of each suspected change segment is fed into the HMM and classified as posture change or non-change, and finally each posture change segment is assigned a class according to the postures of the single-posture segments before and after it. According to the threat degree of sow accidents and the frequency of sow behaviors, posture changes were divided into four categories: descending motion 1, descending motion 2, ascending motion and rolling motion. The data set comprised 240 video segments covering different sow sizes, postures and posture changes; 120 segments were chosen as the test set, and the rest served as the training and validation sets for Faster R-CNN and HMM. The Faster R-CNN model was trained with the Caffe deep learning framework on an NVIDIA GTX 980Ti GPU (graphics processing unit), and the algorithm was developed on the Matlab 2014b platform. The experimental results showed that the Faster R-CNN model achieved high recognition accuracy with a detection time of 0.058 s per frame, so it could be practicably used in a real-time vision detection system. For sows with different body colors and sizes, the Faster R-CNN model generalized well. The HMM effectively identified posture change segments among the suspected change segments with an accuracy of 96.77%. The accuracy of posture change identification was 93.67%, and the recall rates of the four classes of posture change, i.e. descending motion 1, descending motion 2, ascending motion and rolling motion, were 90%, 84.21%, 90.77% and 86.36%, respectively. The success rate was 97.40% at an overlap threshold of 0.7, which showed that the optimized action tube tracked sows well in the commercial piggery scene, effectively overcoming the influence of heat lamps and sow body deformation. The method could provide a technical reference for 24-hour automatic recognition of lactating sow posture changes and lay a foundation for further research on sow high-dangerous body movement recognition and welfare evaluation.
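The denoising in step 1 can be sketched as follows. This is a naive, illustrative NumPy implementation (the function name `median_filter_5x5` is hypothetical, not from the paper); a production system would more likely call an optimized routine such as OpenCV's `cv2.medianBlur`:

```python
import numpy as np

def median_filter_5x5(img):
    """Naive 5x5 median filter for denoising a depth frame (step 1).
    Border pixels use an edge-replicated neighbourhood. Illustrative
    sketch only; not the paper's implementation."""
    pad = np.pad(img.astype(np.float64), 2, mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            # Median over the 5x5 window centred on (i, j).
            out[i, j] = np.median(pad[i:i + 5, j:j + 5])
    return out
```

A single depth spike (a common Kinect artifact) is replaced by the median of its neighbourhood, while flat regions pass through unchanged.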
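The action-tube construction in step 3 is a Viterbi dynamic program over per-frame candidate boxes. The abstract does not give the exact scoring function, so the sketch below assumes a common formulation: path score = sum of detection confidences plus `lam` times the IoU between consecutive boxes (all names and the value of `lam` are illustrative assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def viterbi_tube(frames, lam=1.0):
    """Link per-frame candidates into a maximum-score action tube.
    frames: list over time of [(box, score), ...], e.g. the top-5
    Faster R-CNN boxes per frame. Returns (best score, chosen boxes)."""
    dp = [s for _, s in frames[0]]  # initialise with frame-0 scores
    back = []                       # backpointers per transition
    for t in range(1, len(frames)):
        prev, cur = frames[t - 1], frames[t]
        new_dp, ptr = [], []
        for box, score in cur:
            # Best predecessor: accumulated score + overlap link score.
            best_j = max(range(len(prev)),
                         key=lambda j: dp[j] + lam * iou(prev[j][0], box))
            ptr.append(best_j)
            new_dp.append(dp[best_j] + lam * iou(prev[best_j][0], box) + score)
        dp, back = new_dp, back + [ptr]
    # Backtrack the highest-scoring path through the candidates.
    k = max(range(len(dp)), key=lambda i: dp[i])
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    path.reverse()
    return max(dp), [frames[t][i][0] for t, i in enumerate(path)]
```

With five candidates per frame and a 20-frame window, this evaluates at most 5×5 transitions per frame pair, so the tube for each suspected segment is found in a single cheap pass.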
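Step 5's change/non-change decision can be sketched as a likelihood comparison between two trained HMMs using the forward algorithm. The paper does not specify its observation model, so the sketch below assumes discrete observation symbols (e.g. quantized height values); all parameter values and function names are illustrative, not the trained models:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm. pi: initial state
    probabilities, A: state transition matrix, B: emission matrix
    (states x symbols)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s                      # rescale to avoid underflow
    return loglik

def classify_segment(obs, hmm_change, hmm_static):
    """Label a suspected segment as posture change vs non-change by
    comparing log-likelihoods under the two HMMs (step 5 sketch)."""
    lc = forward_loglik(obs, *hmm_change)
    ls = forward_loglik(obs, *hmm_static)
    return "change" if lc > ls else "non-change"
```

In the paper's pipeline the observations are the trunk/tail/side height sequences from step 4; here a one-dimensional symbol stream stands in for them.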
