<RECORD 1>
Accession number:20213210735500
Title:Prediction of soybean yield by using RGB model with skew distribution pattern of canopy leaf color
Title of translation:用冠层叶色偏态分布模式RGB模型预测大豆产量
Authors:Zhang, Pei (1, 2); Chen, Zhengmeng (3); Ma, Shundeng (1); Yin, Di (1); Jiang, Haidong (1)
Author affiliation:(1) Key Laboratory of Crop Physiology and Ecology in Southern China, Ministry of Agriculture, Jiangsu Collaborative Innovation Center for Modern Crop Production, National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing; 210095, China; (2) Jiangsu Meteorological Bureau, Nanjing; 210008, China; (3) Longyan Company of Fujian Provincial Tobacco Corporation, Longyan; 364000, China
Corresponding author:Jiang, Haidong(hdjiang@njau.edu.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:120-126
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:With the increasing maturity of digital imaging technology and the growing availability of high-resolution camera equipment, the advantages of high resolution and low cost have encouraged the use of digital imaging for more qualitative and quantitative description of plant phenotypic traits. The RGB model is the most commonly used color representation for digital images. To explore the feasibility of using the color gradation distribution parameters of the RGB model for soybean yield prediction, and to verify the universality of the method across fertilizer treatments and varieties, two soybean varieties, Qujing and Xudou 18, were grown in split-plot field experiments with different planting densities and nitrogen fertilizer levels. Digital cameras carried by an Unmanned Aerial Vehicle (UAV) collected soybean canopy images at three important reproductive growth stages. The results showed that the cumulative distributions of canopy color gradation at the flowering, pod-setting, and grain-filling stages all conformed to a skewed distribution, and a total of 20 Color Gradation Skewness-Distribution (CGSD) parameters were obtained by skewness analysis. These parameters simultaneously described the shade and the distribution of canopy leaf color. The 20 CGSD parameters differed significantly among the flowering, pod-setting, and grain-filling stages, and the variation trend of the color depth parameters (mean, median, and mode) was opposite to that of the distribution parameters (skewness and kurtosis). A soybean yield prediction model was constructed from the CGSD parameters using multiple stepwise regression (P=0.012). The model had high estimation accuracy in both the modeling group and the validation groups: the average prediction accuracy reached 91.30% in the modeling group and 87.33% across the 18 plots of the nitrogen-treatment validation group. Although the prediction accuracy for the validation group of different varieties was lower than that of the modeling group and the nitrogen-treatment validation group, it was still close to 80%. In conclusion, the RGB color model based on the skewness distribution provided detailed soybean canopy image information, and the canopy color information was described quantitatively and systematically in terms of color depth, distribution bias, and uniformity. The yield prediction model based on CGSD parameters therefore had high prediction accuracy and can be widely used to predict the yield of soybean grown under different production conditions. At the same time, the use of UAVs and digital cameras improves the efficiency and reduces the cost of image acquisition, which is conducive to the popularization and application of this method. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
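The abstract does not give the formulas for the 20 CGSD parameters, so the following Python sketch only illustrates the general idea: per-channel color-gradation statistics (mean, median, mode, skewness, kurtosis) computed from an RGB canopy image. The function name, channel set, and file path are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): compute CGSD-style parameters for
# each RGB channel of a canopy image. The paper derives 20 CGSD parameters
# whose exact definitions are not given in this abstract.
import numpy as np
from scipy import stats
from PIL import Image

def cgsd_parameters(image_path):
    img = np.asarray(Image.open(image_path).convert("RGB"))
    params = {}
    for i, band in enumerate("RGB"):
        values = img[:, :, i].ravel().astype(float)
        hist = np.bincount(values.astype(int), minlength=256)
        params[band] = {
            "mean": values.mean(),
            "median": float(np.median(values)),
            "mode": int(hist.argmax()),          # most frequent gradation value
            "skewness": stats.skew(values),      # asymmetry of the distribution
            "kurtosis": stats.kurtosis(values),  # peakedness of the distribution
        }
    return params

# Example: features = cgsd_parameters("canopy_plot_01.jpg")
# Per-channel statistics such as these could then feed a multiple stepwise
# regression against measured plot yields.
```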
Number of references:33
Main heading:Predictive analytics
Controlled terms:Antennas - Color - Digital cameras - Forecasting - Higher order statistics - Image acquisition - Image enhancement - Imaging techniques - Nitrogen fertilizers - Regression analysis - Unmanned aerial vehicles (UAV) - Video cameras
Uncontrolled terms:Color representation - Cumulative distribution - Digital-imaging technology - Distribution parameters - High resolution camera - Multiple stepwise regression - Quantitative description - Yield prediction models
Classification code:652.1 Aircraft, General - 723 Computer Software, Data Handling and Applications - 741.1 Light/Optics - 742.2 Photographic Equipment - 746 Imaging Techniques - 804 Chemical Products Generally - 922.2 Mathematical Statistics
Numerical data indexing:Percentage 8.00e+01%, Percentage 8.73e+01%, Percentage 9.13e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.014
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 2>
Accession number:20213210735479
Title:Predicting fuel consumption of grain combine harvesters based on random forest
Title of translation:谷物联合收割机油耗随机森林预测模型
Authors:Yang, Lili (1); Tian, Weize (1); Xu, Yuanyuan (1); Wu, Caicong (1)
Author affiliation:(1) College of Information and Electrical Engineering, China Agricultural University, Beijing; 100083, China
Corresponding author:Wu, Caicong(wucc@cau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:275-281
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:Agricultural machinery is an important component of modern agriculture. In recent years, the amount of agricultural machinery has continually increased, and so has the fuel it consumes. The fuel consumption of agricultural machinery is directly related to agricultural production costs and the vital interests of farmers, so estimating it is of great significance for environmental governance, operator evaluation, and agricultural cost accounting. Unlike road vehicles, agricultural machinery is affected by more complex factors, and taking driving conditions as the only consideration cannot accurately predict its fuel consumption. Random forest, a typical representative of ensemble learning, has been applied in many fields, has strong fitting ability for nonlinear data, and is widely used in vehicle fuel consumption prediction. The purpose of this article is to predict the fuel consumption of the grain harvester WORLD 4LB-150AA while working in the field. Based on the engine operating condition data and driving condition data collected by the harvester's CAN (Controller Area Network) bus and GPS (Global Positioning System) terminal, seven indicators were constructed: engine mean torque, engine mean speed, average speed, mean acceleration, mean deceleration, acceleration standard deviation, and deceleration standard deviation. The acquisition interval of the averaged data is 1.3 s, and the total number of records is 130 788. The machines that provided the data worked in six provinces: Liaoning, Jilin, Shandong, Jiangsu, Zhejiang, and Hubei. The correlations between the seven indicators and fuel consumption were examined using the Spearman correlation coefficient. According to China's agricultural divisions, the six provinces were grouped into three regions (northeast, plain, and hilly), and the fuel consumption of the same grain harvesters in different regions was analyzed. Based on the above analysis, a fuel consumption prediction model of the harvester based on Random Forest was established and compared with one based on a support vector machine. The results showed that fuel consumption is correlated with all indicators: it is highly correlated with engine mean torque, engine mean speed, and average speed, all with correlation coefficients above 0.6, followed by mean acceleration, mean deceleration, acceleration standard deviation, and deceleration standard deviation, whose correlation coefficients are above 0.4. There are significant differences in the fuel consumption of harvesters working in different regions; areas with high output per unit area also show relatively high fuel consumption. Moreover, the Random Forest model achieves more accurate prediction of fuel consumption during harvester operation: the root mean square error (RMSE) is 0.14, the mean absolute error (MAE) is 0.24, and the coefficient of determination R² is 0.84. The proposed method can provide a reference for optimizing the working conditions of agricultural machinery and for precise fuel consumption monitoring. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
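As an illustration of the modeling chain described above (Spearman screening of the seven indicators followed by a random-forest regressor evaluated with RMSE, MAE, and R²), the sketch below uses scikit-learn and SciPy. The CSV file and column names are hypothetical; this is not the authors' code.

```python
# Illustrative random-forest fuel-consumption model built from the seven
# working-condition indicators named in the abstract.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

FEATURES = ["engine_mean_torque", "engine_mean_speed", "average_speed",
            "mean_acceleration", "mean_deceleration",
            "acceleration_std", "deceleration_std"]

df = pd.read_csv("harvester_records.csv")            # hypothetical CAN/GPS export
for col in FEATURES:                                  # Spearman correlation screening
    rho, _ = spearmanr(df[col], df["fuel_consumption"])
    print(f"{col}: rho = {rho:.2f}")

X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["fuel_consumption"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("MAE: ", mean_absolute_error(y_test, pred))
print("R2:  ", r2_score(y_test, pred))
```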
Number of references:31
Main heading:Combines
Controlled terms:Agricultural robots - Control system synthesis - Decision trees - Engines - Forecasting - Fuels - Global positioning system - Grain (agricultural product) - Harvesters - Mean square error - Predictive analytics - Random forests - Road vehicles - Statistics - Support vector machines
Uncontrolled terms:Agricultural productions - Cans (controller area network) - Coefficient of determination - Correlation coefficient - Engine operating conditions - Environmental governances - Gps (global positioning system) - Spearman correlation coefficients
Classification code:723 Computer Software, Data Handling and Applications - 731.1 Control Systems - 821.1 Agricultural Machinery and Equipment - 821.4 Agricultural Products - 922.2 Mathematical Statistics - 961 Systems Science
Numerical data indexing:Time 1.30e+00s
DOI:10.11975/j.issn.1002-6819.2021.09.031
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 3>
Accession number:20213210735554
Title:Dynamic task allocation method for the same type agricultural machinery group
Title of translation:同种农机机群动态作业任务分配方法
Authors:Wang, Meng (1); Zhao, Bo (1); Liu, Yangchun (1); Wei, Liguo (1); Wang, Fengzhu (1); Fang, Xianfa (1)
Author affiliation:(1) State Key Laboratory of Soil Plant Machine System Technology, Chinese Academy of Agricultural Mechanization Sciences, Beijing; 100083, China
Corresponding author:Fang, Xianfa(fangxf@caams.org.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:199-210
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:A multi-robot system often needs to change the robots' behavior in response to a dynamic environment, particularly for multi-task operations in sustainable agriculture. Dynamic task allocation is therefore essential for improving the overall performance of a group of agricultural machines of the same type. However, it remains challenging for such a group to determine task assignments efficiently under local observations and unexpected conditions. In this study, a dynamic task allocation strategy was proposed for a same-type agricultural machinery group using an improved Contract Net Protocol (CNP). A cost function was established for task assignment using the maximum operating time among the machines, the fuel consumption, and the road transfer distance of the group. Path planning combining straight-line and bypass movements was developed for field operation in single and adjacent fields to maximize machine efficiency. Task bidding was built on the cost function of each machine following the CNP bidding process. Several specific measures were adopted in the improved CNP to balance tasks with fewer server calculations, less communication time, and shorter non-operational distances, covering the selection of the tenderee, the setting of the bidding threshold, task redistribution for the successful bidder, and task exchange between machines. A systematic simulation of dynamic task allocation was carried out for two cases, newly added tasks and machine failure, with operating time taken as the operating cost and machines of different performance treated as one group. Experiments on multi-machine cooperative dynamic task allocation were conducted at different times, where different numbers of tasks were used as original tasks and some as newly added tasks; in the machine-failure case, all tasks were used as original tasks. The simulation results showed that, for newly added tasks, the cost of the improved CNP was 0.83%-8.05% lower than that of the traditional CNP, while the number of communications with the server was reduced by 80%-85%. For machine failure, the cost of the improved CNP was 1.77%-12.89% lower than that of the traditional CNP, while the number of communications with the server was reduced by 77.4%. The simulated data demonstrated that the improved CNP performed much better in multi-machine cooperative dynamic task allocation than the traditional CNP. Finally, seeding operation data from a farm in the Hinggan League of Inner Mongolia, China, were selected for verification. An operation day with newly added tasks was chosen, and multi-machine cooperative dynamic task allocation with the improved CNP was performed at various task allocation moments of the daily operation. The working time of that day before and after dynamic task assignment with the improved CNP was compared systematically. Specifically, the improved CNP reduced the cost by 30.20%-34.09% under different task allocation times, indicating better performance and higher efficiency of dynamic task allocation for precision agricultural production. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
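A toy sketch of a single contract-net bidding round is given below to make the mechanism concrete: each machine returns a cost-function bid for an announced task and the lowest bid wins, subject to an optional bidding threshold. The cost terms are heavily simplified relative to the paper's cost function, and the improved-CNP elements (tenderee selection, task redistribution, task exchange) are not reproduced.

```python
# Toy sketch of one contract-net-protocol (CNP) bidding round under a
# simplified cost function; not the authors' improved CNP.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    position: float      # 1-D position along a road (km), for illustration
    busy_time: float     # operating time already assigned (h)

def bid(machine, task_pos, task_time, travel_speed=20.0):
    # Simplified cost: already-assigned operating time + travel time + task time.
    travel = abs(machine.position - task_pos) / travel_speed
    return machine.busy_time + travel + task_time

def allocate(task_pos, task_time, machines, threshold=None):
    bids = {m.name: bid(m, task_pos, task_time) for m in machines}
    winner = min(bids, key=bids.get)
    if threshold is not None and bids[winner] > threshold:
        return None, bids          # no award: best bid exceeds the bidding threshold
    return winner, bids

fleet = [Machine("M1", 0.0, 2.0), Machine("M2", 15.0, 0.5)]
print(allocate(task_pos=10.0, task_time=1.5, machines=fleet))
```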
Number of references:36
Main heading:Agricultural machinery
Controlled terms:Agricultural robots - Cost functions - Efficiency - Multipurpose robots - Seed
Uncontrolled terms:Agricultural productions - Contract net protocols - Cooperative dynamics - Dynamic task allocation - Operational distances - Sustainable agriculture - Systematic simulation - Task re distributions
Classification code:731.5 Robotics - 821.1 Agricultural Machinery and Equipment - 821.4 Agricultural Products - 913.1 Production Engineering - 921.5 Optimization Techniques
Numerical data indexing:Percentage 1.77e+00% to 1.29e+01%, Percentage 3.02e+01% to 3.41e+01%, Percentage 7.74e+01%, Percentage 8.00e+01% to 8.50e+01%, Percentage 8.30e-01% to 8.05e+00%
DOI:10.11975/j.issn.1002-6819.2021.09.023
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 4>
Accession number:20213210735475
Title:Feasibility of variable rate spraying of centrifugal UAV using network RTK
Title of translation:基于网络RTK的离心式无人机变量施药可行性初探
Authors:Qi, Haixia (1, 2, 4); Zhou, Jingkang (1); Li, Chengjie (1); Chen, Pengchao (2, 3); Liang, Yu (1); Huang, Guizhen (1); Zou, Jun (1)
Author affiliation:(1) College of Engineering, South China Agricultural University, Guangzhou; 510642, China; (2) National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou; 510642, China; (3) College of Electronic Engineering, South China Agricultural University, Guangzhou; 510642, China; (4) Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou; 510642, China
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:81-89
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:Unmanned Aerial Vehicles (UAVs) have been widely used as a new technology for agricultural aviation plant protection and pest control in China. However, the accuracy of UAV pesticide spraying and its operational efficiency still need to be improved. In this study, a Variable Rate Application (VRA) centrifugal spraying system was designed for precise placement and timing of pesticide application under specific field conditions using network Real-Time Kinematic (RTK) technology. An STM32F103 single-chip microcomputer was used as the core controller. A serial port of the controller was used to obtain Global Positioning System (GPS) information, and a network Data Terminal Unit (DTU) module was connected via the controller's serial port to realize network RTK. The control voltage regulated the speeds of the centrifugal nozzle motor and the peristaltic pump, with Pulse Width Modulation (PWM) controlling the output voltage from the core controller to the motor armatures. The speed of the centrifugal nozzle motor determined the output droplet size, while the speed of the peristaltic pump determined the flow rate and spray volume. Before operation, the agronomic dataset was acquired with an airborne multispectral camera, a ground object spectrometer, and a handheld GPS, and a prescription map was constructed in ArcGIS. During operation, the GPS module captured the longitude and latitude for the system to read and analyze. The system continuously located the UAV in real time during spraying and matched its position to an orthogonal grid; it then adjusted the duty cycle of the output PWM in real time according to the decision of the prescription map. The speeds of the centrifugal nozzle motor and the peristaltic pump thereby controlled the droplet size and application rate of the UAV. The operation data were finally uploaded to a monitoring platform for real-time display and storage. Spraying experiments were carried out at the research base of South China Agricultural University in Zengcheng, Guangdong Province, China. The airborne spraying device was mounted on a DJI MG-1P plant protection UAV. Droplet sampling points were set to match the planting density of the crops, and water-sensitive paper was used to collect droplet data. After each UAV operation, the water-sensitive paper was collected into plastic bags, scanned with an hp4678 scanner, and analyzed with DepositScan software for each sampling area, yielding a detailed dataset of droplet deposition amount, coverage density, and particle size. The results demonstrated that the positional accuracy of the device was within 0.6 m at an operating speed of 2 m/s, and the application response stabilized within 2.2 m under continuous operation. Droplet size varied smoothly across prescription boundaries as the rotating speed of the centrifugal nozzle increased. The findings provide a sound basis for improving variable-rate application technology for plant protection UAVs in precision agriculture. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
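The following sketch illustrates the control idea described above, matching an RTK/GPS fix to an orthogonal prescription grid and converting the prescribed rate into a PWM duty cycle for the peristaltic pump, written as host-side Python rather than STM32 firmware. The grid origin, cell size, rate values, and duty-cycle mapping are all assumptions.

```python
# Hedged sketch (not the embedded firmware): look up the prescription cell
# for a GPS fix and map the prescribed flow rate to a pump PWM duty cycle.
import numpy as np

prescription = np.array([[0.0, 1.0, 2.0],     # L/min per grid cell (hypothetical)
                         [1.0, 2.0, 3.0]])
ORIGIN = (113.6400, 23.2400)                  # grid origin (lon, lat), hypothetical
CELL_DEG = 0.0001                             # cell size in degrees, hypothetical

def cell_of(lon, lat):
    col = int((lon - ORIGIN[0]) / CELL_DEG)
    row = int((lat - ORIGIN[1]) / CELL_DEG)
    row = min(max(row, 0), prescription.shape[0] - 1)
    col = min(max(col, 0), prescription.shape[1] - 1)
    return row, col

def pump_duty_cycle(rate_l_min, max_rate=3.0):
    # Linear mapping from prescribed flow rate to PWM duty cycle (0-100 %).
    return 100.0 * min(rate_l_min, max_rate) / max_rate

lon, lat = 113.64015, 23.24008                # one RTK fix
rate = prescription[cell_of(lon, lat)]
print(f"cell rate {rate} L/min -> PWM duty {pump_duty_cycle(rate):.0f}%")
```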
Number of references:31
Main heading:Aircraft control
Controlled terms:Agricultural robots - Agriculture - Antennas - Centrifugation - Controllers - Digital storage - Drops - Electric machine control - Global positioning system - Paper - Particle size - Particle size analysis - Pesticides - Plastic containers - Pulse width modulation - Pumps - Search engines - Speed - Spray nozzles - Unmanned aerial vehicles (UAV) - Voltage control
Uncontrolled terms:Geographical positions - Global position systems - Multi-spectral cameras - Operational efficiencies - Pesticide applications - Single chip microcomputers - Variable rate application - Variable rate spraying
Classification code:618.2 Pumps - 631.1 Fluid Flow, General - 652 Aircraft and Avionics - 652.1 Aircraft, General - 722.1 Data Storage, Equipment and Techniques - 723 Computer Software, Data Handling and Applications - 731.2 Control System Applications - 731.3 Specific Variables Control - 732.1 Control Equipment - 802.3 Chemical Operations - 803 Chemical Agents and Basic Industrial Chemicals - 811.1 Pulp and Paper - 817.1 Polymer Products - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 951 Materials Science
Numerical data indexing:Size 2.20e+00m, Size 6.00e-01m, Velocity 2.00e+00m/s
DOI:10.11975/j.issn.1002-6819.2021.09.010
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 5>
Accession number:20213210735493
Title:Effects of rice straw biochar with different adsorption characteristics on ammonia volatilization from paddy field and rice yield
Title of translation:不同吸附特性的稻草生物炭对稻田氨挥发和水稻产量的影响
Authors:Zhang, Feng (1); Liu, Chang (1); Wang, Zhe (1); Meng, Jun (2, 3); Chi, Daocai (1); Chen, Taotao (1, 2, 3)
Author affiliation:(1) College of Water Resources, Shenyang Agricultural University, Shenyang; 110866, China; (2) College of Agronomy, Shenyang Agricultural University, Shenyang; 110866, China; (3) National Institute of Biochar, Shenyang Agricultural University, Shenyang; 110866, China
Corresponding author:Chen, Taotao(taotao-chen@syau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:100-109
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:Straw biochar has various positive effects on soil ecology and environmental improvement, soil water and fertilizer conservation, and the reduction of greenhouse gas emissions. However, the release of carbonate from calcium carbonate (the lime effect) can raise pH and hence increase ammonia (NH₃) volatilization in paddy fields. In this study, a two-year pot experiment with a completely randomized design was conducted using different biochars to obtain better ammonium adsorption with a smaller lime effect. Japonica rice (Oryza sativa L. cv. Da Li Nuo) and biochar derived from rice straw were taken as the research objects. Three pyrolysis temperatures (300, 500, and 700 ℃) and three acidification levels (5, 7, and 9) were used. Before the experiment, surface soil from a rice field was sampled and air-dried. Biochar and basal fertilizer were applied one day before transplanting. The ammonium nitrogen concentration in surface water, the ammonia volatilization loss from the paddy, and the rice grain yield were measured under the different biochar treatments during two rice growing seasons. The results showed that ammonia volatilization loss during the growing seasons ranged from 30.27 to 52.1 kg/hm² in 2019 and from 30.20 to 38.00 kg/hm² in 2020, accounting for 15.14%-26.05% and 15.1%-19.0% of the N application rate, respectively. High-temperature pyrolysis combined with acidic and neutral straw biochar significantly reduced the volatilization loss. Biochar pyrolyzed at 700 ℃ with acidification level 5 (C700P5) performed best, reducing ammonia volatilization significantly by 22.93% in 2019 and 12.61% in 2020. The variation trend of the NH₄⁺-N concentration in field water was consistent with that of the ammonia volatilization flux. The peak NH₄⁺-N concentration in surface water decreased by 16.90%-35.60% at the basal and tillering stages with acidic and neutral straw biochar. Acidic and neutral rice straw biochar significantly increased the aboveground N accumulation of rice by 9.10%-24.84% at the three pyrolysis temperatures. High-temperature pyrolysis combined with acidic and neutral biochar (C700P5, C700P7) produced a significant yield increase of 9.92%-13.50%. Structural equation modeling showed that biochar pH and Cation Exchange Capacity (CEC) had positive and negative effects on ammonia volatilization, respectively: the pH effect reflected an obvious lime effect, whereas adsorption inhibited ammonia volatilization. In addition, ammonia volatilization in the paddy field was significantly correlated with the NH₄⁺-N concentration in surface water, so the effect of biochar on ammonia volatilization also acted through its regulation of that concentration. The combination of neutral or acid modification with high-temperature pyrolysis biochar (C700P5, C700P7) increased production because both ammonia volatilization and nitrogen loss were reduced significantly, providing more sufficient nutrients for rice growth. This finding indicates the potential application of biochar produced at different pyrolysis temperatures and acidification levels in rice fields. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
Number of references:41
Main heading:Surface waters
Controlled terms:Acidification - Ammonia - Calcium carbonate - Ecology - Fertilizers - Greenhouse gases - Lime - Nitrogen - Pyrolysis - Soil conservation - Soil moisture
Uncontrolled terms:Adsorption characteristic - Ammonia volatilization - Cation exchange capacities - Completely randomized designs - High-temperature pyrolysis - Positive and negative effect - Pyrolysis temperature - Structural equations
Classification code:444.1 Surface Water - 451.1 Air Pollution Sources - 454.3 Ecology and Ecosystems - 483.1 Soils and Soil Mechanics - 802.2 Chemical Reactions - 804 Chemical Products Generally - 804.2 Inorganic Compounds - 822.2 Food Processing Operations
Numerical data indexing:Percentage 1.26e+01%, Percentage 1.51e+01% to 2.61e+01%, Percentage 1.69e+01% to 3.56e+01%, Percentage 2.29e+01%, Percentage 9.10e+00% to 2.48e+01%, Percentage 9.92e+00% to 1.35e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.012
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 6>
Accession number:20213210735517
Title:Target detection and tracking system for orchard spraying robots
Title of translation:果园喷雾机器人靶标探测与追踪系统
Authors:Jiang, Shijie (1); Ma, Hengtao (1); Yang, Shenghui (1); Zhang, Chao (1); Su, Daobilige (1); Zheng, Yongjun (1, 2, 3); Kang, Feng (4)
Author affiliation:(1) College of Engineering, China Agricultural University, Beijing; 100083, China; (2) Engineering Research Center of Agricultural Equipment and Facilities, Ministry of Education, Beijing; 100083, China; (3) China Agricultural University Yantai Institute, Yantai; 264670, China; (4) School of Technology, Beijing Forestry University, Beijing; 100083, China
Corresponding author:Zheng, Yongjun(zyj@cau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:31-39
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:Pesticide spraying is a key step in the management of fruit trees, and air-assisted ground sprayers are commonly used in orchards. However, conventional equipment can lead to problems such as excessive chemical use and serious drift. This study designed a target detection and tracking system for an orchard spraying robot, developed specifically for spraying the middle and lower parts of fruit trees against the background of three-dimensional plant protection. The robot had an electric crawler chassis, and a lifting implement and a spray bracket with nozzles were specifically designed. The control system consisted of four main units: a LiDAR detection unit, a height adjustment unit, an angle adjustment unit, and a control unit. In the LiDAR detection unit, an RPLIDAR S1 was used to detect trees and measure the distance between the robot and the trees. In the height adjustment unit, a lifting implement combined with ranging sensors raised the spray bracket from the ground so that the nozzles could reach the calculated height of the target points in real time. In the angle adjustment unit, electric pushrods with encoders changed the angle of the spray bracket so that the nozzles could follow the calculated angle of the target points. The control unit was based on an STM32F429 microcontroller. On this structure, three operation steps were developed. First, the general characteristics of fruit trees were analyzed based on investigations of orchards in Beijing, Shanxi, and Guangxi, and detection areas were determined from these characteristics. Then, the LiDAR acquired the point clouds of the tree canopies, and the polar distance and polar angle of target points were computed from the maximum and minimum polar coordinates of the tree points. Finally, the target points were obtained by dividing and filtering the point clouds in the detection areas, and the final spraying distance and elevation angle were calculated from these targets. Furthermore, mathematical relations were established between the target elevation angle and the encoder pulse count, and between that angle and the pushrod stroke, and a measurement and control method for the elevation angle of the spray bracket was designed. Incremental Proportional-Integral-Derivative (PID) control was applied to the angle variation so that elevation tracking could be achieved. To demonstrate the performance of the system, trials were conducted on the campus of China Agricultural University to acquire data on Begonia canopies. The robot followed a U-shaped route, and three trees were randomly selected for analysis. The results indicated that the system could adapt to canopies of different sizes and characteristics. The targets of the three trees were concentrated at heights from 2.0 m to 3.5 m, which shows that this range can meet the requirement of spraying the middle and lower canopies of trees. Furthermore, the trials showed that the minimum spray elevation angle was 47.8° and the maximum was 51.4°, and the maximum adjustment time of the spray elevation angle between consecutive targets was 0.06 s, indicating fast target positioning. The system offers a solid theoretical basis for target-following spraying, and the study provides a technical reference for chemical reduction, energy saving, and drift reduction in orchard plant protection. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
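For readers unfamiliar with the incremental PID form mentioned above, a minimal sketch follows; the gains, the crude plant response, and the command scaling are illustrative assumptions, not values from the paper.

```python
# Minimal incremental PID sketch for tracking a spray-bracket elevation angle.
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # error at step k-1
        self.e2 = 0.0   # error at step k-2

    def step(self, target_deg, measured_deg):
        e = target_deg - measured_deg
        # Incremental form: the command change depends only on the last three errors.
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du               # change in pushrod drive command

pid = IncrementalPID(kp=2.0, ki=0.1, kd=0.5)
angle = 47.8                                     # current elevation angle (deg)
for target in [49.0, 50.2, 51.4]:                # successive target points
    command_delta = pid.step(target, angle)
    angle += 0.1 * command_delta                 # crude plant response for illustration
    print(f"target {target:.1f} deg, bracket now {angle:.2f} deg")
```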
Number of references:26
Main heading:Target tracking
Controlled terms:Agricultural robots - Chemical equipment - Energy conservation - Forestry - Fruits - Optical radar - Orchards - Proportional control systems - Robots - Signal encoding - Spray nozzles
Uncontrolled terms:Angle adjustments - Chemical reduction - Conventional equipment - Mathematical relation - Measurement and control - Plant protection - Target detecting - Target detection and tracking
Classification code:525.2 Energy Conservation - 631.1 Fluid Flow, General - 716.1 Information Theory and Signal Processing - 716.2 Radar Systems and Equipment - 731.1 Control Systems - 731.5 Robotics - 802.1 Chemical Plants and Equipment - 821.3 Agricultural Methods - 821.4 Agricultural Products
Numerical data indexing:Size 2.00e+00m to 3.50e+00m, Time 6.00e-02s
DOI:10.11975/j.issn.1002-6819.2021.09.004
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 7>
Accession number:20213210735546
Title:Estimation of rice leaf area index combining UAV spectrum, texture features and vegetation coverage
Title of translation:结合无人机光谱与纹理特征和覆盖度的水稻叶面积指数估算
Authors:Hang, Yanhong (1); Su, Huan (1); Yu, Ziyang (2); Liu, Huanjun (1, 2); Guan, Haixiang (1); Kong, Fanchang (1)
Author affiliation:(1) School of Public Administration and Law, Northeast Agricultural University, Harbin; 150030, China; (2) Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun; 130012, China
Corresponding author:Liu, Huanjun(liuhuanjun@neigae.ac.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:64-71
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:Paddy rice is an important food crop that directly bears on national food security in China. Leaf Area Index (LAI) is an important indicator for evaluating crop growth and field management, and dynamic information on rice growth can be obtained from the LAI together with the accumulation of aboveground biomass and yield formation. Unmanned aerial vehicle (UAV)-based multispectral remote sensing can quickly capture the spatial variability of crops at the field scale, owing mainly to its high temporal and spatial resolution, so differences in rice growth within plots can be detected. Vegetation indices can be used to estimate crop LAI, but they tend to saturate when the LAI is large. In this study, a rice LAI estimation model was constructed to investigate the ability of UAV imagery with multiple indicators, combining spectral features, texture indices, and crop coverage. UAV multispectral images were used to extract the spectral information, texture features, and crop coverage. Combinations of texture features, including difference, ratio, and normalization, were calculated to obtain new texture indices and thereby improve the correlation between texture features and LAI. Univariate linear models were built with the spectral features, the texture indices, and crop coverage as inputs, and the three types of indicators were then integrated into multiple stepwise regression and artificial neural network models to analyze the accuracy of estimating LAI from combined indicators. K-fold cross-validation was used to verify the models. The results showed significant correlations between six vegetation indices and rice LAI, all with correlation coefficients above 0.6, ranked in descending order as the Optimized Soil-Adjusted Vegetation Index (OSAVI), Modified Triangular Vegetation Index 2 (MTVI2), Difference Vegetation Index (DVI), Green Normalized Difference Vegetation Index (GNDVI), Normalized Difference Vegetation Index (NDVI), and red-edge Chlorophyll Index (CIRE). For the combined texture features, the correlation coefficient of the single texture feature with the highest correlation was 0.731 before combination, whereas the texture indices significantly improved the correlation between texture feature values and LAI. In particular, the mean-based combinations of the Normalized Difference Texture Index (NDTI), Difference Texture Index (DTI), and Ratio Texture Index (RTI) showed high correlations with LAI, and the DTI(mean5, mean1) formed from the near-infrared band mean and the blue band mean had the highest correlation of 0.830, 13.54% higher than that of the single near-infrared band mean texture feature. The highest accuracy in estimating rice LAI was obtained by combining GNDVI, the difference texture index, and crop coverage. The multiple stepwise regression model combining the three indicators (R²=0.866, adjusted R²=0.816, RMSE=0.308) was significantly better than models built from a single vegetation index (R²=0.603, adjusted R²=0.563, RMSE=0.541), crop coverage (R²=0.633, adjusted R²=0.596, RMSE=0.516), or a single texture index (R²=0.668, adjusted R²=0.635, RMSE=0.447). Combining spectral features, texture indices, and crop coverage thus achieved better accuracy and some advantages for inversion. The findings provide a theoretical basis for estimating crop LAI and other structural parameters with UAV platforms in digital agriculture. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
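The sketch below shows, under stated assumptions, how difference/ratio/normalized texture indices can be formed from two band-mean texture features and combined with GNDVI and crop coverage in a linear model, in the spirit of the multiple-indicator regression described above. All per-plot input values are hypothetical.

```python
# Sketch only (not the authors' processing chain): texture indices plus
# GNDVI and coverage feeding a linear LAI model.
import numpy as np
from sklearn.linear_model import LinearRegression

def dti(t_a, t_b):   return t_a - t_b                    # Difference Texture Index
def rti(t_a, t_b):   return t_a / t_b                    # Ratio Texture Index
def ndti(t_a, t_b):  return (t_a - t_b) / (t_a + t_b)    # Normalized Difference Texture Index

# Hypothetical per-plot inputs: NIR/blue band "mean" textures, green and NIR
# reflectance, crop coverage fraction, and measured LAI.
mean_nir  = np.array([0.62, 0.71, 0.80, 0.75, 0.68, 0.77])
mean_blue = np.array([0.20, 0.22, 0.25, 0.24, 0.21, 0.23])
green = np.array([0.10, 0.12, 0.13, 0.12, 0.11, 0.12])
nir   = np.array([0.45, 0.52, 0.58, 0.55, 0.49, 0.56])
coverage = np.array([0.55, 0.68, 0.82, 0.74, 0.61, 0.78])
lai      = np.array([2.1, 3.0, 4.2, 3.6, 2.6, 3.9])

gndvi = (nir - green) / (nir + green)
X = np.column_stack([gndvi, dti(mean_nir, mean_blue), coverage])
model = LinearRegression().fit(X, lai)
print("R^2 =", model.score(X, lai))
```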
Number of references:31
Main heading:Textures
Controlled terms:Agricultural robots - Antennas - Crops - Food supply - Infrared devices - Neural networks - Regression analysis - Remote sensing - Tensors - Unmanned aerial vehicles (UAV) - Vegetation
Uncontrolled terms:Artificial neural network modeling - Correlation coefficient - Green normalized difference vegetation index - K fold cross validations - Multiple stepwise regression - Multispectral remote sensing - Normalized difference vegetation index - Normalized differences
Classification code:652.1 Aircraft, General - 821.4 Agricultural Products - 822.3 Food Products - 921.1 Algebra - 922.2 Mathematical Statistics
Numerical data indexing:Percentage 1.35e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.008
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 8>
Accession number:20213210735562
Title:Agricultural machinery scheduling optimization method based on improved multi-parents genetic algorithm
Title of translation:基于改进多父辈遗传算法的农机调度优化方法
Authors:Zhang, Fan (1); Luo, Xiwen (1); Zhang, Zhigang (1); He, Jie (1); Zhang, Wenyu (1)
Author affiliation:(1) College of Engineering, South China Agricultural University/Key Laboratory of Key Technology on Agricultural Machine and Equipment, Ministry of Education, Guangzhou; 510642, China
Corresponding author:Zhang, Zhigang(zzg208@scau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:192-198
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:The trans-regional operation of agricultural machinery involves many tasks, strong timeliness requirements, and a relatively fixed operational sequence, yet a scientific and reasonable deployment scheme for efficiently organizing such operations has been lacking in recent years. This study aims to optimize the cooperative scheduling of multi-task, multi-machine assignments on a large-scale farm using an improved multi-parent genetic algorithm (GA). First, the locations of the farmland and the agricultural machines, as well as the task requirements submitted by farmers, were acquired and stored with the aid of the agricultural Internet of Things (IoT) and navigation systems. A multi-machine, multi-task situation was analyzed in which continuous operations such as tillage, sowing, and fertilization must be completed on the farm within a specified time, and parameters such as the operation sequence and the number of machines were initialized. A scheduling model was then built using time windows under the boundary conditions of multiple machine types, deployment distance, and preparation and operation time. The model must also satisfy the following basic rules: 1) each machine can only work on one field at a time; 2) each field can be assigned multiple machines for the same task if enough machines are available; 3) the operation sequence of different tasks in each field is fixed by the operational procedure, and every task in each field must be executed. Taking the minimum operation time as the optimization goal, a feasible scheduling system was proposed for task planning using the improved multi-parent genetic algorithm (IMPGA). In the encoding, the field ID was used as the gene, and the frequency of an ID in the chromosome corresponded to the number of tasks of that field, since the task sequence cannot be changed. After generating the initial population, it was divided into two parts, denoted the excellent group and the good group. The population was then propagated with a multi-parent crossover strategy, in which a relatively superior individual chosen from the excellent group was crossed with two individuals chosen from the good group. The mutation probability was made adjustable and was changed when the fitness of the best chromosome in the population did not improve over several iterations. Finally, the performance of the Java-based IMPGA was verified using real farmland datasets, randomly generated farmland tasks, and agricultural machinery in the Tacheng Prefecture of Xinjiang, western China; MATLAB was also used to generate job deployments. The experimental results showed that both GA and IMPGA effectively performed the multi-task, multi-machine assignment, and both reached the optimal solution when the number of fields was 5. The optimal and average solutions of IMPGA improved by 3.77% and 3.56%, respectively, when the number of fields was 10; by 1.63% and 3.76% when the number of fields was 15; and by 4.46% and 3.47% when the number of fields was 20. Overall, the optimal and average solutions of IMPGA improved by 2.47% and 2.70% on average, indicating better deployment performance. This finding provides a reasonable scheduling scheme for the cross-regional operation of agricultural machinery in the production of large-scale unmanned farms. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
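To make the multi-parent crossover concrete, the sketch below operates on a chromosome of repeated field IDs as described in the abstract: a segment of the elite parent is kept, and the remaining genes are filled in the order they appear in two good parents. Selection, schedule decoding, and the adaptive mutation are omitted; this is an illustration, not the authors' IMPGA.

```python
# Illustrative multi-parent crossover on a field-ID chromosome whose
# multiplicities encode the number of tasks per field.
import random
from collections import Counter

def multi_parent_crossover(elite, good_a, good_b):
    """Keep a random segment of the elite parent, then fill the remaining
    positions with the missing genes ordered as they appear in the two good
    parents (a simple order-crossover generalisation)."""
    n = len(elite)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = elite[i:j]
    need = Counter(elite) - Counter(child[i:j])      # multiplicities still required
    filler = []
    for gene in good_a + good_b:
        if need[gene] > 0:
            filler.append(gene)
            need[gene] -= 1
    for k in range(n):
        if child[k] is None:
            child[k] = filler.pop(0)
    return child

random.seed(0)
elite  = [1, 2, 2, 3, 4, 4, 5]        # field IDs, repeated once per task
good_a = [2, 1, 4, 5, 3, 2, 4]
good_b = [4, 4, 2, 3, 1, 5, 2]
print(multi_parent_crossover(elite, good_a, good_b))
```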
Number of references:31
Main heading:Genetic algorithms
Controlled terms:Agricultural machinery - Agricultural robots - Chromosomes - Farms - MATLAB - Navigation systems - Optimal systems - Scheduling
Uncontrolled terms:Cooperative scheduling - Internet of thing (IoTs) - Multi-parent crossover - Mutation probability - Operational procedures - Operational sequence - Reasonable deployment - Scheduling optimization
Classification code:461.2 Biological Materials and Tissue Engineering - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 821.1 Agricultural Machinery and Equipment - 912.2 Management - 921 Mathematics - 961 Systems Science
Numerical data indexing:Percentage 1.63e+00%, Percentage 2.47e+00%, Percentage 2.70e+00%, Percentage 3.47e+00%, Percentage 3.56e+00%, Percentage 3.76e+00%, Percentage 3.77e+00%, Percentage 4.46e+00%
DOI:10.11975/j.issn.1002-6819.2021.09.022
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 9>
Accession number:20213210735470
Title:Research advance on worldwide agricultural UAVs in 2001-2020 based on bibliometrics
Title of translation:基于文献计量学的2001-2020全球农用无人机研究进展
Authors:Li, Jiyu (1, 3); Hu, Xiaodan (1); Lan, Yubin (2, 3); Deng, Xiaoling (2, 3)
Author affiliation:(1) College of Engineering, South China Agricultural University, Guangzhou; 510642, China; (2) College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou; 510642, China; (3) National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology (NPAAC), Guangzhou; 510642, China
Corresponding author:Deng, Xiaoling(dengxl@scau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:328-339
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:Unmanned aerial vehicles (UAVs) have become essential for promoting the digitalization of agriculture in the era of artificial intelligence. This review aims to clarify the development trends of agricultural UAVs, covering the cutting-edge technologies at the research frontier as well as technical achievements and their transformation at home and abroad. Five typical databases were selected for document searching: Web of Science Core Proceedings, China National Knowledge Infrastructure, Derwent Innovations Index, the China National Intellectual Property Administration patent database, and the World Intellectual Property Organization PatentScope. Journal articles and invention patents related to agricultural UAVs published in these databases over the past 20 years (2001-2020) were analyzed with statistical and bibliometric indicators, using the two keywords "drone" and "agriculture" in the search strategy. The statistics showed that agricultural UAV research entered a period of rapid progress in 2014 and reached a peak period in 2016-2018. In 2020, 1 232 papers and 1 970 patents were published; more than 4 000 papers and nearly 7 000 patents have been published in total. Europe has relatively strong research strength in agricultural UAVs. China and the United States were the main research countries, with combined paper output exceeding 45% of the total (China 21.43%, the United States 24.60%), while Chinese, Japanese, and American companies contributed most of the patents; Chinese patents accounted for around 80% of the total. Spain first laid the theoretical foundation for agricultural UAVs. Chinese and American institutions cooperated relatively closely with each other, followed by Australian institutions. The major journals in the field included Remote Sensing, Precision Agriculture, and Computers and Electronics in Agriculture. Over the past five years, Chinese studies have been the most active in this field. In terms of bibliometrics and science mapping, the subject areas of agricultural UAV papers were mainly remote sensing (30.22%) and environmental science (24.79%). The technical areas of the patents were mainly related to aircraft and to agriculture, forestry, animal husbandry, and fishery, specifically B64D, aircraft equipment for releasing substances (30.03%); B64C, aircraft for special purposes (23.66%); and A01M, liquid spraying equipment (21.01%). Both papers and patents on agricultural UAVs fall into two aspects: UAV platform construction and agricultural applications, with the latter further divided into agricultural information collection and transportation. The current research frontier of agricultural drones lies in the mining of agricultural big data and the construction of agricultural digital models, whereas practical applications tend toward low-altitude remote sensing, spraying, and energy endurance. The latest research and technology applications concern "plant height" and "animal management and care". Application scenarios are becoming ever wider and richer with the rise of emerging technologies such as 5G, the Internet of Things, and blockchain, although difficulties and challenges remain in the deeper integration of these technologies. UAVs can also serve promising new applications in animal husbandry and fishery in the future. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
Number of references:58
Main heading:Agricultural robots
Controlled terms:5G mobile communication systems - Agricultural implements - Aircraft - Animals - Antennas - Database systems - Drones - Fisheries - Forestry - Paper - Patents and inventions - Publishing - Remote sensing - Statistics
Uncontrolled terms:Agricultural informations - Artificial intelligent - Bibliometric indicators - Cutting edge technology - Environmental science - Technology application - Theoretical foundations - World intellectual property organizations
Classification code:652.1 Aircraft, General - 723.3 Database Systems - 811.1 Pulp and Paper - 821.1 Agricultural Machinery and Equipment - 903.2 Information Dissemination - 922.2 Mathematical Statistics
Numerical data indexing:Age 2.00e+01yr, Percentage 2.10e+01%, Percentage 2.14e+01%, Percentage 2.37e+01%, Percentage 2.46e+01%, Percentage 2.48e+01%, Percentage 3.00e+01%, Percentage 3.02e+01%, Percentage 4.50e+01%, Percentage 8.00e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.037
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 10>
Accession number:20213210735484
Title:Coverage operation path planning algorithms for the rape combine harvester in quadrilateral fields
Title of translation:四边形田块下油菜联合收获机全覆盖作业路径规划算法
Authors:Luo, Chengming (1, 2); Xiong, Chenwen (1); Huang, Xiaomao (1, 2); Ding, Youchun (1, 2); Wang, Shaoshuai (1)
Author affiliation:(1) College of Engineering, Huazhong Agricultural University, Wuhan; 430070, China; (2) Key Laboratory of Agricultural Equipment in Mid-lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan; 430070, China
Corresponding author:Huang, Xiaomao(huangxiaomao@mail.hzau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:140-148
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:To realize the automatic planning and optimization of operation paths for the autonomous navigation of rape combine harvesters on unmanned farms, two sets of coverage path planning algorithms for arbitrary quadrilateral fields were proposed through theoretical analysis, algorithm design, programming, and simulation and evaluation with examples. First, the fundamental requirements of the coverage path planning problem for unmanned rape combine harvesting were defined by analyzing the characteristics of the harvesting process (plant branches are intertwined and must be separated by vertical cutters to reduce harvest loss, and field boundaries are generally not crossable in rice/rape rotation regions) and of the combine harvesters (small turning radii and strong mobility of crawler-type machines). Then, full-coverage operation paths were generated based on an isometric offsetting process and the scanline filling algorithm, and scheduling optimization was performed with OR-Tools. The two sets of global coverage path planning algorithms for rape harvesting in arbitrary quadrilateral fields comprised one for "contour parallel" circular operation paths based on a one-sided vertical cutter header, and one for "contour parallel + direction parallel" mixed operation paths based on a bilateral vertical cutter header. The latter algorithm first used contour parallel paths to harvest the field and create enough turning space for the machine, and then used direction parallel paths to complete the harvesting operation in the central area. Tests and simulations using data from four typical actual fields verified the stability and reliability of the algorithms; the algorithm running time was between 0.17 s and 4.73 s, which meets the basic requirements of path planning for unmanned rape combine harvesting. Compared with the circular operation paths widely used in actual harvesting, the total operation length of the mixed paths was smaller than that of the traditional circular paths when no optimization was performed and the turning radius was small, and the number of reverses in the operation process was reduced by 36.36%-40.00%. After optimization, the number of reverses was further reduced by 33.33%-60.87%, and the length of non-operation paths was reduced by 7.20%-20.23% compared with the unoptimized paths. The results showed that the mixed paths achieved a better operation effect than the traditional circular operation paths. This study can provide theoretical and technical support for the combine harvesting of winter rape on unmanned farms in rice/rape rotation in the middle and lower reaches of the Yangtze River in China. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
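As an illustration of the "direction parallel" part of the mixed paths, the sketch below generates boustrophedon swath segments inside a quadrilateral with a scanline-style sweep spaced by the header width. The isometric offsetting of the contour-parallel rings and the OR-Tools sequence optimization are not reproduced; the field coordinates and working width are hypothetical.

```python
# Hedged sketch of direction-parallel swath generation for a (convex)
# quadrilateral field via a horizontal scanline sweep.
def swaths(quad, width):
    """quad: four (x, y) vertices in order; width: effective cutting width (m)."""
    edges = list(zip(quad, quad[1:] + quad[:1]))
    y_min = min(y for _, y in quad) + width / 2
    y_max = max(y for _, y in quad)
    paths, y, forward = [], y_min, True
    while y < y_max:
        xs = []
        for (x1, y1), (x2, y2) in edges:            # intersect scanline y with each edge
            if (y1 <= y < y2) or (y2 <= y < y1):
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        if len(xs) >= 2:
            a, b = min(xs), max(xs)
            seg = ((a, y), (b, y)) if forward else ((b, y), (a, y))
            paths.append(seg)                        # alternate direction each pass
        y += width
        forward = not forward
    return paths

field = [(0, 0), (60, 5), (55, 40), (-5, 35)]        # hypothetical quadrilateral (m)
for start, end in swaths(field, width=2.0)[:3]:
    print(start, "->", end)
```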
Number of references:27
Main heading:Combines
Controlled terms:Harvesters - Harvesting
Uncontrolled terms:Autonomous navigation - Coverage path planning - Design of algorithms - Harvesting operations - Middle and lower reaches of the yangtze rivers - Path-planning algorithm - Scheduling optimization - Stability and reliabilities
Classification code:821.1 Agricultural Machinery and Equipment - 821.3 Agricultural Methods
Numerical data indexing:Percentage 3.33e+01% to 6.09e+01%, Percentage 3.64e+01% to 4.00e+01%, Percentage 7.20e+00% to 2.02e+01%, Time 1.70e-01s, Time 4.73e+00s
DOI:10.11975/j.issn.1002-6819.2021.09.016
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 11>
Accession number:20213210735547
Title:Exploration and development prospect of eco-unmanned farm modes
Title of translation:生态无人农场模式探索及发展展望
Authors:Lan, Yubin (1); Zhao, Denan (1); Zhang, Yanfei (1); Zhu, Junke (1)
Author affiliation:(1) School of Agricultural Engineering and Food Science, Shandong University of Technology, Zibo; 255000, China
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:312-327
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:Agricultural production in China has long been built on excessive pesticide and chemical fertilizer inputs, which unbalances the farmland ecological environment and is not conducive to the sustainable development of agriculture. Meanwhile, the shortage of agricultural labor is becoming increasingly prominent, so it is imperative to seek an ecological, efficient, and intelligent agricultural mode. Based on years of practice and exploration, the authors' team built China's first eco-unmanned farm in Zibo, Shandong Province, and put forward the mode and development concept of the "eco-unmanned farm". This article concludes that pesticides, fertilizers, and tillage methods have caused the most serious adverse effects on farmland ecosystems. To solve these problems, a series of unmanned operation methods and modes were used to carry out ecological management and transformation of the farmland ecosystem and realize the sustainable development of agricultural production. Automatic collection and processing of farmland information, scientific decision-making, and remote control of unmanned agricultural machines were realized through the integration of air and ground agricultural information acquisition, ground-air integrated cooperative operation of unmanned agricultural machinery, and the construction of a smart cloud brain capable of fully autonomous decision-making. Eco-unmanned farms cover two parts: ecological management and unmanned operation. Ecological management includes the precise spraying of pesticides and fertilizers, ecologically fertile soil, and the construction of circular ecosystems. Unmanned operation includes intelligent perception of farm information, accurate big data analysis, scientific decision-making with artificial intelligence, satellite positioning and navigation, and collaborative operation of agricultural machinery. The eco-unmanned farm mode implements ecological management of farmland through unmanned operation, thereby organically combining ecological agriculture with unmanned farms. Traditional ecological agriculture cannot meet the development needs of modern high-efficiency agriculture, so unmanned operation methods for ecological management were applied to reduce the use of pesticides and fertilizers and finally achieve a relatively ecological state. Precision spraying refers to on-demand spraying with unmanned ground and aerial precision spraying technology and equipment, based on a spraying prescription map generated from farmland crop information. The construction of a material-recycling farmland ecosystem recycles agricultural wastes such as straw through planting, breeding, and farming, and increases biodiversity. Ecologically fertile soil refers to the use of ecological mechanization technology to simplify tillage, improve the ecological environment and soil structure, treat and use field straw scientifically and efficiently, and reduce the use of pesticides and fertilizers. The unmanned farm is the ultimate form of "replacing humans with machines". It has three basic elements, perception, decision-making, and execution, which correspond to the human nervous system. The Internet of Things replaces the human sense organs; big data and artificial intelligence form a smart cloud brain that replaces the human brain; and unmanned agricultural equipment replaces manned agricultural machinery that requires human limbs for execution. The smart cloud brain is the most important "organ" of the unmanned farm, and its degree of development determines the farm's degree of intelligence. The eco-unmanned farm mode has application scenarios such as smart fields, smart orchards, smart greenhouses, smart fisheries, and smart pastures, but its core idea is always the deep integration of ecological development concepts with intelligent equipment and information technology. The technical mode of the eco-unmanned farm needs corresponding support systems adapted to local conditions in different application scenarios. This article summarizes the key technologies and modes of eco-unmanned farms and proposes the implementation connotation of the eco-unmanned farm mode, in order to provide valuable information for the development of future agriculture and smart agriculture and to promote the high-quality development of agricultural and rural modernization. © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
Number of references:113
Main heading:Farms
Controlled terms:Agricultural machinery - Agricultural robots - Agricultural wastes - Antennas - Artificial intelligence - Big data - Biodiversity - Decision making - Ecosystems - Fertilizers - Mechanization - Pesticides - Planning - Recycling - Remote control - Soils - Sustainable development
Uncontrolled terms:Agricultural equipment - Agricultural informations - Agricultural productions - Collaborative operations - Ecological agricultures - Ecological environments - Ecological managements - Exploration and development
Classification code:452.3 Industrial Wastes - 454 Environmental Engineering - 454.3 Ecology and Ecosystems - 483.1 Soils and Soil Mechanics - 601 Mechanical Design - 723.2 Data Processing and Image Processing - 723.4 Artificial Intelligence - 731.1 Control Systems - 803 Chemical Agents and Basic Industrial Chemicals - 804 Chemical Products Generally - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 912.2 Management
DOI:10.11975/j.issn.1002-6819.2021.09.036
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 12>
Accession number:20213210735478
Title:Fine-grained classification algorithm of fish feeding state based on optical flow method
Title of translation:基于光流法的鱼群摄食状态细粒度分类算法
Authors:Tang, Chen (1); Xu, Lihong (1); Liu, Shijing (2)
Author affiliation:(1) College of Electronics and Information Engineering, Tongji University, Shanghai; 201804, China; (2) Fishery Machinery and Instrument Research Institute, Chinese Academy of Fishery Sciences, Shanghai; 200092, China
Corresponding author:Xu, Lihong(xulihong@tongji.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:238-244
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Fine-grained classification of fish feeding state in a factory production environment is beneficial for describing fish feeding behavior in more detail. However, current studies are mostly based on an ideal laboratory environment in which external disturbances such as light conditions and image quality are ignored, so they cannot be applied in a factory environment. Moreover, these studies focus on the binary classification of fish feeding state (eating or non-eating), which is imprecise. This study carried out fine-grained classification of fish feeding state, for which a small-scale fine-grained fish feeding state dataset was collected. The videos used to build this dataset were all captured in a factory production environment. There were 752 videos in total; each video was 3 s (90 frames) long and labeled as non-eating, weak-eating, or strong-eating. Based on this dataset, a fine-grained classification algorithm of fish feeding state was proposed to solve the feeding state classification problem in the factory production environment. Firstly, the algorithm solved optical flow fields from all consecutive frames in each video and calculated the motion magnitude and angle of pixels from the resulting flow fields. The magnitude and angle were then each divided into eight intervals, and histograms of the pixels' magnitude and angle were counted over these intervals. The concatenated magnitude and angle histogram served as a frame-level inter-frame motion feature for further classification. In this process, the algorithm turned one video sample into many inter-frame motion feature samples by computing optical flow fields for all consecutive frame pairs in the video. Then a 5-layer (one input layer, one output layer, and three hidden layers) classification neural network was built to classify the inter-frame motion features. The classification network had three output categories corresponding to the three feeding states (non-eating, weak-eating, and strong-eating) and was optimized with a cross-entropy loss function; the output category probabilities were calculated by the Softmax function. All inter-frame motion feature predictions were considered in the final video classification through a voting strategy. The most frequently predicted frame-level category across all frames was taken as the video's probable category, and a voting threshold was additionally set to guarantee the frequency of that prediction. When the predicted frequency of the probable category was greater than the voting threshold, the video sample was assigned to that category; otherwise, the video sample was labeled as uncertain. The required prediction frequency was proportional to the voting threshold, so by setting a high voting threshold the algorithm could output more reliable classification results. The experimental results showed that the video accuracy of the algorithm was 98.7% under a 50% voting threshold. When the voting threshold increased to 80%, the video accuracy remained at 91.4%, which demonstrated the robustness of the algorithm.
The video accuracy decreased as the voting threshold increased, because a higher voting threshold required more consistent frame-level predictions, and more videos might be assigned to the uncertain category due to insufficient prediction frequency. Comparative experiments were conducted to prove the effectiveness of the proposed algorithm. Experiments with a texture-based algorithm and a single-frame convolutional neural network showed that single-frame features were unable to solve the fine-grained feeding state classification problem, which in turn confirmed the effectiveness of the inter-frame motion features computed in the proposed algorithm. Besides, the proposed algorithm performed well on the small-scale dataset because the inter-frame motion features extracted by the optical flow method transferred the training data from the video level to the frame level, which implicitly increased the number of training samples. This study focused on a commercial recirculating aquaculture system and can therefore be better applied in the factory production environment. Moreover, it realized fine-grained classification of fish feeding state, which can help describe fish feeding behavior in more detail.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
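As a rough illustration of the frame-level feature and voting steps described in this abstract, the sketch below uses OpenCV's Farneback dense optical flow as a stand-in for the paper's (unspecified) flow solver; the function names, bin handling, and normalization are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def extract_motion_feature(prev_gray, curr_gray, bins=8):
    """Inter-frame motion feature: concatenated 8-bin histograms of optical-flow
    magnitude and angle (16-D per consecutive frame pair)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])      # ang in [0, 2*pi)
    mag_hist, _ = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-6), density=True)
    ang_hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), density=True)
    return np.concatenate([mag_hist, ang_hist])

def classify_video(frame_preds, threshold=0.5):
    """Voting strategy: the most frequent frame-level class wins only if its
    frequency exceeds the voting threshold; otherwise return 'uncertain'."""
    labels, counts = np.unique(np.asarray(frame_preds), return_counts=True)
    best = counts.argmax()
    if counts[best] / len(frame_preds) >= threshold:
        return labels[best]            # non-eating / weak-eating / strong-eating
    return "uncertain"
```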
Number of references:31
Main heading:Classification (of information)
Controlled terms:Convolutional neural networks - Feeding - Fish - Flow fields - Forecasting - Graphic methods - Multilayer neural networks - Optical flows - Pixels - Textures
Uncontrolled terms:Classification algorithm - Classification functions - Classification networks - Classification results - Comparative experiments - Laboratory environment - Production environments - Recirculating aquaculture system
Classification code:631.1 Fluid Flow, General - 691.2 Materials Handling Methods - 716.1 Information Theory and Signal Processing - 741.1 Light/Optics
Numerical data indexing:Percentage 5.00e+01%, Percentage 8.00e+01%, Percentage 9.14e+01%, Percentage 9.87e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.027
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 13>
Accession number:20213210735482
Title:Development and experiment of the portable real-time detection system for citrus pests
Title of translation:便携式柑橘虫害实时检测系统的研制与试验
Authors:Wang, Linhui (1, 3); Lan, Yubin (2, 3); Liu, Zhizhuang (1); Yue, Xuejun (2, 3); Deng, Shuwei (1); Guo, Yijuan (1)
Author affiliation:(1) School of Intelligent Manufacturing, Hunan University of Science and Engineering, Yongzhou; 425199, China; (2) College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou; 510642, China; (3) National Precision Agriculture International Joint Research Center of Aerial Application Technology, Guangzhou; 510642, China
Corresponding author:Liu, Zhizhuang(liuzz168@126.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:282-288
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">In order to achieve rapid, accurate and non-destructive detection of citrus pest infestation levels in fruit trees, a real-time citrus pest detection system based on a deep convolutional neural network was designed and developed in this study. The system was composed of a perception layer, a network layer and an application layer. The perception layer was responsible for the collection and identification of pest image data; the network layer was responsible for data encoding, authentication and transmission between the detection instrument and the cloud server, and between the cloud server and the client; the application layer calculated the degree of damage resulting from pests based on the number of pests in the target image, then a Beidou module was introduced to obtain the location information of the sampling points, and finally a visual pest heat map was generated. In order to obtain a pest recognition model suitable for the computing capability of embedded devices, MobileNet was preferred as the pest image feature extraction network. A region proposal network generated preliminary position candidate boxes for the pests, and Faster Region Convolutional Neural Networks (Faster R-CNN) realized the classification and positioning of the candidate boxes. The results showed that, compared with the VGG16 and GoogLeNet feature extraction networks, MobileNet had the smallest parameter count, only 15.147 M, and its Mean Average Precision (mAP) and Accuracy (ACC) were 86.40% and 91.07%, respectively. Although the mAP and ACC of MobileNet were lower than those of VGG16 and higher than those of GoogLeNet, the average time for MobileNet to process an image was 286 ms, much less than VGG16 (679 ms) and GoogLeNet (459 ms). Considering these factors comprehensively, MobileNet was used as the feature extraction network of the citrus pest detection model in this study. For the two citrus pests considered, spider mites and aphids, the recognition rates were both high, reaching 91.0% and 89.0%, respectively, indicating that the sample features were selected correctly. In terms of counting accuracy, spider mites reached 90.1%, while aphids reached only 43.8%. The main reason was that aphids were dense, obscured each other and overlapped, so it was difficult to label every pest during annotation. During model training, some aphid samples therefore became negative samples and the accuracy was reduced. In addition, the pest distribution heat map had a small error and directly displayed the degree of damage at different target points. The system realized accurate identification and positioning of citrus pests, and provided accurate information services for pesticide spraying operations.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
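The abstract describes a Faster R-CNN detector with a MobileNet feature extraction backbone. The hedged sketch below shows how such a detector can be assembled with torchvision (>= 0.13), using its MobileNetV3-FPN variant as an approximation of the MobileNet backbone in the paper; the three-class label set (background, spider mite, aphid) is an assumption.

```python
import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + spider mite + aphid (assumed label set)

def build_pest_detector():
    # Lightweight Faster R-CNN with a MobileNet-family backbone; torchvision's
    # MobileNetV3-FPN model stands in for the MobileNet used in the paper.
    model = fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

model = build_pest_detector().eval()
with torch.no_grad():
    # each prediction is a dict with "boxes", "labels" and "scores"
    detections = model([torch.rand(3, 600, 600)])
print(detections[0]["boxes"].shape)
```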
Number of references:23
Main heading:Network layers
Controlled terms:Application Layer - Citrus fruits - Cloud computing - Convolution - Convolutional neural networks - Damage detection - Deep neural networks - Extraction - Feature extraction - Image processing - Information services - Orchards - Radio navigation - Signal detection
Uncontrolled terms:Candidate network - Detection instruments - Image feature extractions - Location information - Nondestructive detection - Pesticide spraying - Real-time detection - Recognition models
Classification code:716.1 Information Theory and Signal Processing - 716.3 Radio Systems and Equipment - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 802.3 Chemical Operations - 821.3 Agricultural Methods - 821.4 Agricultural Products - 903.4 Information Services
Numerical data indexing:Percentage 4.38e+01%, Percentage 8.64e+01%, Percentage 8.90e+01%, Percentage 9.01e+01%, Percentage 9.10e+01%, Percentage 9.11e+01%, Time 2.86e-01s, Time 4.59e-01s, Time 6.79e-01s
DOI:10.11975/j.issn.1002-6819.2021.09.032
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 14>
Accession number:20213210735513
Title:Efficient detection method for young apples based on the fusion of convolutional neural network and visual attention mechanism
Title of translation:融合卷积神经网络与视觉注意机制的苹果幼果高效检测方法
Authors:Song, Huaibo (1); Jiang, Mei (1); Wang, Yunfei (1); Song, Lei (1)
Author affiliation:(1) College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling; 712100, China
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:297-303
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Accurate detection of young fruits is critical to obtaining growth data, particularly in the high-throughput, automatic acquisition of phenotypic information that serves as the basis of fruit tree breeding. Since fruits at the young stage are small and similar in color to the leaves, they are difficult to detect with deep learning. In this study, an improved YOLOv4 network model (YOLOv4-SENL) was proposed to achieve highly efficient detection of young apples in a natural environment, combining Squeeze-and-Excitation (SE) and Non-local (NL) blocks. The feature extraction backbone of YOLOv4 was utilized to extract high-level features, whereas the SE block was used to reorganize and consolidate the high-level features in the channel dimension to enhance the channel information. The NL block was added to the three paths of the improved Path Aggregation Network (PAN), combining non-local information with the local information obtained by convolution operations to enhance the features. The two visual attention mechanisms (SE and NL blocks) were used to re-integrate high-level features from both the channel and non-local aspects, with emphasis on channel information and long-range dependencies in the features, thereby improving the ability to capture the characteristics of background and fruit. Finally, coordinate regression and classification were performed on the feature maps of young apples at different scales. The pre-training weights of the backbone network on the MS COCO dataset were loaded during network training, and stochastic gradient descent was used to update the parameters. The initial parameters were set as follows: the initial learning rate was 0.01, the number of training epochs was 350, the weight decay rate was 0.000 484, and the momentum factor was 0.937. A total of 3 000 images were collected in the natural environment, covering young fruits in different periods and under different interference factors, with abundant samples. Four indexes were selected to evaluate the detection performance of the models in the experiments: precision, recall, F<inf>1</inf> score, and average precision. 1 920 images of the dataset were used for training; the average precision of the network on the 600 test-set images was 96.9%, which was 6.9, 1.5, and 0.2 percentage points higher than that of the SSD, Faster R-CNN, and YOLOv4 models, respectively. The size of the YOLOv4-SENL model was 69 M larger than that of the SSD model, 59 M smaller than that of the Faster R-CNN model, and 11 M larger than that of the YOLOv4 model, indicating that accurate detection of young apple objects was realized. The ablation experiment on the 480 validation-set images showed that, retaining only the SE block in YOLOv4-SENL, the precision of the model was improved by 3.8 percentage points compared with the YOLOv4 model. Retaining only the three NL visual attention modules in YOLOv4-SENL, the precision was improved by 2.7 percentage points compared with the YOLOv4 model. With both the SE and NL blocks in YOLOv4-SENL, the precision was improved by 4.1 percentage points compared with the YOLOv4 model. These results indicated that the two visual attention mechanisms significantly improved the network's perception of young apples with only a small increase in parameters.
This finding can provide a potential reference for obtaining growth information in fruit breeding.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
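A minimal PyTorch sketch of a Squeeze-and-Excitation block of the kind the abstract combines with YOLOv4; the reduction ratio of 16 and the tensor shapes are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pooling followed by a two-layer
    bottleneck that produces per-channel weights (channel attention)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))             # squeeze: B x C
        w = self.fc(w).view(b, c, 1, 1)    # excitation: per-channel weights
        return x * w                       # re-scale the high-level features

feat = torch.rand(2, 256, 13, 13)          # e.g. a backbone feature map
print(SEBlock(256)(feat).shape)            # torch.Size([2, 256, 13, 13])
```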
Number of references:25
Main heading:Convolutional neural networks
Controlled terms:Behavioral research - Convolution - Decay (organic) - Deep learning - Fruits - Gradient methods - Image enhancement - Object detection - Orchards - Statistical tests - Trees (mathematics)
Uncontrolled terms:Ablation experiments - Automatic acquisition - Channel information - High-level features - Interference factor - Long-range dependencies - Natural environments - Visual attention mechanisms
Classification code:716.1 Information Theory and Signal Processing - 723.2 Data Processing and Image Processing - 801.2 Biochemistry - 821.3 Agricultural Methods - 821.4 Agricultural Products - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 921.6 Numerical Methods - 922.2 Mathematical Statistics - 971 Social Sciences
Numerical data indexing:Percentage 9.69e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.034
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 15>
Accession number:20213210735481
Title:Extraction of maize field ridge centerline based on FCN with UAV remote sensing images
Title of translation:基于FCN的无人机玉米遥感图像垄中心线提取
Authors:Zhao, Jing (1, 3); Cao, Dianlong (1, 3); Lan, Yubin (1, 3); Pan, Fangjiang (1, 3); Wen, Yuting (1, 3); Yang, Dongjian (1, 3); Lu, Liqun (2, 3)
Author affiliation:(1) School of Agricultural Engineering and Food Science, Shandong University of Technology, Zibo; 255000, China; (2) School of Transportation and Vehicle Engineering, Shandong University of Technology, Zibo; 255000, China; (3) International Precision Agriculture Aviation Application Technology Research Center, Shandong University of Technology, Zibo; 255000, China
Corresponding author:Lu, Liqun(luliqun@163.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:72-80
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">A Fully Convolutional Network (FCN) was proposed here to extract ridge centerlines of a maize field from Unmanned Aerial Vehicle (UAV) remote sensing images, in support of global path planning for agricultural robots travelling between rows of maize fields. The concept of a ridge area (R-area) was introduced, defined as the area obtained by sweeping a straight line of fixed width along the centerline of the ridge. The R-area was semantically segmented to form a defined semantic region without clear boundaries. A dataset was built for extracting the centerline of farmland ridges, and the FCN was used to extract the R-area. The maize ridge centerlines were manually annotated and rasterized in the remote sensing images, and threshold extraction was applied to obtain the annotated image after Gaussian blur. The annotated and original images were divided into blocks using a sliding window, and these divided images were used for training. It was found that the accuracy (for each model trained at each line width, evaluated on the stitched test-set image), the recall, and the harmonic mean were 66.1%-83.4%, 51.1%-73.9%, and 57.6%-78.4%, respectively. The trained FCN model was then used to predict images of a verification field. The model showed excellent robustness in complex situations, such as weeds between rows, uneven growth, and sprinklers above the crops. The predicted blocks were placed back at their original positions to obtain the R-area distribution map, and projection division was performed on the R-area distribution map to acquire the ridge centerline. 19 339 slices were obtained using the segmented projection, where the number of slices equaled the pixel height of the original maize remote sensing orthophoto. The center point of each ridge was obtained by projecting each slice, and the center points were connected to produce a centerline distribution map, which can be directly applied to agricultural robot navigation. An experiment was designed to explore the effects of the R-area line width on model training and centerline extraction, comparing the confusion matrices after model training with different ridge line widths, as well as the accuracy of models trained with different line widths within different error ranges. The results demonstrated that the best performance was obtained when the line width was 9 pixels; other line widths reduced these metrics. The optimal accuracy of the ridge centerline was 91.2% within a deviation range of about 77 mm, and 61.5% within a deviation range of about 31.5 mm. Extracting the ridge centerline was thus transformed into semantic segmentation of the R-area in UAV remote sensing images, and the FCN can be expected to segment ridge and semantic regions without obvious boundaries. This finding shows that semantic segmentation networks in deep learning can support global path planning for agricultural robots in intelligent farming.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
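A small sketch of the projection-division idea described above: given a binary R-area mask predicted by the network, each image row is treated as one slice and the centre of every contiguous ridge run in that slice becomes a centerline point. This is a simplified reading of the procedure, with a hypothetical function name.

```python
import numpy as np

def ridge_centerline_points(r_area_mask: np.ndarray):
    """For every row (slice) of a binary R-area mask, return the centre column of
    each contiguous ridge segment; connecting these points per ridge approximates
    the ridge centerline used for navigation."""
    points = []
    for row_idx, row in enumerate(r_area_mask.astype(bool)):
        cols = np.flatnonzero(row)
        if cols.size == 0:
            continue
        # split the row into contiguous runs (one run per ridge crossing the slice)
        breaks = np.where(np.diff(cols) > 1)[0] + 1
        for run in np.split(cols, breaks):
            points.append((row_idx, int(run.mean())))
    return points

mask = np.zeros((6, 12), dtype=np.uint8)
mask[:, 3:5] = 1                            # a toy ridge region two pixels wide
print(ridge_centerline_points(mask)[:3])    # [(0, 3), (1, 3), (2, 3)]
```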
Number of references:36
Main heading:Remote sensing
Controlled terms:Agricultural robots - Agriculture - Antennas - Convolutional neural networks - Deep learning - Extraction - Hose - Image segmentation - Pixels - Robot programming - Robots - Semantics - Unmanned aerial vehicles (UAV)
Uncontrolled terms:Confusion matrices - Convolutional networks - Distribution maps - Global path planning - Optimal accuracy - Remote sensing images - Semantic segmentation - UAV remote sensing
Classification code:619.1 Pipe, Piping and Pipelines - 652.1 Aircraft, General - 731.5 Robotics - 802.3 Chemical Operations - 821 Agricultural Equipment and Methods; Vegetation and Pest Control
Numerical data indexing:Percentage 5.76e+01% to 7.84e+01%, Percentage 6.15e+01%, Percentage 6.61e+01% to 8.34e+01%, Percentage 9.12e+01%, Size 3.15e-02m, Size 7.70e-02m
DOI:10.11975/j.issn.1002-6819.2021.09.009
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 16>
Accession number:20213210735555
Title:Design and test of the BDS navigation system for trenchless pipe laying machines
Title of translation:无沟铺管机北斗导航控制系统设计与试验
Authors:Wang, Jizhong (1); Zhao, Bo (1); Zhao, Shimeng (1); Xing, Gaoyong (1); Wei, Liguo (1); Hu, Xiaoan (1)
Author affiliation:(1) Chinese Academy of Agricultural Mechanization Sciences, Beijing; 100083, China
Corresponding author:Zhao, Bo(zhaoboshi@126.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:47-54
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">In order to solve the problem of pipeline bending caused by slippage of trenchless pipe laying machines, a navigation control system for the trenchless pipe laying machine was designed based on Real-Time Kinematic BeiDou Navigation Satellite System (RTK-BDS) positioning. Following the idea of multi-mode control, a multi-mode adaptive PID (Proportion-Integral-Differential) control algorithm for the trenchless pipe laying machine was proposed. Analysis of the kinematic model and slip yaw model of tracked vehicles showed that the turning radius of the vehicle increases when slipping occurs. Through analysis of the deviation model of the vehicle, calculation models of the heading deviation and lateral deviation were given. Based on analysis of the structure and related structural equations of the vehicle's hydraulic motor, the control transfer function of the hydraulic motor of the trenchless pipe laying machine was obtained. The navigation control system mainly consists of a BP (Back Propagation) neural network classifier, a modal selector, a knowledge base, an adaptive comparator and a walking controller. The BP neural network classifier uses sensors to detect the running speeds of the left and right tracks, the actual average vehicle speed, the engine power and the oil pump pressure of the hydraulic motors on both sides of the machine, so as to obtain the running state of the relevant vehicle modes, and the sampled data are input to the computer. According to the model trained by the BP neural network, the current state class of the vehicle is predicted. The modal selector obtains the relevant parameters of the current vehicle state from the knowledge base according to the classification results of the BP neural network, and sends them to the adaptive comparator and the walking controller. The knowledge base contains the adaptive functions of the vehicle control system and the position control parameters. The adaptive comparator calculates the weights of the two errors according to the current heading error and lateral error of the vehicle relative to the reference line, and compares the two weights. If the weight of the heading error is larger than that of the lateral error, the heading PID controller is selected to control vehicle navigation; otherwise the lateral PID controller is selected. The vehicle modal control parameters and BP neural network training samples were obtained through field tests. The test results showed that the lateral overshoot was within 4.58 cm, the heading error was reduced to within ±3.7°, and the lateral error was stable within ±2.6 cm. The linear control experiments showed that the heading error of the control algorithm was within ±5.5°, and within ±3° in 96.2% of cases; the lateral error was within ±2.6 cm, and within ±1 cm in 89.6% of cases. The engineering application test showed that the heading error could be kept within ±7° with an average heading error within ±3.5°, and the lateral error could be kept within ±4 cm. The control system can meet the construction requirements of trenchless pipe laying machines.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
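A simplified Python sketch of the adaptive-comparator logic described above: each deviation is weighted, and the larger weighted deviation decides whether the heading PID loop or the lateral PID loop drives the steering. The weights, gains, and class layout are illustrative assumptions, not the knowledge-base values of the paper.

```python
class PID:
    """Textbook positional PID loop, used here for both heading and lateral control."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def steering_command(heading_err, lateral_err, heading_pid, lateral_pid,
                     w_heading=1.0, w_lateral=1.0, dt=0.1):
    """Adaptive comparator (simplified): the larger weighted error selects the loop."""
    if w_heading * abs(heading_err) >= w_lateral * abs(lateral_err):
        return heading_pid.step(heading_err, dt)
    return lateral_pid.step(lateral_err, dt)

cmd = steering_command(0.05, 0.01, PID(2.0, 0.1, 0.3), PID(1.5, 0.05, 0.2))
```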
Number of references:22
Main heading:Trenching
Controlled terms:Adaptive control systems - Backpropagation - Comparator circuits - Comparators (optical) - Control system synthesis - Controllers - Errors - Hydraulic motors - Kinematics - Knowledge based systems - Linear control systems - Modal analysis - Navigation systems - Neural networks - Proportional control systems - Radio navigation - Three term control systems - Tracked vehicles - Vehicle transmissions
Uncontrolled terms:Beidou navigation satellite systems - BP (back propagation) neural network - BP neural network classifier - Classification results - Construction requirements - Engineering applications - Navigation control systems - Vehicle control system
Classification code:602.2 Mechanical Transmissions - 619.1 Pipe, Piping and Pipelines - 632.2 Hydraulic Equipment and Machinery - 663 Buses, Tractors and Trucks - 713.5 Electronic Circuits Other Than Amplifiers, Oscillators, Modulators, Limiters, Discriminators or Mixers - 716.3 Radio Systems and Equipment - 723.4 Artificial Intelligence - 723.4.1 Expert Systems - 731.1 Control Systems - 732.1 Control Equipment - 741.3 Optical Devices and Systems - 921 Mathematics - 931.1 Mechanics
Numerical data indexing:Percentage 8.96e+01%, Percentage 9.62e+01%, Size 4.58e-02m
DOI:10.11975/j.issn.1002-6819.2021.09.006
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 17>
Accession number:20213210735474
Title:Inter-row automatic navigation method by combining least square and SVM in forestry
Title of translation:最小二乘法与SVM组合的林果行间自主导航方法
Authors:Liu, Xingxing (1); Zhang, Chao (1); Zhang, Hao (1); Yang, Shenghui (1); Jiang, Shijie (1); Zheng, Yongjun (1, 2); Su, Daobilige (1); Wan, Chang (1, 3)
Author affiliation:(1) College of Engineering, China Agricultural University, Beijing; 100083, China; (2) Engineering Research Center of Agricultural Equipment and Facilities, Ministry of Education, Beijing; 100083, China; (3) College of Mechanical and Electrical Engineering, Tarim University, Alar; 843300, China
Corresponding author:Zheng, Yongjun(zyj@cau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:157-164
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Autonomous navigation has been widely applied to agricultural working platforms in smart farming. Several kinds of sensors, such as GPS receivers and cameras, are commonly used in conventional systems. However, such automatic navigation cannot be directly extended to the orchard environment, due mainly to canopy closure and variation in light intensity. In this study, an inter-row automatic navigation method was developed for a tracked chassis using an Inertial Measurement Unit (IMU) and Light Detection And Ranging (LIDAR), thereby improving the in-orchard navigation capability of agricultural working platforms. The tracked chassis was specifically developed for orchard conditions, including a chassis, a driving implement, a power implement, and a range extender, with overall dimensions of 1 575 mm×1 190 mm×1 355 mm. The detection and control systems were implemented on a host computer and a slave computer. The host computer was in charge of data processing to obtain navigation paths and commands, while the slave computer controlled the motors using Pulse-Width Modulation (PWM). An SC-AHRS-100D2 was selected as the IMU, and an RPLIDAR S1 was used as the LIDAR scanner. The orientation and pose of the platform were acquired from the IMU, while the orchard environment was scanned by the LIDAR. First, the orientation and pose from the IMU were used to correct the LIDAR data, so that the platform remained in the correct moving direction; quaternions were transformed into Euler angles during this processing. The tree lines on both sides were then extracted using least squares, and the average line between the two lines was calculated. Next, mathematical models were established in combination with a Support Vector Machine (SVM). An optimized classification line of the environment between the tree lines was computed as the navigation path of the tracked chassis platform, ensuring a maximum interval to the tree lines on both sides. Moreover, a Proportional-Integral-Derivative (PID) controller was employed to control the platform motion using the path information, with the lateral bias selected as the evaluation standard. A series of field tests was conducted in the Bajiajiaoye Park (Dongsheng Street, Haidian District, Beijing), an apple orchard in Pinggu District, Beijing, and a citrus orchard in Guangan County, Sichuan Province of China. The data captured in the Bajiajiaoye Park were taken as the research case under several real conditions, with the trees selected as the test environment. The LIDAR was installed at the front of the tracked chassis, and each condition was tested three times at a chassis speed of 0.5 m/s. The results showed that the maximum mean absolute lateral error was 17.8 mm, and the maximum lateral error was 107.7 mm. High performance was achieved in automatic navigation, with the tracked chassis following the central line between the fruit trees, according to the statistical values of the lateral errors and the trajectory of the chassis. Furthermore, excellent adaptability was obtained for various situations. This finding can offer a technical reference for wayfinding in the autonomous navigation of ground sprayers in orchards and forestry.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
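The line-fitting and maximum-margin steps described above can be sketched as follows: each tree row is fitted with least squares, and a linear SVM is trained on the two point sets so that its decision boundary serves as the inter-row navigation path. The point layout (x lateral, y forward), the regularization constant, and the function name are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC

def navigation_line(left_pts: np.ndarray, right_pts: np.ndarray):
    """left_pts / right_pts: N x 2 arrays of (x, y) LIDAR points on each tree row."""
    # least-squares row lines x = a*y + b (y is the forward direction)
    a_l, b_l = np.polyfit(left_pts[:, 1], left_pts[:, 0], 1)
    a_r, b_r = np.polyfit(right_pts[:, 1], right_pts[:, 0], 1)

    # maximum-margin separating line between the two rows
    X = np.vstack([left_pts, right_pts])
    y = np.hstack([np.zeros(len(left_pts)), np.ones(len(right_pts))])
    svm = LinearSVC(C=10.0).fit(X, y)
    w, b = svm.coef_[0], svm.intercept_[0]   # navigation path: w[0]*x + w[1]*y + b = 0
    return (a_l, b_l), (a_r, b_r), (w, b)
```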
Number of references:25
Main heading:Optical radar
Controlled terms:Agricultural robots - Chassis - Computer control systems - Data handling - Errors - Forestry - Least squares approximations - Navigation - Orchards - Proportional control systems - Pulse width modulation - Support vector machines - Timber - Voltage control
Uncontrolled terms:Automatic navigation - Autonomous navigation - Chassis platforms - Detection and control systems - Evaluation standard - Inertial measurement unit - Light detection and ranging - Path informations
Classification code:662.4 Automobile and Smaller Vehicle Components - 716.2 Radar Systems and Equipment - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 731.1 Control Systems - 731.3 Specific Variables Control - 821.3 Agricultural Methods - 921.6 Numerical Methods
Numerical data indexing:Size 1.08e-01m, Size 1.78e-02m, Velocity 5.00e-01m/s
DOI:10.11975/j.issn.1002-6819.2021.09.018
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 18>
Accession number:20213210735490
Title:Development and experiments of the autonomous driving system for high-clearance spraying machines
Title of translation:高地隙施药机自动驾驶系统研制与试验
Authors:Yin, Xiang (1); An, Jiahao (1); Wang, Yanxin (1); Wang, Yingkuan (1, 2); Jin, Chengqian (1, 3)
Author affiliation:(1) School of Agricultural Engineering and Food Science, Shandong University of Technology, Zibo; 255000, China; (2) Academy of Agricultural Planning and Engineering, Ministry of Agriculture and Rural Affairs, Beijing; 100125, China; (3) Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing; 210000, China
Corresponding author:Jin, Chengqian(412114402@qq.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:22-30
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">This study aims to improve the automation and intelligence of high-clearance sprayers while avoiding pesticide poisoning of operators. An unmanned high-clearance sprayer was therefore developed and manufactured using state-of-the-art autonomous navigation, mechanical, electrical, and hydraulic technologies. A conventional high-clearance boom sprayer was selected to serve as the platform. The electrical system of the sprayer was composed of five sub-systems: driving control, navigation, remote control, spraying, and the ground station. Electric devices were designed to realize automatic control of engine start/stop, four-wheel steering, throttle aperture, moving speed, the spraying pump, and the boom beams. A PIC18F258 micro-controller with CAN and serial ports was utilized to process data and then send signals to the relays and motor drivers that drove DC motors as actuators. An electric steering system was also developed, including a brushless motor, a potentiometer, a motor driver, and a steering controller. The brushless motor provided the steering torque, with its output shaft connected directly to the input shaft of the hydraulic steering unit. A CAN-bus communication network was established to allow real-time switching between the two modes, remote control and autonomous navigation. A dual-antenna RTK-GNSS receiver and an Inertial Measurement Unit (IMU) were used as navigation sensors to collect positioning and attitude data. An attitude-based correction was proposed to compensate for positioning measurements corrupted by chassis inclination, thereby accurately acquiring the actual position of the sprayer. The RTK-GNSS positioning data were also utilized to calculate the actual minimum turning radius during the headland turn, particularly considering the kinematic characteristics in fields with various soil conditions. A straight path also needed to be planned according to the turning radius, in order to ensure an explicit turning trajectory and accurate path tracking after finishing the headland turn, because the distance between adjacent working paths, with a working width of 12 m, was considerably larger than the turning radius. An automatic calibration was introduced to determine the range of the steering angle, the steering angle for straight driving, and the heading measurement shift required for high-accuracy driving. The calibration was also necessary to account for the installation of the GNSS antennas and the potentiometer at different fixing locations on the machine body. As such, a comprehensive validation was obtained for the automatic operating mechanisms and the CAN-bus network communication. A series of experiments was also conducted to evaluate the performance of the newly developed unmanned high-clearance sprayer under remote control and autonomous navigation, in terms of automatic operation in path tracking. The results showed that, in terms of lateral error, the maximum values were 20.81 and 8.84 cm under remote control and autonomous navigation, with average errors of 0.90 and 3.16 cm to the left, and maximal root mean square errors of 7.47 and 2.66 cm, respectively, indicating that the executing mechanisms responded to operation commands in a stable and rapid way.
The driving performance under autonomous navigation was much better than that under remote control for agricultural spraying.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
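One common form of the attitude-based positioning correction mentioned above is a lever-arm projection of the GNSS antenna position down to the chassis reference point using the IMU roll and pitch angles; the sketch below illustrates that idea. The geometry, sign conventions, and function name are assumptions and do not reproduce the paper's exact correction model.

```python
import numpy as np

def tilt_compensated_position(e_ant, n_ant, heading, roll, pitch, h_ant):
    """Project the antenna position (east/north, m) down to the chassis reference
    point, assuming the antenna sits h_ant metres above that point on the
    machine's vertical axis (angles in radians, heading measured from north)."""
    dx_body = h_ant * np.sin(roll)    # lateral shift caused by roll
    dy_body = h_ant * np.sin(pitch)   # longitudinal shift caused by pitch
    c, s = np.cos(heading), np.sin(heading)
    d_e = c * dx_body + s * dy_body   # rotate body-frame offset into east/north
    d_n = -s * dx_body + c * dy_body
    return e_ant - d_e, n_ant - d_n

print(tilt_compensated_position(100.0, 200.0, 0.0, np.radians(3), np.radians(1), 2.8))
```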
Number of references:24
Main heading:Electric machine control
Controlled terms:Agricultural robots - Automation - Automobile drivers - Automobile steering equipment - Banks (bodies of water) - Brushless DC motors - Controllers - Errors - Four wheel steering - Global positioning system - Mean square error - Navigation - Potentiometers (electric measuring instruments) - Potentiometers (resistors) - Remote control - Satellite antennas
Uncontrolled terms:Automatic calibration - Autonomous navigation - Can bus communications - Driving performance - Inertial measurement unit - Kinematic characteristics - Root mean square errors - Steering controllers
Classification code:407.2 Waterways - 432 Highway Transportation - 662.4 Automobile and Smaller Vehicle Components - 704.1 Electric Components - 705.3.2 DC Motors - 716 Telecommunication; Radar, Radio and Television - 731 Automatic Control Principles and Applications - 732.1 Control Equipment - 922.2 Mathematical Statistics - 942.1 Electric and Electronic Instruments
Numerical data indexing:Size 1.20e+01m, Size 2.08e-01m, Size 2.66e-02m, Size 3.16e-02m, Size 7.47e-02m, Size 8.84e-02m, Size 9.00e-03m
DOI:10.11975/j.issn.1002-6819.2021.09.003
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 19>
Accession number:20213210735515
Title:Position-velocity coupling control method and experiments for longitudinal relative position of harvester and grain truck
Title of translation:收获机与运粮车纵向相对位置位速耦合协同控制方法与试验
Authors:Zhang, Wenyu (1); Zhang, Zhigang (1); Luo, Xiwen (1); He, Jie (1); Hu, Lian (1); Yue, Binbin (1)
Author affiliation:(1) Key Laboratory of Key Technology on Agricultural Machine and Equipment, Ministry of Education, South China Agricultural University, Guangzhou; 510642, China
Corresponding author:Zhang, Zhigang(zzg208@scau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:1-11
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Intelligent robot systems have become an essential development direction for whole-process, all-day, unmanned farm management in smart agriculture. It is therefore necessary for the harvester and the grain truck to cooperate in order to realize autonomous operation in the harvesting stage. In this study, a longitudinal relative position cooperative control system was designed for the process of master-slave navigation harvesting and cooperative grain unloading, suitable for a trailer drive system with high nonlinearity. A parallel cooperative model of the two machines was established to calculate the deviation of the longitudinal relative position, in which the relative position of the harvester and the grain truck was represented geometrically. Linear tracking was used to control the transverse distance deviation, since the harvester and the grain truck planned their operation paths separately. For the longitudinal distance error, the throttle of the grain truck was used to adjust the longitudinal relative distance and thereby control the forward speed. A position-velocity coupling controller was designed to calculate the desired throttle, consisting of a speed-feedback Proportional-Derivative (PD) controller and a position-velocity integrated decision bang-bang controller. The switch function of the bang-bang controller was derived from the dynamic features, with good robustness. An open-loop second-order transfer function from throttle to speed was obtained by area identification to optimize the parameters of the controller. A simulation model of longitudinal relative position control was constructed, according to the transfer function, to optimize the parameters of the position-velocity coupling controller. A field experiment was conducted to verify the reliability of the model, and a comparison was also performed between the designed control system and traditional PD control. The simulation results showed that the designed controller fully adapted to changes of the host speed in practical operation, indicating better adaptability than traditional PD control. A two-machine cooperative navigation test was set up to determine the adaptability and accuracy of the position-velocity coupled longitudinal relative position control in field operation. Both the harvester (Lovol Heavy Industry GE80S-H) and the grain truck (Lovol Heavy Industry M1104) were fitted with an electrically controlled chassis to realize electronic steering and engine speed control. Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS) positioning modules (K728 of Si Nan Company) were used, with a location acquisition frequency of 10 Hz and a horizontal positioning accuracy of ±(10+D×10<sup>-6</sup>) mm, where D is the distance between the base station and the mobile station in km. A wheel angle sensor (BEI-9902120CW) was used, with a nonlinearity of ±2% and an A/D sampling accuracy of 12 bits. The switch actuator was a Rexroth HT801053. Two sets of communication modules with a 2.4 GHz frequency were used for dual-machine communication (EBYTE E34-DTU (2G4D20)); the modules and the control terminals communicated via RS-232, and the control terminals were AGCS-I controllers with touch screens. A CAN bus was adopted to connect the control terminals with the chassis electronic control units of the two machines. The position-velocity coupled longitudinal relative position control was transplanted into the AGCS-I controller.
Metrowerks CodeWarrior for ARM Developer Suite v1.2 was adopted for development. Collaborative system experiments were conducted in a pilot field at the Lovol Arbos Intelligent Agriculture Demonstration Base. The experimental results showed that the longitudinal relative position deviation converged rapidly under initial longitudinal deviations of 3, 7, and 10 m when the speed of the main machine was 1 m/s, with average adjustment times of the system response of 7.73, 17.2, and 23.2 s, respectively. The average steady-state longitudinal relative position deviation was 0.091 8 m, and the standard deviation of the steady-state longitudinal relative position deviation was 0 m, while the control accuracy of 1 173 was suitable for the requirement of cooperative grain unloading, indicating excellent adaptability to the initial deviation. In addition, a wheat harvest test of the dual-machine cooperative system was carried out in Jinchang, Gansu Province of China, to evaluate the performance of the position-velocity coupled longitudinal relative position control in actual harvest operation. The field experimental results showed that the average steady-state longitudinal relative position deviation was 0.077 8 m, and the standard deviation of the steady-state longitudinal relative position deviation was 0.091 3 m, indicating cooperative accuracy high enough for cooperative grain unloading during harvest. These findings can provide sound support for high-precision autonomous harvesting operation systems in smart farming.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
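A very rough sketch of how the position-velocity coupling decision can be expressed: a speed-feedback PD term is combined with a bang-bang term whose sign comes from a switch function of the form s = position error + k·velocity error. The class layout, switch function, and gains are illustrative assumptions, not the controller derived in the paper.

```python
class PositionVelocityCoupledThrottle:
    """Simplified coupling controller: PD on the speed error plus a bang-bang term
    switched by s = pos_err + k_switch * vel_err (all gains are illustrative)."""
    def __init__(self, kp, kd, k_switch, bang_amp):
        self.kp, self.kd = kp, kd
        self.k_switch, self.bang_amp = k_switch, bang_amp
        self.prev_vel_err = 0.0

    def throttle(self, pos_err, vel_err, dt):
        s = pos_err + self.k_switch * vel_err                 # switching function
        bang = self.bang_amp * (1 if s > 0 else -1 if s < 0 else 0)
        pd = self.kp * vel_err + self.kd * (vel_err - self.prev_vel_err) / dt
        self.prev_vel_err = vel_err
        return pd + bang

ctrl = PositionVelocityCoupledThrottle(kp=0.8, kd=0.1, k_switch=0.5, bang_amp=0.2)
print(ctrl.throttle(pos_err=3.0, vel_err=0.4, dt=0.1))
```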
Number of references:32
Main heading:Longitudinal control
Controlled terms:Agricultural robots - Automobile engines - Chassis - Control systems - Controllers - Electronics industry - Global positioning system - Grain (agricultural product) - Harvesters - Harvesting - Intelligent robots - Mergers and acquisitions - Mobile telecommunication systems - Navigation - Real time systems - Statistics - Touch screens - Transfer functions - Trucks - Unloading - Velocity
Uncontrolled terms:Autonomous operations - Communication modules - Cooperative navigations - Development directions - Global Navigation Satellite Systems - Machine communications - Proportional-derivative controllers - Relative position control
Classification code:661.1 Automotive Engines - 663.1 Heavy Duty Motor Vehicles - 663.2 Heavy Duty Motor Vehicle Components - 691.2 Materials Handling Methods - 722.2 Computer Peripheral Equipment - 722.4 Digital Computers and Systems - 731 Automatic Control Principles and Applications - 732.1 Control Equipment - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 921 Mathematics - 922.2 Mathematical Statistics
Numerical data indexing:Frequency 1.00e+01Hz, Frequency 2.40e+09Hz, Size 0.00e+00m, Size 1.00e+01m, Time 2.32e+01s, Velocity 1.00e+00m/s
DOI:10.11975/j.issn.1002-6819.2021.09.001
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 20>
Accession number:20213210735538
Title:Field road segmentation method based on improved UNet
Title of translation:田间道路改进UNet分割方法
Authors:Yang, Lili (1); Chen, Yan (1); Tian, Weize (1); Xu, Yuanyuan (1); Ou, Feifan (1); Wu, Caicong (1)
Author affiliation:(1) College of Information and Electrical Engineering, China Agricultural University, Beijing; 100083, China
Corresponding author:Wu, Caicong(wucc@cau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:185-191
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Automatic driving of agricultural machinery has drawn increasing attention in recent years, particularly with the development of precision farming and the improvement of sensor technologies. Autonomous driving consists of four parts: positioning, perception, decision-making, and control. In perception, road recognition aims to extract the drivable area for the safe driving of agricultural machinery. However, field roads have no obvious lane markings or signs, their borders are irregular in shape, and they are often shaded by trees; all of these features make field road identification difficult, unlike structured urban roads. For road recognition, semantic segmentation of the collected road images is a per-pixel binary classification of background and road used to extract the drivable area. In this study, data were collected in spring and summer in Yufa Town, Daxing District, Beijing, China. A stereo camera was fixed on the agricultural machine to collect image data; the fixed mounting, at a height of 1.2 m, ensured that the camera was firm, reliable, and unobscured during driving. The driving speed of the agricultural machinery was about 5 km/h during data collection. The field roads included semi-structured and unstructured roads. Data were collected on sunny days over about 4 hours, and a total of 1 600 pictures were captured. The training and test sets were divided at a ratio of 4:1, and the open-source software Labelme was used for image labeling. UNet was selected as the basic network for its simplicity and suitability for binary classification, and for its good performance when trained on a small dataset. Three improvements to UNet were proposed. 1) An identity mapping channel was established between every two convolutions, and a residual was constructed by pixel-wise addition; the residual connection was used to alleviate gradient vanishing and explosion during training and to ease the training of deep neural networks. 2) A fused structure of convolution and maximum pooling was established to replace the maximum pooling layer in UNet, maximizing the useful information retained from the original image when halving the feature map and significantly improving the segmentation of small-area features; the inference time of the model became longer because the additional convolution operations increased the number of training parameters. 3) An asymmetric convolution structure was used in ACBlock, in which the weight of the "skeleton" of the convolution kernel was increased to improve the efficiency of feature extraction. Inspired by ACBlock, DACBlock was proposed using dilated convolution, which further expanded the receptive field of the convolution feature map. ACBlock and DACBlock were used to replace the 3×3 convolution kernels in UNet, significantly improving the segmentation accuracy of road edge shapes. Hierarchical fusion and batch normalization were used in the inference stage so that the number of parameters and the inference time remained the same as in the original structure. The improved UNet achieved an IOU of 85.03% for field road segmentation, higher than the original UNet, ResUNet, and UNet3+. The recognition accuracy was relatively lower under cloudy weather and at road junctions, due to insufficient light and occlusion.
After rain there was often water in the middle of the road, and a certain degree of specular reflection occurred on the water surface, which increased the road segmentation error. Under good light, or the weak light of evening and shade, the road segmentation performed well enough for the safe driving of agricultural machinery. The segmentation accuracy for distant roads and road edges was also significantly better than that of the other networks. Moreover, the average inference time of the model was 163 ms, meeting the time requirements of automatic driving of agricultural machinery.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
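A short PyTorch sketch of the asymmetric convolution idea used in ACBlock: parallel 3×3, 1×3 and 3×1 branches are summed, which strengthens the "skeleton" of the 3×3 kernel; because the branches are linear and share the same input, they can be folded into one 3×3 kernel at inference. This is a simplified sketch (the paper's DACBlock additionally uses dilated convolution), and the channel counts are illustrative.

```python
import torch
import torch.nn as nn

class ACBlock(nn.Module):
    """Asymmetric convolution block: 3x3 + 1x3 + 3x1 branches summed before BN/ReLU."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.square = nn.Conv2d(in_ch, out_ch, (3, 3), padding=(1, 1), bias=False)
        self.hor = nn.Conv2d(in_ch, out_ch, (1, 3), padding=(0, 1), bias=False)
        self.ver = nn.Conv2d(in_ch, out_ch, (3, 1), padding=(1, 0), bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # the three kernels can be summed into a single 3x3 kernel for inference,
        # so parameters and inference time match a plain 3x3 convolution
        return self.act(self.bn(self.square(x) + self.hor(x) + self.ver(x)))

print(ACBlock(64, 64)(torch.rand(1, 64, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```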
Number of references:25
Main heading:Road and street markings
Controlled terms:Agricultural machinery - Agricultural robots - Agriculture - Automobile drivers - Cameras - Convolution - Data acquisition - Decision making - Deep neural networks - Image enhancement - Image segmentation - Information dissemination - Open source software - Open systems - Pixels - Roads and streets - Semantics - Stereo image processing
Uncontrolled terms:Agricultural machine - Binary classification - Convolution structure - Hierarchical fusions - Recognition accuracy - Road identifications - Segmentation accuracy - Semantic segmentation
Classification code:406.2 Roads and Streets - 432 Highway Transportation - 716.1 Information Theory and Signal Processing - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 742.2 Photographic Equipment - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 821.1 Agricultural Machinery and Equipment - 903.2 Information Dissemination - 912.2 Management
Numerical data indexing:Percentage 8.50e+01%, Size 1.20e+00m, Time 1.44e+04s, Time 1.63e-01s, Velocity 1.39e+00m/s
DOI:10.11975/j.issn.1002-6819.2021.09.021
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 21>
Accession number:20215111365867
Title:Real-time detection method of seafood for intelligent construction of marine ranch
Title of translation:面向海洋牧场智能化建设的海珍品实时检测方法
Authors: (1); (1); (1); (1); (1)
Author affiliation:(1) School of Information Engineering, Dalian Ocean University, Dalian; 116023, China
Corresponding author:
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:304-311
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract"></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
Number of references:31
Main heading:Convolution
Controlled terms:Computer vision - Convolutional neural networks - Deep neural networks - Generative adversarial networks - Image enhancement - Instance Segmentation - Meats - Object detection - Object recognition
Uncontrolled terms:Convolutional neural network - Data informations - Deep learning - Depth separable convolution - Network models - Object detection method - Percentage points - Real-time - Seafood detection - YOLOv3
Classification code:461.4 Ergonomics and Human Factors Engineering - 716.1 Information Theory and Signal Processing - 723.2 Data Processing and Image Processing - 723.4 Artificial Intelligence - 723.5 Computer Applications - 741.2 Vision - 822.3 Food Products
Numerical data indexing:Percentage 7.00E+01%
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 22>
Accession number:20213210735503
Title:Posture change recognition of lactating sow by using 2D-3D convolution feature fusion
Title of translation:融合2D-3D卷积特征识别哺乳母猪姿态转换
Authors:Xue, Yueju (1); Li, Shimei (1); Zheng, Chan (2); Gan, Haiming (1); Li, Chengpeng (1); Liu, Hongshan (1)
Author affiliation:(1) College of Electronic Engineering, South China Agricultural University, Guangzhou; 510642, China; (2) College of Mathematics and Informatics, South China Agricultural University, Guangzhou; 510642, China
Corresponding author:Liu, Hongshan(hugoliu@scau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:230-237
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Posture change of a lactating sow directly determines the preweaning survival rate of piglets. Automated recognition of sow posture change can enable early warning to improve the survival rate of piglets. The frequency, type, and duration of sow posture changes can also be used to select sows with high maternal quality as breeding pigs. But it is difficult to accurately recognize actions of sow posture change, due to the variety of posture changes, as well as the differences in range and duration of the movement. In this study, a convolutional network (2D+3D-CNet, 2D+3D convolutional Network) coupled with 2D-3D convolution feature fusion was proposed to recognize actions of sow posture change in depth images. Experimental data were collected from a commercial pig farm in Foshan City, Guangdong Province of South China. A Kinect 2.0 camera was fixed directly above the pen to record the daily activities of sows from a top view at a frame rate of 5 fps. RGB-D video collection was conducted with a depth image resolution of 512×424 pixels. Median filtering and histogram equalization were used to process the dataset. The video clips were then fed into 2D+3D-CNet for training and testing. 2D+3D-CNet included spatiotemporal and spatial feature extraction, feature fusion, action recognition, and posture classification. This approach was adopted to fully integrate video-level action recognition and frame-level posture classification. Firstly, 16-frame video clips were fed into the network, and then 3D ResNeXt-50 and Darknet-53 were used to extract the spatiotemporal and spatial features during sow movement. An SE module was added to the residual network structure of 3D ResNeXt-50, named 3D SE-ResNeXt-50, to boost the representation power of the network. The sow bounding box and the probability of posture change were generated from the action recognition after feature fusion. The sow bounding box was then mapped to Darknet-53, where the feature of the 13<sup>th</sup> convolutional layer was processed to obtain the sow regional feature maps. Next, the sow regional feature maps were fed into posture classification to finally obtain the probabilities of the four postures. Considering the spatiotemporal motion and inter-frame posture variation during sow posture change, an action score was designed to indicate the possibility of posture change, and a threshold was set to determine the start and end time of a posture change action of a sow. Once the start and end time were determined, the specific posture change was classified by combining the posture of the sow one second before the start time and one second after the end time. The method can directly recognize a specific posture change action of a sow without requiring a large number of datasets to be collected and annotated. The 2D+3D-CNet model was trained using the PyTorch deep learning framework on an NVIDIA RTX 2080Ti GPU (graphics processing unit), while the algorithm was developed on the Ubuntu 16.04 platform. The performance of the algorithm was evaluated on the test set. The classification accuracies of lateral lying, standing, sitting, and ventral lying were 100%, 98.69%, 98.24%, and 98.19%, respectively. The total recognition accuracy of sow posture change actions was 97.95%, while the total recall rate was 91.67%, and the inference speed was 14.39 frames/s. 
The accuracy increased by 5.06 and 5.53 percentage points, and the recall rate increased by 3.65 and 5.90 percentage points, respectively, compared with YOWO and MOC-D. Although the model size of 2D+3D-CNet was bigger than that of FRCNN-HMM, it had advantages in accuracy, recall, and test speed. The presented method removes hand-crafted features to achieve real-time inference and more accurate action localization.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
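A hedged sketch of the start/end decision logic described above: a per-frame action score is compared with a threshold to find the start and end of a posture-change action, and the change type is taken from the posture one second before the start and one second after the end. The function name, the example scores, and the threshold value are illustrative assumptions.

FPS = 5  # the video is recorded at 5 frames per second

def detect_posture_changes(action_scores, postures, threshold=0.5, fps=FPS):
    # action_scores: per-frame possibility of posture change (floats in [0, 1])
    # postures: per-frame posture labels, e.g. 'lateral', 'standing', 'sitting', 'ventral'
    # returns a list of (start_frame, end_frame, from_posture, to_posture)
    changes, start = [], None
    for i, score in enumerate(action_scores):
        if score >= threshold and start is None:
            start = i                                   # action begins
        elif score < threshold and start is not None:
            end = i - 1                                 # action ends
            before = postures[max(start - fps, 0)]      # posture 1 s before the start
            after = postures[min(end + fps, len(postures) - 1)]   # 1 s after the end
            if before != after:
                changes.append((start, end, before, after))
            start = None
    return changes

# toy example: a sow changes from standing to lateral lying around frames 10-14
scores = [0.1] * 10 + [0.9] * 5 + [0.1] * 10
labels = ['standing'] * 12 + ['lateral'] * 13
print(detect_posture_changes(scores, labels))           # [(10, 14, 'standing', 'lateral')]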
Number of references:31
Main heading:Convolutional neural networks
Controlled terms:Computer graphics - Convolution - Deep learning - Graphics processing unit - Image resolution - Large dataset - Mammals - Median filters - Program processors - Video cameras
Uncontrolled terms:Automated recognition - Classification accuracy - Convolutional networks - Histogram equalizations - Posture classification - Recognition accuracy - Representation power - Training and testing
Classification code:716.1 Information Theory and Signal Processing - 716.4 Television Systems and Equipment - 723.5 Computer Applications
Numerical data indexing:Percentage 1.00e+02%, Percentage 9.17e+01%, Percentage 9.80e+01%, Percentage 9.82e+01%, Percentage 9.87e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.026
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 23>
Accession number:20213210735505
Title:Optimization of the navigation path for a mobile harvesting robot in orchard environment
Title of translation:果园环境下移动采摘机器人导航路径优化
Authors:Hu, Guangrui (1); Kong, Weiyu (1); Qi, Chuang (1); Zhang, Shuo (1); Bu, Lingxin (1); Zhou, Jianguo (1); Chen, Jun (1)
Author affiliation:(1) College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling; 712100, China
Corresponding author:Chen, Jun(chenjun_jdxy@nwsuaf.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:175-184
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Obstacles such as large irregular fruit tree canopies and pedestrians are hidden dangers that hinder the movement of mobile picking robots in orchard environments. To improve the safety of a mobile picking robot in the orchard environment and prevent collisions between the robot and obstacles, this study took a spindle-shaped apple orchard as the research object and proposed a navigation path optimization method for the mobile robot based on an improved artificial potential field. The mobile picking robot was composed of a robot chassis, a LiDAR (RS-LiDAR-M1, Suteng Innovation Technology Co., Ltd., China), a main controller (Jetson TX2, NVIDIA Co. Ltd., USA), and a picking arm; the main controller ran the Ubuntu 16.04 LTS operating system, and the program was developed based on the Robot Operating System (ROS) and the Point Cloud Library (PCL). The main steps of the navigation path optimization method based on the improved artificial potential field included point cloud preprocessing, extraction of ridgelines, and optimization of the initial path. Firstly, the LiDAR carried by the mobile picking robot collected the three-dimensional point cloud between the rows of the orchard. The three-dimensional point cloud was processed through filtering, down-sampling filtering, and statistical filtering, and a ground plane algorithm removed the orchard ground point cloud and extracted the orchard ridge and fruit tree canopy point clouds. Secondly, the Least Square Method (LSM), the Hough Transform, and Random Sample Consensus (RANSAC) were used to extract the ridgelines from the orchard ridge point cloud. The middle line between the ridgelines on the two sides was used as the initial path. Finally, the artificial potential field was improved by discarding the gravitational (attractive) potential field and establishing the potential field of the fruit tree canopy profile point cloud. The initial path was discretized into discrete points, each discrete point was optimized in turn according to the improved artificial potential field, and the optimized discrete points were then fitted by a quadratic B-spline curve to obtain the optimized path, which was able to avoid large fruit tree canopies and pedestrian obstacles. The initial paths extracted by LSM, the Hough transform, and RANSAC were analyzed in terms of real-time performance and anti-noise ability. The results showed that all three methods successfully extracted the ridgeline and initial path; RANSAC had the best real-time performance, with an average processing time of about 0.147×10<sup>-3</sup> s and a standard deviation of 0.014×10<sup>-3</sup> s, and it had good anti-noise ability. Based on the initial path extracted by RANSAC, the improved artificial potential field method was used to optimize the initial path, which avoided the oscillation that the traditional artificial potential field method easily falls into. The shortest distance between the obstacle point cloud and the navigation path was increased from 0.156 m to 0.863 m, and the average optimization time and its standard deviation were 0.059 and 0.007 s, respectively, which indicated that the optimization method basically had the ability to optimize the path in real time to avoid obstacles. 
The navigation path optimization method proposed in this study could basically meet the requirements of safety and real-time operation, and could provide a technical reference for the autonomous navigation of mobile picking robots in the orchard environment.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
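A hedged NumPy sketch of the path-optimization idea described above: the attractive field is discarded, and each discretized point of the initial row-centre path is pushed away from nearby canopy/obstacle points by a repulsive potential (the B-spline refitting step is omitted). Parameter values, the step clamp, and the data are illustrative assumptions.

import numpy as np

def optimize_path(path_pts, obstacle_pts, influence_radius=0.8, gain=0.05,
                  iters=50, max_step=0.02):
    # path_pts: (N, 2) initial path points; obstacle_pts: (M, 2) obstacle point cloud
    path = path_pts.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(path):
            diff = p - obstacle_pts                     # vectors from obstacles to the point
            dist = np.linalg.norm(diff, axis=1)
            near = dist < influence_radius
            if not near.any():
                continue
            d = dist[near][:, None]
            # classic repulsive-field gradient, summed over the nearby obstacle points
            force = gain * (1.0 / d - 1.0 / influence_radius) * diff[near] / d**3
            step = force.sum(axis=0)
            norm = np.linalg.norm(step)
            if norm > max_step:
                step = step / norm * max_step           # limit the per-iteration move
            path[i] = p + step
    return path

initial = np.column_stack([np.linspace(0.0, 5.0, 26), np.zeros(26)])   # straight centre line
obstacles = np.array([[2.5, 0.12], [2.6, 0.10], [2.4, 0.15]])          # canopy points beside it
print(optimize_path(initial, obstacles)[12:15])   # points near the cluster are pushed away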
Number of references:36
Main heading:Orchards
Controlled terms:Curve fitting - Forestry - Fruits - Hough transforms - Least squares approximations - Mobile robots - Navigation - Navigation systems - Optical radar - Sampling - Statistics - Trees (mathematics)
Uncontrolled terms:Artificial potential field method - Artificial potential fields - Gravitational potential fields - Innovation technology - Mobile Robot Navigation - Quadratic B-spline curves - Robot operating systems (ROS) - Three-dimensional point clouds
Classification code:716.2 Radar Systems and Equipment - 731.5 Robotics - 821.3 Agricultural Methods - 821.4 Agricultural Products - 921 Mathematics - 922.2 Mathematical Statistics
Numerical data indexing:Size 1.56e-01m to 8.63e-01m, Time 5.90e-02s, Time 7.00e-03s
DOI:10.11975/j.issn.1002-6819.2021.09.020
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 24>
Accession number:20213210735494
Title:Fast online method and experiments of autonomous navigation robots for trellis orchard
Title of translation:棚架果园自主导航机器人快速上线方法与试验
Authors:Liu, Jizhan (1, 2); He, Meng (1); Xie, Binbin (1); Peng, Yun (1); Shan, Haiyong (1)
Author affiliation:(1) School of Agricultural Equipment Engineering, Jiangsu University, Zhenjiang; 212013, China; (2) Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Zhenjiang; 212013, China
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:12-21
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Trellis cultivation is a typical fruit tree planting pattern, in which a net-like shelf is formed on the top using cross bars or lead wires supported by columns, and the branches and vines are spread over the shelf. Grapes, pears, and kiwis are all suitable for trellis planting. In this study, taking a trellised vineyard as the research object, a fast online (row-entry) system for autonomous navigation was established as one of the core technologies for orchard robots. The robots need to drive autonomously and quickly when entering the rows of fruit trees or resuming operations partway along a row, and therefore must be able to return to the job line, that is, to go online, independently. Autonomous online capability is widely used to evaluate the performance of robot navigation. Nevertheless, the environment of a trellised orchard seriously obstructs satellite signals: the dense tree canopy and the arrangement of branches and vines on the trellis form a natural shielding layer. This shielding makes satellite positioning navigation unstable, so absolute satellite positioning is not suitable for trellised orchards. As such, the robot needs to autonomously perceive the actual environment and then determine its subsequent pose. However, the trellis-structured orchard mostly contains slender trunks and sparsely planted support stalks in the scaffolding. Most current autonomous navigation of agricultural machinery focuses mainly on the local environmental characteristics of orchards, and the online performance of autonomous navigation, particularly the quality and efficiency of the online maneuver, remains a great challenge. In this study, pose detection based on relative positioning navigation was proposed to realize the rapid online maneuver of robots in the trellised orchard environment, with emphasis on the fusion of the electronic compass and LiDAR headings. The a priori trellis row orientation was input to the controller through the human-machine interface of a touch-sensitive serial screen, and then the electronic compass and LiDAR headings were fused to capture the precise pose of the robot relative to the tree row, according to the dual indicators of pose deviation. Thresholds of body pose and state were used to trigger the online trajectory program, and fast online entry was thus achieved at an optimal online angle. A self-developed grape-harvesting robot was used as the test platform to carry out fast online performance tests in a simulated trellised vineyard. The test results showed that the online time was 6.11, 7.15, 7.46, 7.74, and 8.9 s, and the online distance was 1.357, 1.367, 1.387, 1.383, and 1.403 m, respectively, under a constant speed of 0.3 m/s, an initial lateral deviation of 1.4 m, and initial heading deviations of -π/4, -π/18, 0, π/18, and π/4. Short online time and distance were achieved for the crawler robot in the field-to-row online positioning of the orchard, and the optimal online angle enabled the robot to go online quickly. Consequently, the robot can adjust its pose and go online quickly and stably along the planned path under large initial lateral and heading deviations. Compared with traditional path tracking, the online performance of the autonomous navigation system was improved significantly for the trellised orchard, including less online time and a shorter online distance. 
The findings can provide a reference for unmanned operation in trellised orchards.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
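A hedged sketch of one simple way to combine the two heading sources and the dual deviation indicators described above: the electronic-compass and LiDAR headings are fused by a complementary weighting, and the online-trajectory program is triggered only when both the lateral and the heading deviation fall inside thresholds. Weights, tolerances, and the example values are illustrative assumptions.

import math

def fuse_heading(compass_heading, lidar_heading, w_lidar=0.7):
    # weighted fusion of two heading estimates (radians), robust to angle wrap-around
    diff = math.atan2(math.sin(lidar_heading - compass_heading),
                      math.cos(lidar_heading - compass_heading))
    return compass_heading + w_lidar * diff

def ready_to_go_online(lateral_dev_m, heading_dev_rad, lat_tol=0.05, head_tol=math.radians(3)):
    # dual-indicator check of the pose deviation relative to the tree row
    return abs(lateral_dev_m) < lat_tol and abs(heading_dev_rad) < head_tol

row_heading = math.radians(90)                           # a priori trellis row orientation
fused = fuse_heading(math.radians(84), math.radians(88))
print(round(math.degrees(fused), 1))                     # 86.8
print(ready_to_go_online(0.03, fused - row_heading))     # False: heading still off by ~3.2 deg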
Number of references:29
Main heading:Human robot interaction
Controlled terms:Agricultural robots - Cultivation - Fruits - Navigation systems - Orchards - Reforestation - Satellites - Scaffolds - Shielding - Touch screens - Tracking (position)
Uncontrolled terms:Autonomous navigation - Autonomous navigation systems - Environmental characteristic - Human Machine Interface - On-line performance - Relative positioning - Satellite positioning - Scaffold orientations
Classification code:405.1 Construction Equipment - 655.2 Satellites - 722.2 Computer Peripheral Equipment - 731.5 Robotics - 821.3 Agricultural Methods - 821.4 Agricultural Products
Numerical data indexing:Size 1.40e+00m, Time 8.90e+00s, Velocity 3.00e-01m/s
DOI:10.11975/j.issn.1002-6819.2021.09.002
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 25>
Accession number:20213210735504
Title:Self-supervised pose estimation method for a mobile robot in greenhouse
Title of translation:基于自监督学习的温室移动机器人位姿跟踪
Authors:Zhou, Yuncheng (1); Xu, Tongyu (1); Deng, Hanbing (1); Miao, Teng (1); Wu, Qiong (1)
Author affiliation:(1) College of Information and Electrical Engineering, Shenyang Agricultural University, Shenyang; 110866, China
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:263-274
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Simultaneous localization and mapping (SLAM) plays a vital role in implementing the autonomous navigation of mobile robots in an unknown environment. In particular, visual odometry (VO) is a core component of the localization module in a SLAM system, from which the pose and velocity of a robot can be estimated using computational geometry. Furthermore, learning-based VO has gained great success in jointly estimating camera ego-motion and depth from videos. In this study, a novel self-supervised VO model was proposed to realize the autonomous operation of a mobile robot in a greenhouse. A consistency constraint on temporal depth was also introduced into the learning framework using binocular baseline supervision. Stereo video sequences were selected to train the model, and the pose network after training was used for pose estimation. A pre-test found that stillness between video frames caused the prediction values of the model to shrink. Therefore, a soft mask was used in the photometric re-projection error to remove static regions from the apparent difference measurement, and non-rigid scenes and occlusion were further handled with normalized mask planes. Meanwhile, a new type of star dilated convolution (SDC) was designed, in which the filter extracts image features from a central 3×3 solid kernel and 1-D kernels in eight directions. The computational cost of SDC was thus lower than that of a regular convolution with the same receptive field. Moreover, SDC was superimposed on the spatial dimensions using depth-wise convolutions with different dilation rates, without the need to modify the existing deep learning framework. A convolutional auto-encoder (CAE) with a residual network architecture was constructed using the SDC and the inverse residual module (IRM), serving as the backbone network of the VO model. With the aid of a binocular camera, video sequences were collected in solar greenhouses with tomato as the crop, and a stereo video dataset was constructed to carry out the training and testing experiments. When the static samples of the video sequence were removed from the image apparent difference measurement with the soft mask, the mean relative errors (MREs) of translation and rotation estimation in the model were cut down by 5.06 and 11.05 percentage points, respectively, while the root mean square errors (RMSEs) were reduced by 24.78% and 30.65%, respectively. Once the normalized mask plane was utilized in the model to deal with non-rigid scenes and occlusion, the MREs of translation and rotation estimation were reduced by 4.15 and 3.86 percentage points, respectively. This inferred that both masks significantly improved the accuracy of the model. Meanwhile, the SDC-based IRM (SDC-IRM) reduced the MRE of rotation by 7.54 percentage points with the network parameters unchanged. Since the SDC-IRM structure was significantly effective in reducing the model error, increasing the receptive field was an effective way to improve the accuracy of the model. The RMSE of rotation estimation was reduced by 36.48%, and the mean cumulative rotation error per hundred frames (MCRE) decreased by 54.75%, when the consistency constraint of temporal depth was used in the model, indicating high accuracy and stability of pose estimation. The MRE of rotation estimation was reduced by 7.30 percentage points when the expansion factor of the IRMs was extended. 
The data demonstrated that the increase of the receptive field in the SDC kernel contributed to higher accuracy of rotation estimation. Nevertheless, there was no further obvious improvement of the model when the maximum dilation rate was more than 6. More importantly, the calculation speed reached 56.5 frames per second in the final pose estimation network. The MREs of translation and rotation estimation were 8.29% and 5.71%, respectively. The pose estimation performed better than previous VO models under similar input settings. This finding can provide sound support for the design of navigation systems for mobile robots in a greenhouse.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
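A hedged PyTorch sketch of how a "star dilated convolution" could be assembled, as read from the description above: a solid 3×3 depth-wise kernel for the centre plus depth-wise 3×3 kernels at larger dilation rates, whose eight off-centre taps lie along the eight star directions, summed and followed by a 1×1 channel-mixing convolution. This is an interpretation for illustration, not the authors' exact module.

import torch
import torch.nn as nn

class StarDilatedConv(nn.Module):
    def __init__(self, channels, dilations=(2, 3)):
        super().__init__()
        self.center = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.rays = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d, groups=channels)
            for d in dilations)
        self.mix = nn.Conv2d(channels, channels, 1)      # point-wise channel mixing

    def forward(self, x):
        out = self.center(x)
        for ray in self.rays:
            out = out + ray(x)                           # enlarges the receptive field cheaply
        return self.mix(out)

x = torch.randn(1, 16, 64, 64)
print(StarDilatedConv(16)(x).shape)                      # torch.Size([1, 16, 64, 64])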
Number of references:36
Main heading:Computer vision
Controlled terms:Binoculars - Cameras - Computational geometry - Convolution - Deep learning - Errors - Greenhouses - Machine design - Mobile robots - Navigation - Navigation systems - Network architecture - Rigidity - Rotation - SLAM robotics - Statistical tests - Stereo image processing - Video recording
Uncontrolled terms:Autonomous navigation - Autonomous operations - Consistency constraints - Difference measurements - Mean square root error - Simultaneous localization and mapping - Stereo video sequences - Training and testing
Classification code:601 Mechanical Design - 716.1 Information Theory and Signal Processing - 716.4 Television Systems and Equipment - 723.2 Data Processing and Image Processing - 723.5 Computer Applications - 731.5 Robotics - 741.2 Vision - 741.3 Optical Devices and Systems - 742.2 Photographic Equipment - 821.6 Farm Buildings and Other Structures - 922.2 Mathematical Statistics - 931.1 Mechanics - 951 Materials Science
Numerical data indexing:Percentage 1.11e+01%, Percentage 2.48e+01%, Percentage 3.06e+01%, Percentage 3.65e+01%, Percentage 3.86e+00%, Percentage 4.15e+00%, Percentage 5.06e+00%, Percentage 5.48e+01%, Percentage 5.71e+00%, Percentage 7.30e+00%, Percentage 7.54e+00%, Percentage 8.29e+00%
DOI:10.11975/j.issn.1002-6819.2021.09.030
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 26>
Accession number:20213210735501
Title:Effects of NDVI time series similarity on the mapping accuracy controlled by the total planting area of winter wheat
Title of translation:NDVI时序相似性对冬小麦种植面积总量控制的制图精度影响
Authors:Li, Fangjie (1); Ren, Jianqiang (1); Wu, Shangrong (1); Zhang, Ningdan (1); Zhao, Hongwei (1)
Author affiliation:(1) Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing; 100081, China
Corresponding author:Ren, Jianqiang(renjianqiang@caas.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:127-139
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Generally, there is an inconsistency between the total area of regional crops obtained from remote sensing and the statistical data of crop area, which affects the application of remote sensing-based crop spatial distribution information to a certain extent. To obtain high-accuracy crop spatial distribution information consistent with the statistical data of crop area, a method for extracting and mapping the winter wheat spatial distribution was proposed in this study based on threshold optimization of NDVI time series similarity under control of the regional total planting area, and its accuracy was verified. This study took Wuyi County, Hengshui City, Hebei Province as the study area. Based on Sentinel-2 NDVI data covering the whole growth period of winter wheat, the reference and actual cross-correlation curves were obtained by the Cross Correlogram Spectral Matching (CCSM) algorithm. On this basis, the root mean square error between the two curves was calculated, and a winter wheat extraction model was constructed. Then, using the Shuffled Complex Evolution-University of Arizona (SCE-UA) global optimization algorithm, with the visual interpretation data of the regional winter wheat planting area regarded as the reference for the winter wheat planting area extracted by remote sensing, the optimal threshold in the winter wheat extraction model was obtained. Finally, according to the optimal threshold, winter wheat was extracted by using the winter wheat extraction model. On this basis, a comparative analysis was carried out on the accuracy of winter wheat mapping results extracted from the similarity of the NDVI time series over the whole growth period, and from the similarity and similarity combinations of the NDVI time series at different growth stages, respectively. The results showed that the regional crop mapping results using the similarity of the NDVI time series over the whole growth period were excellent: the total area accuracy was more than 99.99%, and the overall accuracy and Kappa coefficient were 98.08% and 0.96, respectively. It was proved that the method could ensure consistency between the total area of regional crops obtained by remote sensing and the total amount of the control reference data, and a higher recognition accuracy could be obtained. From the crop distribution extraction results based on the similarity and similarity combinations of the NDVI time series at different growth stages, it could be concluded that the NDVI time series from the seedling stage to the tillering stage before winter and from the reviving stage to the jointing stage could be used to obtain high-accuracy crop distribution extraction results, while the accuracy was low when the NDVI time series from the heading stage to the maturity stage were used to extract winter wheat. Moreover, the comprehensive application of the similarity of the NDVI time series at different growth stages was beneficial to the improvement of crop extraction and mapping accuracy to a certain extent. This study could provide a reference for regional high-precision winter wheat mapping, as well as a line of thought for obtaining large-scale, long-term, remote sensing-based regional crop spatial distribution information that is highly consistent with the statistical data of crop area.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. 
All right reserved.
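A hedged sketch of the area-controlled thresholding idea described above: each pixel's NDVI time series is compared with a reference winter-wheat curve by RMSE, and the RMSE threshold is tuned (here with a simple bisection standing in for the SCE-UA optimizer) until the mapped pixel count matches the reference planting area. All data below are synthetic and the names are illustrative.

import numpy as np

def rmse_map(ndvi_stack, reference):
    # ndvi_stack: (T, H, W) NDVI time series; reference: (T,) reference wheat curve
    return np.sqrt(((ndvi_stack - reference[:, None, None]) ** 2).mean(axis=0))

def threshold_for_target_area(rmse, target_pixels, lo=0.0, hi=1.0, iters=40):
    # bisection on the RMSE threshold so the classified pixel count approaches
    # the reference (statistical) planting area expressed in pixels
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (rmse <= mid).sum() < target_pixels:
            lo = mid            # threshold too strict: too few wheat pixels
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
ref = np.array([0.2, 0.4, 0.7, 0.8, 0.5])                # synthetic wheat NDVI curve
stack = ref[:, None, None] + rng.normal(0.0, 0.15, (5, 100, 100))
rmse = rmse_map(stack, ref)
thr = threshold_for_target_area(rmse, target_pixels=6000)
print(thr, (rmse <= thr).sum())                          # pixel count close to 6000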
Number of references:42
Main heading:Crops
Controlled terms:Data mining - Extraction - Global optimization - Mean square error - Remote sensing - Spatial distribution - Time series - Time series analysis
Uncontrolled terms:Cross correlogram spectral matching - Different growth stages - Global optimization algorithm - Remote sensing technology - Root mean square errors - Shuffled Complex Evolution - Threshold optimization - University of Arizona
Classification code:723.2 Data Processing and Image Processing - 802.3 Chemical Operations - 821.4 Agricultural Products - 921 Mathematics - 921.5 Optimization Techniques - 922.2 Mathematical Statistics
Numerical data indexing:Percentage 1.00e+02%, Percentage 9.81e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.015
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 27>
Accession number:20213210735564
Title:Motion planning method and experiments of tomato bunch harvesting manipulator
Title of translation:番茄串收机械臂运动规划方法与试验
Authors:Zhang, Qin (1); Liu, Fengpu (1); Jiang, Xianping (2); Xiong, Zheng (2); Xu, Can (2)
Author affiliation:(1) School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou; 510641, China; (2) Guangdong Institute of Modern Agriculture Equipment, Guangzhou; 510630, China
Corresponding author:Jiang, Xianping(39045644@qq.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:149-156
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">The tomato picking environment is an unstructured space containing many obstacles, such as branches and vines, which is difficult to express accurately in a regular way, particularly given the relatively large volume of tomato bunches. Therefore, the motion planning of the robotic arm in a harvesting manipulator needs to consider how to reach the tomato bunch, avoid obstacles after cutting it, and finally extract it from the complex actual environment. Most previous motion planning for tomato picking focused mainly on obstacle-free movement to the position of the picking stalk, and only a few studies reported the extraction of the fruit, whose effective volume increases after the end-effector of the robotic arm grips the tomato bunch. Taking greenhouse-cultivated tomato bunch picking as the research object, real-time motion planning with a collision-free Optimal Picking Space (OPS) was proposed here using space segmentation. A reasonable and effective space was selected in advance for the robotic arm to carry out the picking task, so as to avoid failures caused by fruit collision or by exceeding the working range of the manipulator. The specific procedure was as follows. 1) Thousands of color pictures with tomato bunches were first collected, and the YOLO-V3 model was trained on them to obtain a good recognition network. An RGB-D camera was then used to capture the color and depth information of the environment. The trained YOLO-V3 model was used to identify and locate the pixel position of the picking point of the tomato bunches in the color map. Next, the internal and external parameters of the camera were combined to determine the three-dimensional position of the picking point of a tomato bunch. An improved density clustering was utilized to focus on the picking area near the picking point of the tomato bunch, while separating the multiple obstacles in the environment. A polynomial function was selected to fit the space curve of branch obstacles falling from top to bottom during tomato bunch picking in the actual situation. The picking space was divided into multiple sub-spaces according to the relative positions of branch obstacles and picking points, and these sub-spaces served as the basis for selecting the optimal picking space. 2) The volume of each sub-space was calculated to check whether it could accommodate the tomato bunch, and invalid narrow sub-spaces were filtered out. Correspondingly, a feasible configuration of the robot arm was obtained for the set of effective sub-spaces, that is, the unfiltered sub-spaces, while invalid sub-spaces outside the working range were also filtered out. An evaluation function was formulated to comprehensively consider the path length and the operational space of the robot arm in the joint space, and the optimal picking sub-space was selected from the remaining effective sub-spaces via this evaluation function. 3) The optimal picking sub-space was used as the guidance space for the path planning of the robot arm. Sensing and execution points were then set for the robot arm, and real-time obstacle-free factors were added into its motion planning, so that the robot arm could rapidly switch between obstacle avoidance and attitude adjustment. Therefore, the OPS was selected to guide the robotic arm in the real-time, obstacle-free, and non-destructive harvesting of tomato bunches. The optimal picking space greatly contributed to the highly efficient action of the manipulator. 
The average picking time of a single tomato bunch in the OPS simulation was 12.51 s, reduced by 31.23% compared with the current mainstream RRT*-connect, by 53.23% compared with Lazy-PRM*, and by 19.29% compared with manual picking. The number of nodes in the OPS path was also reduced, compared with the random expansion nodes of RRT*-connect and Lazy-PRM*. The success rate of motion planning was close to 100%, indicating high-precision, real-time, and rapid intelligent picking.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
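A hedged sketch of the sub-space selection step described above: candidate picking sub-spaces are filtered by volume and reachability, then scored by a weighted combination of joint-space path length and available operating volume, and the best one guides the arm. The Subspace fields, weights, and data are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Subspace:
    volume: float        # m^3 available for the gripped tomato bunch
    reachable: bool      # inside the manipulator working range
    path_length: float   # joint-space path length to reach it (summed radians)

def select_optimal_subspace(subspaces, min_volume=0.002, w_path=1.0, w_volume=0.5):
    candidates = [s for s in subspaces if s.reachable and s.volume >= min_volume]
    if not candidates:
        return None                                  # no collision-free picking space found
    # lower cost means a shorter path and a larger operating volume
    return min(candidates, key=lambda s: w_path * s.path_length - w_volume * s.volume)

spaces = [
    Subspace(volume=0.001, reachable=True,  path_length=1.2),   # too narrow, filtered out
    Subspace(volume=0.004, reachable=False, path_length=0.9),   # outside the working range
    Subspace(volume=0.006, reachable=True,  path_length=2.1),
    Subspace(volume=0.005, reachable=True,  path_length=1.4),
]
print(select_optimal_subspace(spaces))               # the last sub-space wins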
Number of references:30
Main heading:Robot programming
Controlled terms:Cameras - Collision avoidance - Color - End effectors - Fruits - Function evaluation - Harvesting - Industrial manipulators - Manipulators - Motion planning - Robotic arms - Robotics
Uncontrolled terms:Actual environments - Density clustering - Dimensional position - Evaluation function - Motion planning methods - Polynomial functions - Real-time motion planning - Relative positions
Classification code:731.5 Robotics - 731.6 Robot Applications - 741.1 Light/Optics - 742.2 Photographic Equipment - 821.3 Agricultural Methods - 821.4 Agricultural Products - 914.1 Accidents and Accident Prevention - 921.6 Numerical Methods
Numerical data indexing:Percentage 1.00e+02%, Percentage 1.93e+01%, Percentage 3.12e+01%, Percentage 5.32e+01%, Time 1.25e+01s
DOI:10.11975/j.issn.1002-6819.2021.09.017
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 28>
Accession number:20213210735492
Title:Design of the GNSS/INS integrated navigation system for intelligent agricultural machinery
Title of translation:智能农机GNSS/INS组合导航系统设计
Authors:Zhong, Yin (1); Xue, Mengqi (1); Yuan, Hongliang (1)
Author affiliation:(1) College of Electronics and Information Engineering, Tongji University, Shanghai; 201804, China
Corresponding author:Yuan, Hongliang(hyuan@tongji.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:40-46
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">An integrated navigation system aims to deal with autonomous navigation, positioning, motion control, and equipment calibration, generally combining two or more navigation devices on a carrier. In this study, an intelligent integrated navigation system was designed for agricultural machinery using the Global Navigation Satellite System (GNSS) and an Inertial Navigation System (INS), in order to improve the positioning accuracy and reliability of automatic navigation in agricultural machinery. The system obtained the angular velocity and acceleration of the three-axis attitude from the INS, as well as the position and velocity from a high-precision positioning board. A loose-coupling mode and real-time correction of the INS error via a Kalman filter were adopted to obtain the accurate position, velocity, and attitude of the agricultural machinery. Automatic navigation of agricultural machinery reduces labor intensity and costs for better profits, and is one of the key technologies for the development of modern agriculture. Most automatic navigation of agricultural machinery currently uses GNSS and INS for positioning and navigation, but some defects remain a great challenge, such as the loss of GNSS signals and the accumulation of INS errors over time. A control board for intelligent agricultural machinery was also manufactured to integrate GNSS high-precision positioning and inertial measurement, on which the integrated navigation program was implemented. The specific procedure was as follows. 1) Determine the project implementation plan and related devices. 2) Design the integrated navigation system, including the structure and mode of GNSS/INS, where the loose-coupling mode was adopted. 3) Design the hardware circuits for PCB production and develop the software program in the C language under the Keil5 IDE. 4) Test the positioning and navigation performance of the system in actual farmland, and further verify the accuracy and stability of the positioning and navigation system. In addition, a test platform based on DF1004-2 intelligent agricultural machinery was established to perform tests in the Beidou field under static and linear-motion conditions, and a comparison was made between single GNSS and integrated navigation. The test results showed that there was little difference in performance between single GNSS and integrated navigation when the agricultural machinery was stationary: the positioning error was less than 1 cm, and the attitude angle error was less than 0.1°. The system simultaneously output single GNSS and GNSS/INS integrated navigation information when the agricultural machinery was driven along a preset straight path at a speed of 2 m/s. The position error of single GNSS navigation was less than 6 cm, and the attitude angle error was less than 1°, whereas the position error of GNSS/INS integrated navigation was less than 3 cm, and the attitude angle error was less than 0.5°. Only a limited gain was provided by the inertial measurement under stationary conditions, due to the zero moving speed, in line with the theory of dead reckoning. In the case of movement, the GNSS/INS integrated navigation presented greater accuracy than single GNSS navigation. The findings can provide strong support for high-precision automatic navigation control of machines in intelligent agriculture.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
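A hedged, deliberately simplified single-axis illustration of the loose coupling described above: the INS acceleration propagates a position/velocity state at a high rate, and each GNSS position fix corrects it through a Kalman update. The noise values and the one-dimensional model are illustrative assumptions; the paper's filter is multi-axis and also estimates attitude.

import numpy as np

dt = 0.02                                   # INS rate of 50 Hz
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition for [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])         # control input: measured acceleration
H = np.array([[1.0, 0.0]])                  # GNSS observes position only
Q = np.diag([1e-4, 1e-3])                   # process noise covering INS errors
R = np.array([[0.03**2]])                   # GNSS position noise, about 3 cm std

x = np.zeros((2, 1))
P = np.eye(2)

def ins_predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def gnss_update(x, P, z):
    y = np.array([[z]]) - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# propagate with the INS at every step, correct with a GNSS fix once per second
for step in range(1, 251):
    x, P = ins_predict(x, P, accel=0.1)                  # constant 0.1 m/s^2 test input
    if step % 50 == 0:
        true_pos = 0.5 * 0.1 * (step * dt) ** 2          # ideal position under that input
        x, P = gnss_update(x, P, z=true_pos + np.random.normal(0.0, 0.03))
print(x.ravel())                                         # estimated [position, velocity]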
Number of references:30
Main heading:Global positioning system
Controlled terms:Agricultural machinery - Agricultural robots - Agriculture - Air navigation - Errors - Inertial navigation systems - Machine design - Radio navigation - Software testing
Uncontrolled terms:Global Navigation Satellite Systems - High precision positioning - Inertial measurements - Inertial navigation systems (INS) - Integrated navigation systems - Positioning and navigation systems - Project implementation plans - Real-time corrections
Classification code:431.5 Air Navigation and Traffic Control - 601 Mechanical Design - 716.3 Radio Systems and Equipment - 723.5 Computer Applications - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 821.1 Agricultural Machinery and Equipment
Numerical data indexing:Size 1.00e-02m, Size 3.00e-02m, Size 6.00e-02m, Velocity 2.00e+00m/s
DOI:10.11975/j.issn.1002-6819.2021.09.005
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 29>
Accession number:20213210735472
Title:Optimization and experimental verification of grain yield monitoring system based on pressure sensors
Title of translation:压力式谷物产量监测系统优化与试验验证
Authors:Geng, Duanyang (1); Tan, Delei (1); Su, Guoliang (1); Wang, Zongyuan (1); Wang, Zhiwei (1); Ji, Xiaoqi (1)
Author affiliation:(1) School of Agricultural Engineering and Food Science, Shandong University of Technology, Zibo; 255000, China
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:245-252
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Grain yield distribution is one of the key pieces of information in digital agriculture, and its effective acquisition is of great significance for the grain harvesting process. At present, domestic and foreign researchers have designed different types of grain yield monitoring systems using a variety of means, but due to measurement accuracy, model matching, and other factors, these systems have not been effectively applied to actual production in China. This study designed an online monitoring system for grain yield based on the principle of grain flow pressure. In this paper, the mathematical model relating grain yield to grain flow pressure was established, and the overall structure of the pressure-type grain yield monitoring system was determined. The monitoring system was mainly composed of a grain flow monitoring device, a positioning device, a cutting table height control switch, a core processor, and a human-computer interaction device. Because the system had the functions of sensor signal acquisition and processing, and data display and storage, it realized real-time measurement, display, and storage of grain yield during grain harvesting. Based on the monitoring mathematical model between grain yield and grain flow pressure, a testbed was set up to simulate the actual operation of the end conveying auger of the grain collecting lifter of a grain combine harvester. The testbed was mainly composed of the grain flow monitoring device, an auger, a feeding box, an insert plate, a three-phase alternating current motor, a reducer, a stage, and other parts. The Box-Behnken experimental design method was used on the testbed to optimize the structural parameters of the grain flow monitoring device. The influences of the number of sensors, the installation position of the sensors, and the horizontal inclination angle of the monitoring device on the error of the grain yield monitoring system were studied. The optimal parameter combination was determined as follows: the number of sensors was 5, the sensor installation position was 0.24 cm, and the horizontal inclination angle of the monitoring device was 5°. A verification test was carried out under the optimal working parameters. The experimental results showed that the measurement error of the grain yield monitoring system was 3.27%, which met the precision requirement of grain yield monitoring. In addition, the grain yield monitoring system was applied to field harvesting to verify its actual yield monitoring performance. The field experimental results showed that the error of the field yield measurement was 5.28%. The grain yield monitoring data of the field experiment were filtered and interpolated, and a yield distribution map was finally generated. The yield distribution map could provide a decision basis for subsequent variable-rate sowing and fertilizer management. It can be concluded that the grain yield monitoring system had good versatility, convenient installation, and high monitoring accuracy. This study could meet the urgent need for grain yield monitoring in actual production, and has important practical significance for realizing intelligent and digital agriculture.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
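A hedged sketch of how a pressure-type flow signal is commonly turned into a yield-map point: the sensor force is converted to mass flow through a calibration factor, the mass flow is integrated over time, and the mass is divided by the harvested area computed from ground speed and header width. The calibration factor and data are made up and are not the paper's model.

def yield_point(pressure_readings_N, dt_s, speed_m_s, header_width_m, k_kg_per_N_s=0.4):
    # returns (grain mass in kg, harvested area in ha, yield in kg/ha)
    mass_kg = sum(k_kg_per_N_s * f * dt_s for f in pressure_readings_N)
    area_ha = speed_m_s * dt_s * len(pressure_readings_N) * header_width_m / 10000.0
    return mass_kg, area_ha, mass_kg / area_ha

mass, area, y = yield_point(
    pressure_readings_N=[4.1, 4.3, 4.0, 4.2] * 25,       # 100 samples of grain flow pressure
    dt_s=0.1, speed_m_s=1.2, header_width_m=2.0)
print(f"{mass:.1f} kg over {area:.4f} ha -> {y:.0f} kg/ha")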
Number of references:23
Main heading:Monitoring
Controlled terms:AC motors - Agricultural robots - Augers - Computer control systems - Cutting - Data handling - Digital storage - Display devices - Errors - Grain (agricultural product) - Harvesting - Human computer interaction - Signal processing - Testbeds
Uncontrolled terms:Alternating current motors - Box-Behnken experimental design - Experimental verification - Fertilizer management - Installation position - On-line monitoring system - Optimal parameter combinations - Real time measurements
Classification code:502.2 Mine and Quarry Equipment - 705.3.1 AC Motors - 716.1 Information Theory and Signal Processing - 722.1 Data Storage, Equipment and Techniques - 722.2 Computer Peripheral Equipment - 723.2 Data Processing and Image Processing - 723.5 Computer Applications - 821.3 Agricultural Methods - 821.4 Agricultural Products
Numerical data indexing:Percentage 3.27e+00%, Percentage 5.28e+00%, Size 2.40e-03m
DOI:10.11975/j.issn.1002-6819.2021.09.028
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 30>
Accession number:20213210735508
Title:Design and application of the automatic precision feeding system of pond aquaculture
Title of translation:池塘养殖全自动精准投饲系统设计与应用
Authors:Tang, Rong (1); Shen, Yi (2); Xu, Peng (3); Yang, Jiapeng (1); Liu, Yabing (2); Liu, Xingguo (1)
Author affiliation:(1) Key Laboratory of Fishery Equipment and Engineering, Ministry of Agriculture and Rural Affairs, Fishery Machinery and Instrument Research Institute, Chinese Academy of Fishery Sciences, Shanghai; 200092, China; (2) Guangming Fishery Co., Ltd., Yancheng; 224153, China; (3) Zhongshan Chengyi Fishery Equipment Technology Co., Ltd., Zhongshan; 528441, China
Corresponding author:Liu, Xingguo(liuxg1223@163.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:289-296
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">The current transportation of feed, with its high labor intensity, has posed a great challenge to timely aquaculture production, particularly in pond culture. Furthermore, the quantity of feed delivered per run cannot be precisely controlled in the previous generation of widely used feeding machines, which have no weighing device. In addition, the lack of a digital control device has made the feeding system difficult to integrate into the management system of precision agriculture. Manual on-site operation of feeding machines is not feasible for large-scale farms with a large number of ponds. In this study, a fully automatic precise feeding system was designed to improve mechanization, automatic control, and information management in digital agriculture using cloud analytics. Four subsystems were proposed, covering mechanized operation, accurate measurement, automatic control, and digital integrated management, according to the functional requirements of large-scale pond aquaculture production. The system structure was based on the concept of local control with management deployed in the cloud. Fully mechanized operation was realized using a large-capacity silo and a pneumatic conveyor to throw the feed; the pneumatic conveyor replaced the previous screw conveyor for carrying the feed from the bulk truck to the silo. Three load cells were mounted at the bottom of the support pillars to measure the weight of the silo, and the remaining feed quantity was obtained by subtracting the weight of the fixed mechanical parts from the total weight of the silo. The feed was first loaded into the delivery pipe through the unloader and then blown to the inlet of the spreader by the air flow generated by the blower. A high-speed spreader was utilized to produce the centrifugal force that spread the feed far away. A programmable logic controller (PLC) was designed to accurately adjust the feed quantity of each feeding task according to the preset parameters. The specific feed weight was also collected in real time from the weighing unit, so the feeding machine operated as a fully automated system without manual operation. A local control center was constructed to realize the integrated management of feeding tasks, and the accurate docking of the control system with the information management system was used to implement feeding control and production management. The feeding process was visualized with video monitoring equipment to serve as an efficient management tool. The performance of the system was finally tested to confirm that the design goals were achieved. A demonstration base of fully automatic feeding and breeding was constructed, with an area of 800 hectares in a large-scale breeding farm, including 90 feeding machines, 90 control cabinets, 5 zone control centers, and one management platform. Unmanned feeding was basically realized using the present system. The labor intensity was reduced by 70%, compared with the conventional feeding mode with small feeding machines and manual operation. Real-time monitoring of the feeding quantity was also realized in each pond for precise feeding, and the feed consumption was reduced by 3%, indicating excellent performance. This finding can provide a promising application of mechanized operation and automatic control systems in large-scale pond culture.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. 
All right reserved.
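A hedged sketch of the weighing-based quantity control described above: the remaining feed equals the silo gross weight (sum of the three load cells) minus the fixed mechanical tare, and a feeding run stops once the delivered mass reaches the preset quantity. The class and field names are illustrative assumptions.

class FeedingTask:
    def __init__(self, preset_kg, tare_kg):
        self.preset_kg = preset_kg
        self.tare_kg = tare_kg          # weight of the silo structure and fixed parts
        self.start_feed_kg = None

    def remaining_feed(self, load_cells_kg):
        return sum(load_cells_kg) - self.tare_kg

    def should_stop(self, load_cells_kg):
        feed = self.remaining_feed(load_cells_kg)
        if self.start_feed_kg is None:
            self.start_feed_kg = feed               # snapshot taken at the start of the task
        delivered = self.start_feed_kg - feed
        return delivered >= self.preset_kg

task = FeedingTask(preset_kg=25.0, tare_kg=300.0)
print(task.should_stop([210.0, 205.0, 215.0]))      # 330 kg of feed at start -> False
print(task.should_stop([202.0, 197.0, 207.0]))      # 24 kg delivered -> False
print(task.should_stop([201.0, 196.0, 206.0]))      # 27 kg delivered -> True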
Number of references:33
Main heading:Information management
Controlled terms:Agricultural robots - Agriculture - Automation - Conveyors - Digital control systems - Feeding - Fish ponds - Lakes - Process control - Spreaders - Weighing
Uncontrolled terms:Accurate measurement - Design and application - Efficient managements - Functional requirement - Information management systems - Integrated management - Production management - Programmable logic controllers (PLC)
Classification code:691.2 Materials Handling Methods - 692.1 Conveyors - 731 Automatic Control Principles and Applications - 731.1 Control Systems - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 821.3 Agricultural Methods - 943.3 Special Purpose Instruments
Numerical data indexing:Area 8.00e+06m2, Percentage 3.00e+00%, Percentage 7.00e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.033
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 31>
Accession number:20213210735512
Title:Design and experiments of the binocular visual obstacle perception system for agricultural vehicles
Title of translation:农业车辆双目视觉障碍物感知系统设计与试验
Authors:Wei, Jiansheng (1); Pan, Shuguo (1); Tian, Guangzhao (2); Gao, Wang (1); Sun, Yingchun (1)
Author affiliation:(1) School of Instrument Science and Engineering, Southeast University, Nanjing; 210096, China; (2) College of Engineering, Nanjing Agricultural University, Nanjing; 210031, China
Corresponding author:Pan, Shuguo(psg@seu.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:55-63
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Machine learning was incorporated to design a visual perception system for obstacle-free path planning in agricultural vehicles. The system aims to ensure the safety and reliability of intelligent agricultural vehicles in the process of autonomous navigation. The system mainly consisted of hardware and software. The hardware consisted of the visual perception and navigation control modules. Since the visual perception task required real-time image processing, the embedded AI computer Jetson TX2 was adopted as the computing core. A deep Convolutional Neural Network (CNN) was used to identify agricultural obstacles. The complex structure and uneven illumination of the agricultural environment were taken into account, thereby enhancing the stability of object detection. The CNN extracted environmental features much better than traditional detection using hand-designed features, and better detection was achieved by continuously learning features for the current task from a large-scale dataset. An improved YOLOv3 was utilized to integrate object detection with the simultaneous output of all information, including category, location, and depth estimation. A binocular camera was used to capture the left and right images, both of which were first input into the improved YOLOv3 model for object detection. The outputs of the improved YOLOv3 model were used for object matching to complete obstacle recognition, where the correspondence of obstacles between the left and right images was determined. The locations of the matched objects were then used to calculate the parallax of each obstacle between the left and right images. Finally, the parallax of the obstacle was input into the binocular imaging model for depth estimation. The accuracy of depth estimation was improved with the increase of the model's sensitivity to the X-axis of the images. The mean error, mean error ratio, and mean square error of depth estimation were greatly improved, compared with the original YOLOv3 and the HOG+SVM model. The experimental results showed that the embedded AI computer processed images in real time while ensuring the detection accuracy of the improved YOLOv3 model. In object detection, highly accurate identification was achieved for the agricultural obstacles, with an average accuracy rate of 89.54% and a recall rate of 90.18%. For the first kind of obstacle, the mean error and mean error ratio of the improved YOLOv3 model were 38.92% and 37.23% lower than those of the original one, and 53.44% and 53.14% lower than those of the HOG+SVM model, respectively. For the second kind of obstacle, the mean error and mean error ratio of the improved YOLOv3 model were 26.47% and 26.12% lower than those of the original one, and 41.9% and 41.73% lower than those of the HOG+SVM model, respectively. For the third kind of obstacle, the mean error and mean error ratio of the improved YOLOv3 model were 25.69% and 25.65% lower than those of the original one, and 43.14% and 43.01% lower than those of the HOG+SVM model, respectively. In addition, there was no obvious change in the mean error, mean error ratio, or mean square error of the three models when the distance between obstacle and vehicle changed. The average error ratio of the depth estimation of obstacles under the dynamic scenario was 4.66%, and the average time was 0.573 s. 
An electrically controlled hydraulic steering system was also actuated in time for obstacle avoidance when a depth warning was issued. The findings can provide an effective basis for environment perception in the autonomous navigation of agricultural vehicles. In follow-up research, the more lightweight YOLOv3-tiny model and the Xavier terminal processor with higher computing power can be selected for depth estimation, aiming to increase the real-time inference speed of the visual perception system in modern agriculture.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All right reserved.
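A hedged sketch of the binocular depth step described above: left and right detections of the same obstacle are matched (here simply by class and vertical overlap), the horizontal parallax of the box centres gives the disparity, and the depth follows from the pin-hole stereo relation Z = f·B/d. The camera parameters and boxes are illustrative assumptions.

def box_center(box):
    x1, y1, x2, y2 = box
    return (0.5 * (x1 + x2), 0.5 * (y1 + y2))

def match_and_estimate_depth(left_dets, right_dets, focal_px, baseline_m):
    # each detection: (class_name, (x1, y1, x2, y2)); returns [(class_name, depth_m)]
    results = []
    for cls_l, box_l in left_dets:
        cx_l, cy_l = box_center(box_l)
        candidates = [(cls_r, box_r) for cls_r, box_r in right_dets
                      if cls_r == cls_l and abs(box_center(box_r)[1] - cy_l) < 20]
        if not candidates:
            continue
        # among same-class candidates, take the one with the largest positive disparity
        _, box_r = max(candidates, key=lambda c: cx_l - box_center(c[1])[0])
        disparity = cx_l - box_center(box_r)[0]          # pixels
        if disparity > 0:
            results.append((cls_l, focal_px * baseline_m / disparity))
    return results

left = [("person", (400, 200, 460, 380))]
right = [("person", (368, 202, 428, 382))]
print(match_and_estimate_depth(left, right, focal_px=800, baseline_m=0.12))
# disparity of 32 px -> depth = 800 * 0.12 / 32 = 3.0 m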
Number of references:30
Main heading:Image enhancement
Controlled terms:Agricultural robots - Agriculture - Binoculars - Computer hardware - Convolutional neural networks - Deep neural networks - Errors - Feature extraction - Geometrical optics - Large dataset - Mean square error - Navigation - Object detection - Object recognition - Support vector machines - Vehicles - Vision - Visual servoing
Uncontrolled terms:Agricultural environments - Agricultural vehicles - Autonomous navigation - Environment perceptions - Environmental features - Hardware and software - Real-time image processing - Uneven illuminations
Classification code:722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 741.1 Light/Optics - 741.3 Optical Devices and Systems - 821 Agricultural Equipment and Methods; Vegetation and Pest Control - 922.2 Mathematical Statistics
Numerical data indexing:Percentage 2.56e+01%, Percentage 2.57e+01%, Percentage 2.61e+01%, Percentage 2.65e+01%, Percentage 3.72e+01%, Percentage 3.89e+01%, Percentage 4.17e+01%, Percentage 4.19e+01%, Percentage 4.30e+01%, Percentage 4.31e+01%, Percentage 4.66e+00%, Percentage 5.31e+01%, Percentage 5.34e+01%, Percentage 8.95e+01%, Percentage 9.02e+01%, Time 5.73e-01s
DOI:10.11975/j.issn.1002-6819.2021.09.007
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 32>
Accession number:20213210735507
Title:Method for detecting rice flowering spikelets using visible light images
Title of translation:基于可见光图像的水稻颖花开花状态检测方法
Authors:Zhang, Yali (1, 2); Xiao, Wenwei (1, 2); Lu, Xiaoyang (1, 2); Liu, Aimin (3); Qi, Yuan (1, 2); Liu, Hanchao (1, 2); Shi, Zekun (4); Lan, Yubin (2, 5, 6)
Author affiliation:(1) College of Engineering, South China Agricultural University, Guangzhou; 510642, China; (2) National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou; 510642, China; (3) Yuan Longping Agricultural High-tech Co., Ltd., Changsha; 410125, China; (4) College of Plant Protection, Hainan University, Haikou; 570228, China; (5) College of Electronic Engineering, South China Agricultural University, Guangzhou; 510642, China; (6) College of Artificial Intelligence, South China Agricultural University, Guangzhou; 510642, China
Corresponding author:Lan, Yubin(ylan@scau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:253-262
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Rice flowering spikelets generally bloom at 10:00-12:00, especially when the temperature is 24-35 ℃ and the relative humidity is 70%-90%. The flowering time therefore needs to be accurately determined for timely pollination in hybrid rice seed production. In this study, images were captured by a visible light camera for two flowering characteristics: the opening of the spikelet hull and the emergence of the spikelet anthers. Series Otsu (SOtsu) thresholding was applied in tandem to extract the spikelet anthers from the blue channel of the visible light images. An attempt was made to detect the flowering status of rice glumes using visible images, in order to meet the needs of hybrid rice seed pollination. A Canon single-lens reflex (SLR) camera was adopted for data acquisition, which benefited image segmentation with the tandem SOtsu. Deep learning models, namely FasterRCNN and YOLO-v3, were used to identify the spikelet anthers and the opening spikelet hulls. The precision, recall, and F1 coefficient of the different models were compared to select the most suitable method for flowering-characteristic detection. Two datasets of visible light images of spikelets were built (15 cm and 45 cm imaging distance), each covering the two characteristics. Labeling software was applied to annotate the category and position in the images, and a sample database was thus established for training the detection models with deep learning. The performance of the three models, SOtsu, FasterRCNN, and YOLO-v3, was evaluated, and the detection was verified from multiple angles. An experiment on model robustness was also conducted. In the SOtsu, the maximum inter-class variance was used to separate the foreground (rice) from the background in the grayscale image of the B channel, and the grayscale of the background was set to zero. The maximum inter-class variance was then applied independently within the pixel range of the extracted region, so that the spikelet anthers were further separated from the spikelet hulls. The original gray values of the spikelet anthers were retained, while the gray values of the spikelet hulls were set to zero. Finally, the extraction was evaluated by combining it with the original images, and the number of connected areas was calculated from the eight-connected output. The results showed that the precision, recall, F1 coefficient, and Pearson correlation coefficient of the FasterRCNN model in spikelet hull detection were 1, 0.97, 0.98, and 0.993, respectively, while those of the SOtsu in spikelet anther detection were 0.92, 0.93, 0.93, and 0.936, respectively. This indicated that both the SOtsu and FasterRCNN models were capable of rice flowering detection, but the opening spikelet hull was more suitable than the spikelet anthers as the flowering feature for detection with a deep learning model. The robustness tests showed that the FasterRCNN model achieved the highest stability, identifying the spikelet flowering status with high precision under low, high, and uneven light conditions. In addition, the spikelet anthers that opened on a given day split and shed pollen within 3-5 min and withered on the same day. The withered, pollen-free anthers had no recognition significance, so it was also necessary to verify the ability of the detection models to distinguish withered anthers, in order to avoid misidentifying them.
The SOtsu performed well in segmenting the withered spikelet anthers by their gray values, and it was better than FasterRCNN at identifying withered spikelets. Correspondingly, the SOtsu could serve as a substitute for the FasterRCNN model in detecting flowering spikelets before model construction is completed, in order to ensure the continuity of rice flowering spikelet detection. The influencing factors of recognition were reduced to keep the detection process under control. Since the segmentation relied on morphological opening operations, there were also some limitations in recognizing overlapping anthers. A further study can follow with a more in-depth exploration of high-throughput detection.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
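As a rough illustration of the tandem Otsu idea described above (not the authors' exact SOtsu pipeline), the sketch below applies Otsu thresholding once on the blue-channel grayscale to separate rice from background, applies it again only on the foreground pixel values to isolate the brighter anther pixels, and counts eight-connected regions. The channel choice and the assumption that anthers are the brighter sub-population are simplifications.

```python
# Rough sketch of a two-stage ("series") Otsu segmentation on the blue channel,
# followed by eight-connected component counting. Illustrative only.
import cv2
import numpy as np
from skimage.filters import threshold_otsu

def series_otsu_anthers(image_bgr):
    blue = image_bgr[:, :, 0]  # OpenCV stores images as BGR

    # Stage 1: Otsu on the whole blue channel separates rice (foreground) from background.
    _, fg_mask = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Stage 2: Otsu restricted to foreground pixel values; assumes anthers are the
    # brighter sub-population within the rice region.
    fg_values = blue[fg_mask > 0]
    t2 = threshold_otsu(fg_values)
    anther_mask = np.where((fg_mask > 0) & (blue > t2), 255, 0).astype(np.uint8)

    # Count eight-connected anther regions (label 0 is the background).
    num_labels, _ = cv2.connectedComponents(anther_mask, connectivity=8)
    return anther_mask, num_labels - 1

# Usage:
# img = cv2.imread("spikelet.jpg")
# mask, n_anthers = series_otsu_anthers(img)
```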
Number of references:35
Main heading:Learning systems
Controlled terms:Cameras - Correlation methods - Data acquisition - Deep learning - Image segmentation - Light
Uncontrolled terms:Classification ability - Features detections - High-throughput detection - Model construction - Morphological opening - Pearson correlation coefficients - Single lens reflexes - Visible light images
Classification code:723.2 Data Processing and Image Processing - 741.1 Light/Optics - 742.2 Photographic Equipment - 922.2 Mathematical Statistics
Numerical data indexing:Percentage 7.00e+01% to 9.00e+01%, Size 1.50e-01m, Size 4.50e-01m, Time 1.80e+02s to 3.00e+02s
DOI:10.11975/j.issn.1002-6819.2021.09.029
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 33>
Accession number:20213210735523
Title:Recognition and the optimal picking point location of grape stems based on deep learning
Title of translation:基于深度学习的葡萄果梗识别与最优采摘定位
Authors:Ning, Zhengtong (1); Luo, Lufeng (1); Liao, Jiaxin (1); Wen, Hanjin (1); Wei, Huiling (1); Lu, Qinghua (1)
Author affiliation:(1) School of Mechatronics Engineering and Automation, Foshan University, Foshan; 528000, China
Corresponding author:Luo, Lufeng(luolufeng617@163.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:222-229
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Automatic recognition, segmentation, and location of the picking points on grape stems are important for the picking operation of grape-picking robots. In an actual orchard scene, it is extremely difficult to accurately identify and segment grape stems and then locate the picking point, due to the strong similarity between the stems and their surrounding environment, as well as conditions such as weather, light, and occlusion. This poses big challenges for grape-picking robots performing picking operations. Recognition and optimal picking point location of grape stems based on deep learning were therefore proposed in this study. Considering that the shape and color of small grape stems change gradually, a Mask Region-based Convolutional Neural Network (Mask R-CNN) instance segmentation model was optimized. The model was divided into three modules: the backbone, the region proposal network, and three branches. The backbone network obtained feature maps at different levels; the region proposal network found regions containing grape stems; and the three-branch network produced the classification, bounding-box regression, and mask calculation of the grape stems. Pixel-level recognition and segmentation of the grape stems were obtained through the trained model, which returned the category and position of each stem. To improve the segmentation of grape stems, the study adopted the idea of color threshold segmentation: the HSV (Hue, Saturation, Value) color space of each grape stem in the segmentation result was analyzed in segments, and the average HSV color components of each segment were taken as the benchmark color threshold of the stem in that segment. Based on this threshold, an improved region growth algorithm was introduced to automatically adjust and optimize the shape of the segmented grape stem. From this optimized shape, the centroid of the grape stem was calculated, the picking area was determined by the two horizontal sides of the grape stem closest to the centroid point, and the midpoint of this area was taken as the picking point. The approach was as follows. Grape stem regions of the training samples were manually labeled; 600 images were selected as the training set and 100 images as the verification set. In addition, the training set was augmented by rotation, mirroring, blurring, and exposure operations to improve the generalization ability of the model, generating a total of 3 000 training images. All of the above measures contributed to the optimization of the grape stem recognition and segmentation network based on the Mask R-CNN. The improved region growth algorithm was used to finely adjust the results of the multi-segment grape stem segmentation, and the picking point was obtained from the relationship between the centroid and the contour of the grape stem. The performance of the method under different weather and illumination conditions was verified. The detection accuracy and the location rate of the optimal picking point were used to evaluate the models, and the detection effects before and after model optimization were compared. Experimental results showed that the detection accuracy of the optimized model reached 88%, and the detection time was reduced by 24% compared with the model before optimization.
The success rate of picking point location with the proposed method was 81.58%, and 99.43% of the calculated picking points fell within the manually set optimal picking range. The results showed that the proposed method recognized multiple types of grape stems under different weather and light conditions, and the segmentation of the grape stem regions was also satisfactory. The method was capable of locating the picking point quickly and is suitable for grape-picking robots performing picking operations in orchards with complex environments. In this study, a deep learning model was applied for the first time to the recognition and segmentation of grape stems, offering an approach for grape-picking robots to pick grapes efficiently and intelligently in the natural environment.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
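To illustrate the final localization step (computing the stem centroid, then taking the midpoint of the horizontal stem cross-section nearest the centroid), here is a minimal sketch operating on a binary stem mask. It is a simplified reading of the procedure described above, not the authors' code; the mask format and nearest-row heuristic are assumptions.

```python
# Minimal sketch: pick the midpoint of the horizontal stem cross-section that is
# closest to the stem centroid in a binary mask (255 = stem, 0 = background).
import cv2
import numpy as np

def picking_point_from_mask(stem_mask):
    moments = cv2.moments(stem_mask, binaryImage=True)
    if moments["m00"] == 0:
        return None  # empty mask
    cy = moments["m01"] / moments["m00"]  # centroid row

    # Take the stem-containing row whose vertical distance to the centroid is smallest.
    rows_with_stem = np.where(stem_mask.any(axis=1))[0]
    best_row = rows_with_stem[np.argmin(np.abs(rows_with_stem - cy))]

    cols = np.where(stem_mask[best_row] > 0)[0]
    left_x, right_x = cols.min(), cols.max()            # the two horizontal sides
    return (int((left_x + right_x) / 2), int(best_row))  # (x, y) picking point

# Usage:
# mask = (instance_mask_from_mask_rcnn * 255).astype("uint8")
# point = picking_point_from_mask(mask)
```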
Number of references:29
Main heading:Deep learning
Controlled terms:Color - Convolutional neural networks - Location - Orchards - Robots
Uncontrolled terms:Automatic recognition - Color threshold segmentations - Complex environments - Generalization ability - Natural environments - Segmentation models - Segmentation results - Surrounding environment
Classification code:731.5 Robotics - 741.1 Light/Optics - 821.3 Agricultural Methods
Numerical data indexing:Percentage 2.40e+01%, Percentage 8.16e+01%, Percentage 8.80e+01%, Percentage 9.94e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.025
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 34>
Accession number:20213210735563
Title:Recognition and segmentation of maize seedlings in field based on dual attention semantic segmentation network
Title of translation:基于双注意力语义分割网络的田间苗期玉米识别与分割
Authors:Wang, Can (1); Wu, Xinhui (1); Zhang, Yanqing (1); Wang, Wenjun (1)
Author affiliation:(1) College of Agricultural Engineering, Shanxi Agricultural University, Taigu; 030801, China
Corresponding author:Wu, Xinhui(wuxinhui0321@163.com)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:211-221
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">Weed control is an inevitable and necessary task in field management, so effective recognition of crops and weeds is an essential basis for the development of intelligent weeding equipment. Nevertheless, apart from the crop, the recognition targets in the images are not fixed, due mainly to the variety of weed species and the random distribution of their positions. Detecting the crop against all categories of weeds in an image demands high recognition performance, and all weed targets would have to be labeled in a dataset covering every weed species. In practice, however, human vision can only identify the target crop among the weeds, while the species and quantity of the weeds remain unidentified. Moreover, crops and weeds usually overlap in field images with complex scenes, which makes it difficult to accurately segment the boundaries of the various objects, especially when the generated anchor boxes overlap each other over large areas. In this study, recognition and semantic segmentation of maize at the seedling stage were proposed to identify weeds on the premise of maize recognition using a dual attention network, and fine segmentation of the morphological boundary was obtained. The main contents were as follows. 1) The original architecture of the model was determined by comparing 6 state-of-the-art semantic segmentation networks. The dual attention network architecture presented the best performance on the training, validation, and testing datasets, thereby realizing pixel-wise recognition and segmentation of maize field images. On the validation set, the mean intersection over union (mIoU) and mean pixel accuracy (mPA) at the end of the iterations were 92.73% and 96.88%, respectively. On the test set, the mIoU and mPA were 92.8% and 94.66%, respectively, and the segmentation speed was 15.2 frames/s. 2) The semantic segmentation model of maize at the seedling stage was established using the improved network architecture. The model performed a binary classification of maize pixels versus all other pixels, suitable for the recognition and morphological segmentation of maize in complex field scenes at the seedling stage. An improved backbone was used to enhance the feature representation, retaining more feature details while reducing the amount of computation. Recurrent criss-cross and channel attention modules were combined into a dual attention mechanism, in order to synchronously construct long-range contextual dependencies in the spatial and channel dimensions of the feature map, which significantly improved the discriminability of the feature representation. The model was built with an encoder-decoder structure, and an auxiliary head was attached to optimize the underlying features. The loss function was improved, and a transfer learning strategy was formulated. 3) The segmentation map of the weeds was obtained via image morphological processing of the maize segmentation map at the seedling stage, so the weed regions were identified from the maize segmentation without pixel-wise prediction of the weed regions. The results showed that the model outperformed the original network throughout the training process. At the end of the iterations, the mIoU and mPA were 93.98% and 97.48%, increases of 1.35% and 0.62%, respectively, compared with the original network.
There was an obvious increase in the accuracy of region segmentation, the accuracy of pixel recognition, and the segmentation speed, indicating better comprehensive performance of the model. The mIoU and mPA on the test set were 94.16% and 95.68%, exceeding the baseline by 1.47% and 1.08%, respectively. The segmentation speed reached 15.9 frames/s, an increase of 4.61% over the original network. The findings can provide a promising reference for the development of intelligent weeding equipment that accurately recognizes and segments maize and weeds at the seedling stage in complex field scenes.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
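For reference, the two reported metrics (mIoU and mPA) can be computed from a per-class confusion matrix as sketched below. This is a generic implementation of the standard definitions, not code from the study; the label-map format and class count are assumptions.

```python
# Generic computation of mIoU and mPA from predicted and ground-truth label maps.
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """num_classes x num_classes matrix; rows = ground truth, cols = prediction."""
    valid = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_mpa(pred, gt, num_classes=2):
    cm = confusion_matrix(pred, gt, num_classes)
    tp = np.diag(cm)
    iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp + 1e-10)  # per-class IoU
    pixel_acc = tp / (cm.sum(axis=1) + 1e-10)                   # per-class pixel accuracy
    return iou.mean(), pixel_acc.mean()

# Example with a binary maize/background segmentation:
# miou, mpa = miou_mpa(pred_label_map, gt_label_map, num_classes=2)
```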
Number of references:42
Main heading:Image segmentation
Controlled terms:Complex networks - Crops - Network architecture - Pixels - Recurrent neural networks - Semantic Web - Semantics - Statistical tests - Transfer learning - Weed control
Uncontrolled terms:Attention mechanisms - Binary classification - Comprehensive performance - Feature representation - Morphological processing - Morphological segmentation - Region segmentation - Semantic segmentation
Classification code:722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications - 821.4 Agricultural Products - 903 Information Science - 922.2 Mathematical Statistics
Numerical data indexing:Percentage 1.08e+00%, Percentage 1.35e+00%, Percentage 1.47e+00%, Percentage 4.61e+00%, Percentage 6.20e-01%, Percentage 9.27e+01%, Percentage 9.28e+01%, Percentage 9.40e+01%, Percentage 9.42e+01%, Percentage 9.47e+01%, Percentage 9.57e+01%, Percentage 9.69e+01%, Percentage 9.75e+01%
DOI:10.11975/j.issn.1002-6819.2021.09.024
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.
<RECORD 35>
Accession number:20213210735499
Title:Navigation method between rows for orchard based on 3D LiDAR
Title of translation:果园行间3D LiDAR导航方法
Authors:Liu, Weihong (1, 2); He, Xiongkui (1, 2, 3, 4); Liu, Yajia (1, 2, 3, 4); Wu, Zhiming (5); Yuan, Changjian (1); Liu, Limin (1); Qi, Peng (1); Li, Tian (1)
Author affiliation:(1) Centre for Chemicals Application Technology, China Agricultural University, Beijing; 100193, China; (2) College of Engineering, China Agricultural University, Beijing; 100083, China; (3) College of Science, China Agricultural University, Beijing; 100193, China; (4) College of Agricultural Unmanned System, China Agricultural University, Beijing; 100193, China; (5) College of Agricultural Engineering, Shanxi Agricultural University, Taigu; 030801, China
Corresponding author:He, Xiongkui(xiongkui@cau.edu.cn)
Source title:Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering
Abbreviated source title:Nongye Gongcheng Xuebao
Volume:37
Issue:9
Issue date:May 2021
Publication year:2021
Pages:165-174
Language:Chinese
ISSN:10026819
CODEN:NGOXEO
Document type:Journal article (JA)
Publisher:Chinese Society of Agricultural Engineering
Abstract:<div data-language="eng" data-ev-field="abstract">The fruit industry in China is facing a great shock, due mainly to the fact that its yield relies heavily on labor inputs while the rural population is aging under ever-increasing urbanization. Autonomous production can offer an effective solution to such issues and further promote precision management in orchards. A 3D light detection and ranging (LiDAR) sensor can contribute much more to autonomous navigation in orchard information acquisition than a traditional 2D laser scanner. Specifically, LiDAR is a commonly used remote sensing technique in which a laser is used to measure the distance to an illuminated target. In this study, an inter-row robot navigation method for orchards using 3D LiDAR was therefore proposed, which treated complex three-dimensional scenes effectively, particularly those with dense canopies and trunks occluded by branches. A 3D LiDAR detection device first collected the environment information, and a pass-through filter was then used to extract the region of interest and remove noise that would interfere with positioning. Euclidean clustering was used to recognize the fruit trees around the robot, assuming that the tree branch points followed a normal distribution in the vertical direction, and the body center of each cluster was taken as the position of the tree. Random sample consensus (RANSAC) and least squares fitting were selected to fit the tree rows, exploiting the parallelism between rows, and a complementary fusion of the two fittings was also put forward. The centerline between the tree rows was calculated and then treated as the target navigation line. In addition, a pure pursuit algorithm was refined for the differential chassis, considering the look-ahead distance and heading deviation. Validation experiments were carried out in a simulated hedgerow orchard and a real pear orchard. In the first scenario, the tree rows were successfully fitted with a strong ability to resist interference from the environment. The heading deviation was within 1.65° and the lateral deviation within 6.1 cm when the robot walked along the centerline at a speed of 0.33 m/s; the tracking system automatically followed the centerline at a speed of 0.43 m/s with an absolute lateral deviation within 15 cm. In the second scenario, the tracking system followed the centerline at speeds of 0.68 and 1.35 m/s, where the absolute lateral deviations did not exceed 21.3 cm and 22.1 cm, respectively. The tracking system can be expected to serve as automatic navigation with good robustness in standard orchards, including hedgerow and complex three-dimensional orchards.<br/></div> © 2021, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
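As a minimal illustration of the pure pursuit tracking step mentioned above (not the refined controller in the paper), the sketch below converts a look-ahead point on the fitted centerline, expressed in the vehicle frame, into an angular velocity command for a differential chassis. The look-ahead point and forward speed in the example are illustrative values.

```python
# Minimal pure pursuit sketch for a differential-drive chassis.
# The target point (x, y) is the look-ahead point on the row centerline,
# expressed in the vehicle frame (x forward, y to the left).
import math

def pure_pursuit_cmd(target_x, target_y, forward_speed=0.43):
    """Return (v, omega): linear and angular velocity commands."""
    lookahead = math.hypot(target_x, target_y)
    if lookahead < 1e-6:
        return 0.0, 0.0
    # Curvature of the arc through the vehicle origin and the look-ahead point.
    curvature = 2.0 * target_y / (lookahead ** 2)
    omega = forward_speed * curvature
    return forward_speed, omega

# Example: centerline point 1.5 m ahead, 0.06 m to the left of the vehicle.
v, omega = pure_pursuit_cmd(1.5, 0.06)
print(f"v = {v:.2f} m/s, omega = {omega:.3f} rad/s")
```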
Number of references:32
Main heading:Optical radar
Controlled terms:Fruits - Image segmentation - Navigation - Navigation systems - Normal distribution - Orchards - Remote sensing - Robots - Tracking (position) - Trees (mathematics) - Wooden fences
Uncontrolled terms:Automatic navigation - Autonomous navigation - Environment information - Information acquisitions - Light detection and ranging - Pure-pursuit algorithms - Remote sensing techniques - Three-dimensional scenes
Classification code:415.3 Wood Structural Materials - 716.2 Radar Systems and Equipment - 731.5 Robotics - 821.3 Agricultural Methods - 821.4 Agricultural Products - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 922.1 Probability Theory
Numerical data indexing:Size 1.50e-01m, Size 2.13e-01m, Size 2.21e-01m, Size 6.10e-02m, Velocity 1.35e+00m/s, Velocity 3.30e-01m/s, Velocity 4.30e-01m/s, Velocity 6.80e-01m/s
DOI:10.11975/j.issn.1002-6819.2021.09.019
Database:Compendex
Compilation and indexing terms, Copyright 2022 Elsevier Inc.