Abstract: In recent years, somatosensory technology has been applied in many fields, including entertainment, education, automation, and medicine, but it has rarely been used in agriculture. Traditional human-computer interaction systems for virtual plants run on a particular operating system or mobile platform and interact through the mouse and keyboard, requiring users to input cumbersome parameters and commands and resulting in a poor interactive experience. To address this situation, in this paper we designed and developed a virtual crop interaction system based on cloud computing and somatosensory interaction technology. The system first generated 3D models of the virtual crops, rice and tomato, in the cloud, where the models were also stored. The cloud side provided the computing capacity and responded to browser requests; the browser side was responsible for display, caching, and a small amount of computation; and a Leap Motion controller handled interaction on the browser side. To obtain the parameters for rice modeling, we conducted experiments at the China National Rice Research Institute in Hangzhou, Zhejiang, between 2015 and 2016. The selected rice growth stages ranged from jointing to heading. For each plant, we measured three leaf blades at different leaf positions, recording blade length, blade width, the width variation along the blade, and blade growth position. The 3D data of the virtual crops were generated by algorithms on the Amazon cloud platform. The topological structures of the tomato plants were described by a parametric L-system, separated into stems, rachises, blades, fruit branches, and flower branches. Rendering the 3D crop models in the browser with WebGL allowed users to interact with them directly. We defined a 3D virtual crop data exchange protocol that supported crop species selection, morphological changes, crop growth, pause, shrinkage, and shadow generation. When the Leap Motion received a gesture signal, the user could select a crop species; the server would then generate the corresponding 3D crop data, including geometry, color, and texture information, and return it to the browser in JSON format. We built one-hand and two-hand gesture libraries; each gesture enabled free interaction with the 3D models, and users could switch freely between one-hand and two-hand modes. The Leap Motion-based virtual crop interaction system was composed of the server and the browser and implemented translation, rotation, plant growth, morphological change, and 3D model update operations. Our hardware environment was: 1) a quad-core Intel CPU at 2.4 GHz with 8 GB of RAM on the server side; 2) Windows 7 with a quad-core CPU at 2.6 GHz, 4 GB of RAM, and a 2 GB NVIDIA GeForce GTX 860M plus Intel HD 4600 on the browser side. The system was implemented in WebGL and JavaScript and ran effectively on the major browsers, including Firefox, Chrome, and IE. Experimental results showed that users could control the shape and growth of the virtual crop models in real time using different gestures in the browser, giving them a good interactive experience. The system can provide a technical reference for similar research.
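To illustrate the parametric L-system modeling mentioned above, the following is a minimal JavaScript sketch of one rewriting step over plant modules. The axiom, the production rule, and the module names (`stem`, `rachis`, `blade`) are hypothetical examples, not the paper's actual grammar.

```javascript
// Minimal parametric L-system rewriter (illustrative sketch only).
// Each module is { sym, params }; a production maps a symbol to a
// function returning the successor modules for one rewriting step.
const productions = {
  // Hypothetical rule: an apex A(age) produces a stem internode,
  // a rachis with a blade, and a new apex one step older.
  A: ({ age }) => [
    { sym: 'stem',   params: { len: 0.5 + 0.1 * age } },
    { sym: 'rachis', params: { len: 0.3 } },
    { sym: 'blade',  params: { width: 0.05 * age } },
    { sym: 'A',      params: { age: age + 1 } },
  ],
};

function rewrite(modules) {
  // Replace each module that has a production; keep the rest as-is.
  return modules.flatMap((m) =>
    productions[m.sym] ? productions[m.sym](m.params) : [m]
  );
}

// Derive the topology by applying the productions three times.
let plant = [{ sym: 'A', params: { age: 1 } }];
for (let i = 0; i < 3; i++) plant = rewrite(plant);
console.log(plant.map((m) => m.sym).join(' '));
// -> "stem rachis blade stem rachis blade stem rachis blade A"
```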
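The abstract does not specify the wire format of the 3D virtual crop data exchange protocol, only that the server returns geometry, color, and texture information in JSON. The sketch below shows one plausible shape of a browser request and reply; the endpoint URL and all field names are assumptions for illustration.

```javascript
// Hypothetical request: ask the cloud side for a crop model after a
// species-selection gesture. Endpoint and fields are illustrative.
const request = {
  action: 'select_species', // also e.g. grow, pause, shrink, shadow
  species: 'rice',          // or 'tomato'
};

fetch('https://example.com/api/crop', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(request),
})
  .then((res) => res.json())
  .then((model) => {
    // Assumed reply shape:
    //   model.vertices — flat [x, y, z, ...] array for WebGL buffers
    //   model.colors   — per-vertex RGB values
    //   model.texture  — URL of the texture image
    console.log(model); // hand off to the WebGL renderer here
  });
```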
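A minimal sketch of browser-side gesture dispatch with the leapjs client library, switching between the one-hand and two-hand modes described above. Mapping one hand to translation and two hands to rotation is our assumed example; the paper's gesture libraries are not specified, and `translateModel`/`rotateModel` are hypothetical stand-ins for the system's model controls.

```javascript
// Assumes the leapjs client library is loaded in the page.
// Hypothetical stand-ins for the system's actual model operations.
const translateModel = (x, y, z) => console.log('translate', x, y, z);
const rotateModel = (angle) => console.log('rotate', angle);

Leap.loop((frame) => {
  if (frame.hands.length === 1) {
    // One-hand mode: translate the model with the palm position.
    const [x, y, z] = frame.hands[0].palmPosition;
    translateModel(x, y, z);
  } else if (frame.hands.length === 2) {
    // Two-hand mode: rotate the model by the angle between palms
    // in the horizontal plane (an assumed gesture mapping).
    const [ax, , az] = frame.hands[0].palmPosition;
    const [bx, , bz] = frame.hands[1].palmPosition;
    rotateModel(Math.atan2(bz - az, bx - ax));
  }
});
```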