Abstract:
Multi-source image fusion can reduce the misjudgments caused by relying on a single image source; exploiting the complementarity and redundancy among multi-source data improves data reliability. This research was carried out on a near-ground unmanned aerial vehicle (UAV) simulation platform based on a wavelet decomposition-reconstruction method. Two source images of rapeseed were acquired: a multispectral image and a depth image. A camera produced by PMD, an active imaging sensor that measures the time of flight of infrared light from emitter to receiver, was used to acquire the depth image. Three kinds of image data were obtained from the PMD camera: a distance image, an intensity image, and an amplitude image. The three images are different representations of the same sensor data, with the intensity image being the most sensitive to edge information in the field of view. A Tetracam ADC multispectral camera was used for multispectral image acquisition, and the acquired images were calibrated with a white reference board. To register the images, the depth imaging principle was analyzed and the intensity image was used to calibrate the camera's intrinsic parameters, following the pinhole model and the chessboard calibration method. Since Harris corner points cannot be detected in the distance image, the intensity image was used for calibration instead; the distance image and the intensity image share the same intrinsic parameters, which were finally used to correct the depth image and generate lens-distortion-free images. The SIFT method was used to find corresponding points between the two images: feature point descriptors were computed first, and the Mahalanobis distance was then used to select proper matches, with tests showing that a match ratio of 0.6 was appropriate for this purpose. After feature points were detected and matched, the multispectral image and the depth image were resized to the same scale, then registered and cropped to the same size based on the corresponding points. To determine which wavelet basis fits this task best, the Haar, Db2, Db4, Sym2, Sym4, Bior2.2, Bior2.4, Coif2, and Coif4 bases were used to decompose both source images; each decomposition layer was fused to form a new fused pyramid structure for wavelet reconstruction. Five metrics were used to evaluate the fusion results: cross entropy, average gradient, root mean square error, peak signal-to-noise ratio, and mutual information. The results showed that the Haar and Sym4 bases gave the best fusion results, as they preserved the most spectral and texture information. To further analyze how the decomposition level affects the fusion result, the images were decomposed to levels 3, 4, 5, and 6; levels 3 and 4 produced better fusion results than levels 5 and 6. The distance data in the depth image were then normalized to generate a 3D point cloud, and a statistical outlier removal method was used to remove outliers from it. The final point cloud provides a spatial description of the multispectral data. In this research, we explored the fusion of multispectral and depth images and studied the fusion parameters based on wavelet decomposition and reconstruction, which will facilitate future applications.
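
As an illustration of the intrinsic calibration and distortion-correction step described above, the following is a minimal sketch using OpenCV's chessboard calibration under the pinhole model. The board geometry, square size, and file names are assumptions for illustration only, not values taken from the paper.

```python
# Intrinsic calibration from chessboard corners detected in the PMD intensity
# images, then undistortion of a distance image with the same parameters.
# Board size, square size, and file names are assumed for illustration.
import glob
import cv2
import numpy as np

board_size = (7, 5)    # inner corners per row/column (assumed)
square_size = 0.03     # metres (assumed)

# 3D object points for one view of the board
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("intensity_*.png"):          # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Pinhole-model intrinsics K and distortion coefficients from the intensity images
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# The distance image shares the same intrinsics, so the same parameters
# can be used to generate a lens-distortion-free distance image.
distance = cv2.imread("distance_0.png", cv2.IMREAD_UNCHANGED)
distance_undistorted = cv2.undistort(distance, K, dist)
```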
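A minimal sketch of the feature-based registration step follows, assuming OpenCV's SIFT implementation and hypothetical file names. The paper matches descriptors with the Mahalanobis distance; this sketch uses OpenCV's default Euclidean (L2) matcher with the same 0.6 ratio test, and a homography warp is used here as one common way to realize the resize-register-crop step.

```python
# SIFT feature matching with a 0.6 nearest-neighbour ratio test, then warping
# the multispectral image into the geometry of the depth/intensity image.
import cv2
import numpy as np

ms = cv2.imread("multispectral.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
inten = cv2.imread("intensity.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ms, None)
kp2, des2 = sift.detectAndCompute(inten, None)

# Keep a match only if it is clearly better than the second-best candidate
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.6 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Estimate a homography from the corresponding points and register the images
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
ms_registered = cv2.warpPerspective(ms, H, (inten.shape[1], inten.shape[0]))
```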
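The wavelet decomposition-reconstruction fusion itself can be sketched with PyWavelets as below. The fusion rule used here (averaged approximation coefficients, maximum-absolute detail coefficients) is a common choice and an assumption; the abstract only fixes the bases and decomposition levels that were compared, and the input file names are hypothetical.

```python
# Wavelet decomposition, per-layer fusion, and reconstruction of a registered
# image pair with PyWavelets, looping over the bases compared in the paper.
import cv2
import numpy as np
import pywt

ms_registered = cv2.imread("ms_registered.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
depth_registered = cv2.imread("depth_registered.png", cv2.IMREAD_GRAYSCALE)

def wavelet_fuse(a, b, wavelet="haar", level=3):
    ca = pywt.wavedec2(a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation layer: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # detail layers: keep the coefficient with the larger magnitude
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Bases tested in the paper, at decomposition level 3
for basis in ["haar", "db2", "db4", "sym2", "sym4",
              "bior2.2", "bior2.4", "coif2", "coif4"]:
    fused = wavelet_fuse(ms_registered, depth_registered, wavelet=basis, level=3)
```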
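For reference, two of the five evaluation metrics listed above, average gradient and peak signal-to-noise ratio, are written out with NumPy below; the remaining metrics (cross entropy, root mean square error, mutual information) can be implemented in the same style.

```python
# Average gradient (texture/sharpness of the fused image) and PSNR against a
# reference image; formulas follow their standard definitions.
import numpy as np

def average_gradient(img):
    img = img.astype(float)
    dx = np.diff(img, axis=1)[:-1, :]     # horizontal differences
    dy = np.diff(img, axis=0)[:, :-1]     # vertical differences
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def psnr(fused, reference, peak=255.0):
    mse = np.mean((fused.astype(float) - reference.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```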
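Finally, a sketch of one way to realize the last step: turning the corrected distance image into a 3D point cloud and cleaning it with statistical outlier removal, here using Open3D. The intrinsics K and the undistorted distance image are assumed to come from the calibration sketch above (loaded from hypothetical files), and the neighbour count and threshold are illustrative values, not the paper's.

```python
# Back-project the undistorted distance image into a 3D point cloud using the
# calibrated intrinsics, then remove statistical outliers with Open3D.
import numpy as np
import open3d as o3d

K = np.load("intrinsics_K.npy")                        # hypothetical files from the
distance_undistorted = np.load("distance_undist.npy")  # calibration sketch above

fx, fy = K[0, 0], K[1, 1]
cx, cy = K[0, 2], K[1, 2]

h, w = distance_undistorted.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
z = distance_undistorted.astype(float)
x = (u - cx) * z / fx
y = (v - cy) * z / fy
xyz = np.dstack((x, y, z)).reshape(-1, 3)
xyz = xyz[xyz[:, 2] > 0]                   # drop pixels with no range return

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)

# Remove points whose mean distance to their 20 nearest neighbours deviates by
# more than 2 standard deviations from the global mean.
pcd_clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
```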