Zhang Yanchao, Xiao Yuzhao, Zhuang Zaichun, Xu Kaiwen, He Yong. Data fusion of multispectral and depth image for rape plant based on wavelet decomposition[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(16): 143-150. DOI: 10.11975/j.issn.1002-6819.2016.16.020

    Data fusion of multispectral and depth image for rape plant based on wavelet decomposition

    • Abstract: Fusing multi-source data can reduce the misinterpretation caused by relying on a single image: redundant information shared between the sources is used for registration, while complementary information is combined in fusion, increasing both the information content and the reliability of the data. In this study, multispectral images and depth images of rape plants were acquired on a near-ground remote-sensing simulation platform, and the two image types were registered and fused. According to the imaging characteristics of the multispectral camera and the time-of-flight depth camera, the intrinsic and extrinsic camera parameters were computed and the images were rectified. SIFT (scale-invariant feature transform) keypoints were extracted from both source images and matched by their descriptors; an affine transformation matrix computed from the matched keypoint positions was then used to scale, translate, and rotate the images into registration. Nine wavelet bases (Haar, Db2, Db4, Sym2, Sym4, Bior2.2, Bior2.4, Coif2, and Coif4) were compared by evaluating the fused results with five metrics, including cross entropy, peak signal-to-noise ratio, and mutual information; the Haar and Sym4 bases gave the best and most balanced fusion performance. Using the Haar basis, the registered images were decomposed and fused at levels 3, 4, 5, and 6, and inspection showed that 3-level and 4-level decomposition gave the better fusion results for multispectral-depth fusion. Finally, the elevation data of the depth image were normalized to build a 3-D reconstruction of the plant, yielding a 3-D point cloud for visualization.
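The abstract names cross entropy, peak signal-to-noise ratio, and mutual information among the five fusion-quality metrics. The paper does not publish code; as a hypothetical NumPy-only sketch (function names invented here), two of those metrics could be computed as follows:

```python
import numpy as np

def psnr(fused, ref, peak=255.0):
    """Peak signal-to-noise ratio between a fused image and a reference."""
    mse = np.mean((fused.astype(float) - ref.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
```

A higher PSNR against each source and a higher mutual information with each source indicate that the fused image retains more of the original spectral and depth information, which is how the nine wavelet bases were ranked.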

       

      Abstract: Multi-source image fusion can reduce the misinterpretation caused by using a single source image: redundancies between the sources support registration, and complementary information is combined in fusion, improving the reliability of the data. This research applied a wavelet decomposition-reconstruction method to images acquired on a near-ground unmanned aerial vehicle (UAV) simulation platform. Two source images of rapeseed were acquired: a multispectral image and a depth image. The depth image was captured with a camera produced by PMD, an active imaging sensor that measures the time of flight of infrared light from emitter to sensor. The PMD camera produces three kinds of image data, namely a distance image, an intensity image, and an amplitude image; these are different representations of the same sensor data, with the intensity image most sensitive to edge information in the field of view. A Tetracam ADC multispectral camera was used for multispectral image acquisition, and the acquired images were calibrated using a white reference board. To register the images, the depth-imaging principle was analyzed and the intensity image was used for intrinsic camera calibration, based on the pinhole model and the chessboard calibration method. Because Harris corner points cannot be detected reliably in the distance image, the intensity image was used for calibration instead; the distance image and the intensity image share the same intrinsic parameters, which were finally used to correct the depth image and generate lens-distortion-free images. The SIFT method was used to find corresponding points between the two images: feature-point descriptors were extracted first, and the Mahalanobis distance was then used to select proper matches, with a match ratio of 0.6 found appropriate in tests for this purpose.
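Once SIFT correspondences between the two images are selected, the affine transform (scale, translation, rotation) mentioned in the abstract can be solved by least squares. The paper does not publish code; this is a hypothetical pure-NumPy sketch with an invented function name:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 3."""
    n = src.shape[0]
    # Linear system A @ p = b for the 6 affine parameters p = [a, b, tx, c, d, ty]:
    # row 2i:   [x_i, y_i, 1, 0,   0,   0] -> dst_x_i
    # row 2i+1: [0,   0,   0, x_i, y_i, 1] -> dst_y_i
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p.reshape(2, 3)  # [[a, b, tx], [c, d, ty]]
```

In practice an outlier-robust estimator (e.g. RANSAC over the matched pairs) would be preferred, since SIFT matching at ratio 0.6 can still retain a few false correspondences.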
After the feature points were found and matched, the multispectral image and depth image were resized to the same zoom level, then registered and cropped to the same size based on the corresponding points. To determine which wavelet basis best fits this purpose, Haar, Db2, Db4, Sym2, Sym4, Bior2.2, Bior2.4, Coif2, and Coif4 were applied to both source images for wavelet decomposition. The corresponding decomposition layers of the two images were fused to form a new pyramid structure for wavelet reconstruction. Five parameters were used to evaluate the fusion results: cross entropy, average gradient, root mean square error, peak signal-to-noise ratio, and mutual information. The results showed that the Haar and Sym4 wavelet bases gave the best fusion, since they preserved the most spectral and texture information. To further analyze how the decomposition level affects the fusion result, the images were decomposed to levels 3, 4, 5, and 6; the level-3 and level-4 fusion results were better than those at levels 5 and 6. The distance data in the depth image were then normalized to generate 3-D point clouds, and a statistical outlier removal method was used to remove outliers from the point cloud. The final point cloud provides a spatial description of the multispectral data. In this research we explored the fusion of multispectral and depth images and studied the fusion parameters based on wavelet decomposition and reconstruction, which will facilitate future applications.
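The decompose-fuse-reconstruct pipeline above can be illustrated with a minimal sketch. The paper uses multi-level decomposition with nine candidate bases; this hypothetical NumPy-only example does a single level of the Haar transform (the basis the paper found best), averages the approximation band, and keeps the larger-magnitude coefficient in each detail band — one common fusion rule, not necessarily the paper's exact one:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform (image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.zeros((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximation band,
    keep the larger-magnitude coefficient in each detail band."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)
```

Extending this to the 3- or 4-level decompositions the paper recommends means recursing on the LL band and fusing the detail bands of each level before reconstructing; a library such as PyWavelets provides the multi-level transforms and the other eight bases directly.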

       
