Article: 2020, Vol. 38, Issue 4: 904-912
Cite this article:
CHEN Zhuo, FANG Ming, CHAI Xu, FU Feiran, YUAN Lihong. U-GAN Model for Infrared and Visible Images Fusion[J]. Journal of Northwestern Polytechnical University, 2020, 38(4): 904-912

U-GAN Model for Infrared and Visible Images Fusion
CHEN Zhuo1, FANG Ming1,2, CHAI Xu1, FU Feiran1, YUAN Lihong1
1. School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China;
2. School of Artificial Intelligence, Changchun University of Science and Technology, Changchun 130022, China
Abstract:
Infrared and visible image fusion is an effective way to compensate for the limitations of single-sensor imaging; the goal is to obtain fused images that are suitable for human observation and useful for subsequent applications and processing. To address the incomplete feature extraction and loss of detail texture in most existing methods, as well as the small number of samples in public datasets, which hinders training, an end-to-end network architecture for image fusion is proposed. The distinctive convolutional structure of U-net is applied to image fusion so that the important feature information of the source images is extracted and retained as fully as possible; samples do not need to be cropped, which avoids a loss of fusion accuracy and also speeds up training. The final fusion result is then obtained with a generative adversarial network: the features extracted by U-net are fed into the generator, which is trained against a discriminator driven by the infrared images, yielding the trained fusion model. Experimental results show that the proposed algorithm produces fused images with clear contours, prominent texture, and salient targets, and that metrics such as SD, SF, SSIM, and AG are markedly improved.
Key words: image fusion; U-net feature extraction; generative adversarial network; infrared image
Received: 2019-10-08     Revised:
DOI: 10.1051/jnwpu/20203840904
Foundation items: the Science and Technology Development Plan of Jilin Province (20170307002GX) and the Academy-Locality Cooperation Project of the Chinese Academy of Engineering (2019-JL-4-2)
Corresponding author: FANG Ming (1977-), associate professor at Changchun University of Science and Technology; research interests include robust image processing and machine vision. E-mail: fangming@cust.edu.cn
Biography: CHEN Zhuo (1994-), master's student at Changchun University of Science and Technology; research interest: image processing.
