Paper: 2021, Vol. 39, Issue (4): 930-936
Cite this article:
ZHU Yahui, GAO Li. Infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy set[J]. Journal of Northwestern Polytechnical University, 2021, 39(4): 930-936

Infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy set
ZHU Yahui1, GAO Li2
1. School of Mathematics and Statistics, Shaanxi Xueqian Normal University, Xi'an 710100, China;
2. School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China
Abstract:
To overcome the shortcomings of traditional infrared and visible image fusion algorithms based on multiscale transform, an infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy sets is proposed. Firstly, the non-subsampled contourlet transform (NSCT) is used to decompose each source image into low-frequency and high-frequency sub-bands, and the latent low-rank representation model is then used to further decompose the low-frequency sub-bands into basic and salient sub-bands. Fusion rules are chosen according to the characteristics of each sub-band: the low-frequency basic sub-bands are fused by weighted summation with the visual saliency map as the weighting coefficient; the low-frequency salient sub-bands are fused by selecting the coefficient with the maximum absolute value; the two results are summed to obtain the fused low-frequency coefficients; and the high-frequency sub-bands, whose texture and edge information is measured by intuitionistic fuzzy entropy, are fused by selecting the coefficient with the maximum entropy. Finally, the fused infrared and visible image is obtained with the inverse NSCT. Subjective and objective evaluations on several sets of fusion images show that the proposed method effectively preserves edge information and retains more information from the source images, outperforming other image fusion methods in both visual quality and objective metrics.
Key words:    image fusion    non-subsampled contourlet transform    latent low-rank representation    intuitionistic fuzzy set
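To make the pipeline concrete, the following is a minimal Python sketch of the three fusion rules described in the abstract, under the assumption that the NSCT and latent low-rank representation steps have already produced the sub-band arrays. The helper names (visual_saliency, fuse_basic, fuse_salient, fuse_high, intuitionistic_fuzzy_entropy), the Gaussian-blur saliency proxy, the Sugeno-type non-membership, the window size, and the entropy formula are illustrative assumptions, not the authors' exact definitions.

import numpy as np
from scipy import ndimage

def visual_saliency(band):
    # Saliency proxy (assumption): absolute deviation from a Gaussian-blurred copy.
    band = np.asarray(band, dtype=float)
    return np.abs(band - ndimage.gaussian_filter(band, sigma=3))

def fuse_basic(b_ir, b_vis):
    # Low-frequency basic sub-bands: weighted summation with saliency as weights.
    w_ir = visual_saliency(b_ir)
    w_vis = visual_saliency(b_vis)
    w = w_ir / (w_ir + w_vis + 1e-12)
    return w * b_ir + (1.0 - w) * b_vis

def fuse_salient(s_ir, s_vis):
    # Low-frequency salient sub-bands: keep the coefficient with the larger magnitude.
    return np.where(np.abs(s_ir) >= np.abs(s_vis), s_ir, s_vis)

def intuitionistic_fuzzy_entropy(band, win=7, lam=0.8):
    # Map |coefficients| to membership degrees, derive non-membership with a
    # Sugeno-type complement, and average a common IFS entropy over a local window.
    a = np.abs(np.asarray(band, dtype=float))
    mu = a / (a.max() + 1e-12)                # membership degree
    nu = (1.0 - mu) / (1.0 + lam * mu)        # non-membership (Sugeno complement)
    pi = 1.0 - mu - nu                        # hesitation degree
    e = (np.minimum(mu, nu) + pi) / (np.maximum(mu, nu) + pi + 1e-12)
    return ndimage.uniform_filter(e, size=win)

def fuse_high(h_ir, h_vis):
    # High-frequency sub-bands: select the coefficient with the larger local entropy.
    e_ir = intuitionistic_fuzzy_entropy(h_ir)
    e_vis = intuitionistic_fuzzy_entropy(h_vis)
    return np.where(e_ir >= e_vis, h_ir, h_vis)

The fused low-frequency coefficients would then be fuse_basic(...) + fuse_salient(...), and the final image would come from the inverse NSCT of the fused low- and high-frequency sub-bands.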
Received: 2020-12-08     Revised:
DOI: 10.1051/jnwpu/20213940930
Foundation item: Scientific Research Program of Shaanxi Provincial Department of Education (20JK0585)
Corresponding author: GAO Li (1969-), female, associate professor at Northwestern Polytechnical University; research interests: network security and data processing. E-mail: 284239257@qq.com
About the author: ZHU Yahui (1981-), female, lecturer at Shaanxi Xueqian Normal University; research interests: image processing and decision analysis.
