Paper: 2021, Vol. 39, Issue 4: 901-908
Cite this article:
YANG Zhenjian, SHANG Jiamei, ZHANG Zhongwei, ZHANG Yan, LIU Shudong. A new end-to-end image dehazing algorithm based on residual attention mechanism[J]. Journal of Northwestern Polytechnical University, 2021, 39(4): 901-908

A new end-to-end image dehazing algorithm based on residual attention mechanism
YANG Zhenjian, SHANG Jiamei, ZHANG Zhongwei, ZHANG Yan, LIU Shudong
School of Computer and Information Engineering, Tianjin Chengjian University, Tianjin 300384, China
Abstract:
Traditional image dehazing algorithms based on prior knowledge and on learning rely on the atmospheric scattering model and tend to produce color distortion and incomplete dehazing. To address these problems, an end-to-end image dehazing algorithm based on a residual attention mechanism is proposed in this paper. The network consists of four modules: an encoder, multi-scale feature extraction, feature fusion and a decoder. The encoder module encodes the input hazy image into a feature map, which facilitates subsequent feature extraction and reduces memory consumption. The multi-scale feature extraction module comprises a residual smoothed dilated convolution module, a residual block and an efficient channel attention mechanism; it enlarges the receptive field and weights and filters the extracted multi-scale features for fusion. The feature fusion module uses efficient channel attention to adjust channel weights dynamically, acquiring rich context information and suppressing redundant information, which strengthens the network's ability to extract the haze density image and makes dehazing more thorough. Finally, the decoder module maps the fused features nonlinearly to obtain the haze density image and then restores the haze-free image. Qualitative and quantitative tests on the SOTS test set and on natural hazy images yield good subjective and objective evaluation results, and the algorithm effectively alleviates color distortion and incomplete dehazing.
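For context, the atmospheric scattering model that both the prior-based and the learning-based methods above depend on is conventionally written as (standard notation, not taken from this page):

```latex
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```

where $I$ is the observed hazy image, $J$ the haze-free scene radiance, $A$ the global atmospheric light, $t$ the transmission map, $\beta$ the scattering coefficient and $d$ the scene depth. Methods built on this model must estimate $t$ and $A$, and errors in those estimates are one source of the color distortion mentioned in the abstract; an end-to-end network sidesteps the explicit estimation.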
Key words: image dehazing; deep learning; channel attention mechanism; residual smoothed dilated convolution; feature extraction
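The efficient channel attention (ECA) used in the feature fusion module follows a simple recipe: global average pooling per channel, a shared 1-D convolution across neighbouring channels (no dimensionality reduction), a sigmoid gate, then channel-wise rescaling. Below is a minimal pure-Python sketch of that recipe, assuming the adaptive kernel-size rule from ECA-Net with gamma=2 and b=1; the function names are illustrative, not from the paper:

```python
import math

def eca_kernel_size(channels, gamma=2, b=1):
    # Adaptive 1-D kernel size from ECA-Net: nearest odd number
    # to |log2(C)/gamma + b/gamma| (assumed hyperparameters gamma=2, b=1).
    t = int(abs(math.log2(channels) / gamma + b / gamma))
    return t if t % 2 else t + 1

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def eca(feature_map, conv_weights):
    # feature_map: list of C channels, each an HxW list of lists of floats.
    # conv_weights: shared 1-D kernel of odd length k.
    C = len(feature_map)
    # 1) Global average pooling: one scalar descriptor per channel.
    y = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_map]
    # 2) 1-D convolution across neighbouring channels (zero-padded),
    #    with no channel-dimensionality reduction.
    k = len(conv_weights)
    pad = k // 2
    padded = [0.0] * pad + y + [0.0] * pad
    conv = [sum(conv_weights[j] * padded[i + j] for j in range(k)) for i in range(C)]
    # 3) Sigmoid gate -> one attention weight per channel.
    w = [sigmoid(v) for v in conv]
    # 4) Reweight each channel of the input feature map.
    return [[[w[c] * v for v in row] for row in feature_map[c]] for c in range(C)]
```

The key design point, which the abstract's "dynamic channel weighting" refers to, is that the 1-D kernel lets each channel's weight depend only on its k nearest neighbours, avoiding the bottleneck of a fully connected squeeze-and-excitation layer.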
Received: 2020-12-17     Revised:
DOI: 10.1051/jnwpu/20213940901
Foundation items: supported by the Open Fund of the Key Laboratory (2019LODTS006) and the Scientific Research Program of Tianjin Municipal Education Commission (2017KJ059)
Corresponding author: ZHANG Zhongwei (1986-), lecturer at Tianjin Chengjian University, mainly engaged in image processing and pattern recognition research. E-mail: gucaszzw@163.com
About the author: YANG Zhenjian (1975-), professor at Tianjin Chengjian University, mainly engaged in image processing and computer applications research.
