基于深度学习的图像融合方法综述

Adu J, Gan J H, Wang Y and Huang J. 2013. Image fusion based on nonsubsampled contourlet transform for infrared and visible light image. Infrared Physics and Technology, 61: 94-100 [DOI: 10.1016/j.infrared.2013.07.010]

Alparone L, Aiazzi B, Baronti S, Garzelli A, Nencini F and Selva M. 2008. Multispectral and panchromatic data fusion assessment without reference. Photogrammetric Engineering and Remote Sensing, 74(2): 193-200 [DOI: 10.14358/PERS.74.2.193]

Alparone L, Baronti S, Garzelli A and Nencini F. 2004. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geoscience and Remote Sensing Letters, 1(4): 313-317 [DOI: 10.1109/LGRS.2004.836784]

Amin-Naji M, Aghagolzadeh A and Ezoji M. 2019. Ensemble of CNN for multi-focus image fusion. Information Fusion, 51: 201-214 [DOI: 10.1016/j.inffus.2019.02.003]

Aslantas V and Bendes E. 2015. A new image quality metric for image fusion: the sum of the correlations of differences. AEU-International Journal of Electronics and Communications, 69(12): 1890-1896 [DOI: 10.1016/j.aeue.2015.09.004]

Benzenati T, Kessentini Y and Kallel A. 2022. Pansharpening approach via two-stream detail injection based on relativistic generative adversarial networks. Expert Systems with Applications, 188: #115996 [DOI: 10.1016/j.eswa.2021.115996]

Bhatnagar G, Wu Q M J and Liu Z. 2013. Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Transactions on Multimedia, 15(5): 1014-1024 [DOI: 10.1109/TMM.2013.2244870]

Cai J J and Huang B. 2021. Super-resolution-guided progressive pansharpening based on a deep convolutional neural network. IEEE Transactions on Geoscience and Remote Sensing, 59(6): 5206-5220 [DOI: 10.1109/TGRS.2020.3015878]

Cai J R, Gu S H and Zhang L. 2018. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Transactions on Image Processing, 27(4): 2049-2062 [DOI: 10.1109/TIP.2018.2794218]

Cao Y P, Guan D Y, Huang W L, Yang J X, Cao Y L and Qiao Y. 2019. Pedestrian detection with unsupervised multispectral feature learning using deep neural networks. Information Fusion, 46: 206-217 [DOI: 10.1016/j.inffus.2018.06.005]

Chen G Y, Wu X J and Xu T Y. 2022. Unsupervised infrared image and visible image fusion algorithm based on deep learning. Laser and Optoelectronics Progress, 59(4): #0410010

陈国洋, 吴小俊, 徐天阳. 2022. 基于深度学习的无监督红外图像与可见光图像融合算法. 激光与光电子学进展, 59(4): #0410010 [DOI: 10.3788/LOP202259.0410010]

Chen J, Li X J, Luo L B, Mei X G and Ma J Y. 2020. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition. Information Sciences, 508: 64-78 [DOI: 10.1016/j.ins.2019.08.066]

Chen L C, Zhu Y K, Papandreou G, Schroff F and Adam H. 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer: 833-851 [DOI: 10.1007/978-3-030-01234-2_49]

Chen Y and Blum R S. 2009. A new automated quality assessment algorithm for image fusion. Image and Vision Computing, 27(10): 1421-1432 [DOI: 10.1016/j.imavis.2007.12.002]

Cui G M, Feng H J, Xu Z H, Li Q and Chen Y T. 2015. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. Optics Communications, 341: 199-209 [DOI: 10.1016/j.optcom.2014.12.032]

Cvejic N, Bull D and Canagarajah N. 2007. Region-based multimodal image fusion using ICA bases. IEEE Sensors Journal, 7(5): 743-751 [DOI: 10.1109/JSEN.2007.894926]

Deng J, Dong W, Socher R, Li L J, Li K and Li F F. 2009. ImageNet: a large-scale hierarchical image database//Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami, USA: IEEE: 248-255 [DOI: 10.1109/CVPR.2009.5206848]

Deng X and Dragotti P L. 2021. Deep convolutional neural network for multi-modal image restoration and fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(10): 3333-3348 [DOI: 10.1109/TPAMI.2020.2984244]

Deng X, Zhang Y T, Xu M, Gu S H and Duan Y P. 2021. Deep coupled feedback network for joint exposure fusion and image super-resolution. IEEE Transactions on Image Processing, 30: 3098-3112 [DOI: 10.1109/TIP.2021.3058764]

Dong M L, Li W S, Liang X S and Zhang X Y. 2021. MDCNN: multispectral pansharpening based on a multiscale dilated convolutional neural network. Journal of Applied Remote Sensing, 15(3): #036516 [DOI: 10.1117/1.JRS.15.036516]

Eskicioglu A M and Fisher P S. 1995. Image quality measures and their performance. IEEE Transactions on Communications, 43(12): 2959-2965 [DOI: 10.1109/26.477498]

Fu J, Li W S, Du J and Huang Y P. 2021a. A multiscale residual pyramid attention network for medical image fusion. Biomedical Signal Processing and Control, 66: #102488 [DOI: 10.1016/j.bspc.2021.102488]

Fu Y, Wu X J and Durrani T. 2021b. Image fusion based on generative adversarial network consistent with perception. Information Fusion, 72: 110-125 [DOI: 10.1016/j.inffus.2021.02.019]

Fu Z Z, Wang X, Xu J, Zhou N and Zhao Y F. 2016. Infrared and visible images fusion based on RPCA and NSCT. Infrared Physics and Technology, 77: 114-123 [DOI: 10.1016/j.infrared.2016.05.012]

Garzelli A and Nencini F. 2009. Hypercomplex quality assessment of multi/hyperspectral images. IEEE Geoscience and Remote Sensing Letters, 6(4): 662-665 [DOI: 10.1109/LGRS.2009.2022650]

Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A and Bengio Y. 2014. Generative adversarial nets//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press: 2672-2680 [DOI: 10.5555/2969033.2969125]

Guo A J, Dian R W and Li S T. 2020. Unsupervised blur kernel learning for pansharpening//Proceedings of 2020 IEEE International Geoscience and Remote Sensing Symposium. Waikoloa, USA: IEEE: 633-636 [DOI: 10.1109/IGARSS39084.2020.9324543]

Guo X P, Nie R C, Cao J D, Zhou D M, Mei L Y and He K J. 2019. FuseGAN: learning to fuse multi-focus image via conditional generative adversarial network. IEEE Transactions on Multimedia, 21(8): 1982-1996 [DOI: 10.1109/TMM.2019.2895292]

Ha Q S, Watanabe K, Karasawa T, Ushiku Y and Harada T. 2017. MFNet: towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes//Proceedings of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vancouver, Canada: IEEE: 5108-5115 [DOI: 10.1109/IROS.2017.8206396]

Haghighat M B A, Aghagolzadeh A and Seyedarabi H. 2011. A non-reference image fusion metric based on mutual information of image features. Computers and Electrical Engineering, 37(5): 744-756 [DOI: 10.1016/j.compeleceng.2011.07.012]

Han D, Li L, Guo X J and Ma J Y. 2022. Multi-exposure image fusion via deep perceptual enhancement. Information Fusion, 79: 248-262 [DOI: 10.1016/j.inffus.2021.10.006]

Han Y, Cai Y Z, Cao Y and Xu X M. 2013. A new image fusion performance metric based on visual information fidelity. Information Fusion, 14(2): 127-135 [DOI: 10.1016/j.inffus.2011.08.002]

He K M, Zhang X Y, Ren S Q and Sun J. 2016. Deep residual learning for image recognition//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 770-778 [DOI: 10.1109/CVPR.2016.90]

Huang G, Liu Z, Van Der Maaten L and Weinberger K Q. 2017. Densely connected convolutional networks//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE: 2261-2269 [DOI: 10.1109/CVPR.2017.243]

Huang J, Le Z L, Ma Y, Fan F, Zhang H and Yang L. 2020a. MGMDcGAN: medical image fusion using multi-generator multi-discriminator conditional generative adversarial network. IEEE Access, 8: 55145-55157 [DOI: 10.1109/ACCESS.2020.2982016]

Huang J, Le Z L, Ma Y, Mei X G and Fan F. 2020b. A generative adversarial network with adaptive constraints for multi-focus image fusion. Neural Computing and Applications, 32(18): 15119-15129 [DOI: 10.1007/s00521-020-04863-1]

Huo X, Zou Y, Chen Y and Tan J Q. 2021. Dual-scale decomposition and saliency analysis based infrared and visible image fusion. Journal of Image and Graphics, 26(12): 2813-2825

霍星, 邹韵, 陈影, 檀结庆. 2021. 双尺度分解和显著性分析相结合的红外与可见光图像融合. 中国图象图形学报, 26(12): 2813-2825 [DOI: 10.11834/jig.200405]

Jagalingam P and Hegde A V. 2015. A review of quality metrics for fused image. Aquatic Procedia, 4: 133-142 [DOI: 10.1016/j.aqpro.2015.02.019]

Jia X Y, Zhu C Z, Li M, Tang W Q and Zhou W L. 2021. LLVIP: a visible-infrared paired dataset for low-light vision//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision Workshops. Montreal, Canada: IEEE: 3489-3497

Jian L H, Yang X M, Liu Z, Jeon G, Gao M L and Chisholm D. 2021. SEDRFuse: a symmetric encoder-decoder with residual block network for infrared and visible image fusion. IEEE Transactions on Instrumentation and Measurement, 70: 1-15 [DOI: 10.1109/TIM.2020.3022438]

Jiang X Y, Ma J Y, Xiao G B, Shao Z F and Guo X J. 2021. A review of multimodal image matching: methods and applications. Information Fusion, 73: 22-71 [DOI: 10.1016/j.inffus.2021.02.012]

Jiao J and Wu L D. 2019. Fusion of multispectral and panchromatic images via morphological filter and improved PCNN in NSST domain. Journal of Image and Graphics, 24(3): 435-446

焦姣, 吴玲达. 2019. 形态学滤波和改进PCNN的NSST域多光谱与全色图像融合. 中国图象图形学报, 24(3): 435-446 [DOI: 10.11834/jig.180399]

Jung H, Kim Y, Jang H, Ha N and Sohn K. 2020. Unsupervised deep image fusion with structure tensor representations. IEEE Transactions on Image Processing, 29: 3845-3858 [DOI: 10.1109/TIP.2020.2966075]

Lahoud F and Süsstrunk S. 2019. Zero-learning fast medical image fusion//Proceedings of the 22nd International Conference on Information Fusion. Ottawa, Canada: IEEE: 1-8 [DOI: 10.23919/FUSION43075.2019.9011178]

Lee J, Seo S and Kim M. 2021. SIPSA-Net: shift-invariant pan sharpening with moving object alignment for satellite imagery//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 10161-10169 [DOI: 10.1109/CVPR46437.2021.01003]

Li H and Wu X J. 2019. DenseFuse: a fusion approach to infrared and visible images. IEEE Transactions on Image Processing, 28(5): 2614-2623 [DOI: 10.1109/TIP.2018.2887342]

Li H and Zhang L. 2018. Multi-exposure fusion with CNN features//Proceedings of the 25th International Conference on Image Processing. Athens, Greece: IEEE: 1723-1727 [DOI: 10.1109/ICIP.2018.8451689]

Li H, Wu X J and Durrani T. 2020a. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models. IEEE Transactions on Instrumentation and Measurement, 69(12): 9645-9656 [DOI: 10.1109/TIM.2020.3005230]

Li H, Wu X J and Kittler J. 2020b. MDLatLRR: a novel decomposition method for infrared and visible image fusion. IEEE Transactions on Image Processing, 29: 4733-4746 [DOI: 10.1109/TIP.2020.2975984]

Li H, Wu X J and Kittler J. 2021a. RFN-Nest: an end-to-end residual fusion network for infrared and visible images. Information Fusion, 73: 72-86 [DOI: 10.1016/j.inffus.2021.02.023]

Li H F, Cen Y L, Liu Y, Chen X and Yu Z T. 2021b. Different input resolutions and arbitrary output resolution: a meta learning-based deep framework for infrared and visible image fusion. IEEE Transactions on Image Processing, 30: 4070-4083 [DOI: 10.1109/TIP.2021.3069339]

Li J, Huo H T, Li C, Wang R H and Feng Q. 2021c. AttentionFGAN: infrared and visible image fusion using attention-based generative adversarial networks. IEEE Transactions on Multimedia, 23: 1383-1396 [DOI: 10.1109/TMM.2020.2997127]

Li J, Huo H T, Li C, Wang R H, Sui C H and Liu Z. 2021d. Multigrained attention network for infrared and visible image fusion. IEEE Transactions on Instrumentation and Measurement, 70: #5002412 [DOI: 10.1109/TIM.2020.3029360]

Li J X, Guo X B, Lu G M, Zhang B, Xu Y, Wu F and Zhang D. 2020c. DRPL: deep regression pair learning for multi-focus image fusion. IEEE Transactions on Image Processing, 29: 4816-4831 [DOI: 10.1109/TIP.2020.2976190]

Li S T, Kang X D, Fang L Y, Hu J W and Yin H T. 2017. Pixel-level image fusion: a survey of the state of the art. Information Fusion, 33: 100-112 [DOI: 10.1016/j.inffus.2016.05.004]

Li Y and Wu X J. 2014. Image fusion based on sparse representation using Shannon entropy weighting. ACTA Automatica Sinica, 40(8): 1819-1835

李奕, 吴小俊. 2014. 香农熵加权稀疏表示图像融合方法研究. 自动化学报, 40(8): 1819-1835 [DOI: 10.3724/SP.J.1004.2014.01819]

Li Z X, Liu J Y, Liu R S, Fan X, Luo Z X and Gao W. 2021e. Multiple task-oriented encoders for unified image fusion//Proceedings of 2021 IEEE International Conference on Multimedia and Expo. Shenzhen, China: IEEE: 1-6 [DOI: 10.1109/ICME51207.2021.9428212]

Liang X C, Hu P Y, Zhang L G, Sun J G and Yin G S. 2019. MCFNet: multi-layer concatenation fusion network for medical images fusion. IEEE Sensors Journal, 19(16): 7107-7119 [DOI: 10.1109/JSEN.2019.2913281]

Lin T Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P and Zitnick C L. 2014. Microsoft COCO: common objects in context//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer: 740-755 [DOI: 10.1007/978-3-319-10602-1_48]

Liu J Y, Fan X, Huang Z B, Wu G Y, Liu R S, Zhong W and Luo Z X. 2022a. Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE: 5802-5811

Liu J Y, Fan X, Jiang J, Liu R S and Luo Z X. 2022b. Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion. IEEE Transactions on Circuits and Systems for Video Technology, 32(1): 105-119 [DOI: 10.1109/TCSVT.2021.3056725]

Liu J Y, Shang J J, Liu R S and Fan X. 2022c. Attention-guided global-local adversarial learning for detail-preserving multi-exposure image fusion. IEEE Transactions on Circuits and Systems for Video Technology, 32(8): 5026-5040 [DOI: 10.1109/TCSVT.2022.3144455]

Liu Q J, Zhou H Y, Xu Q Z, Liu X Y and Wang Y H. 2021a. PSGAN: a generative adversarial network for remote sensing image pan-sharpening. IEEE Transactions on Geoscience and Remote Sensing, 59(12): 10227-10242 [DOI: 10.1109/TGRS.2020.3042974]

Liu R S, Liu J Y, Jiang Z Y, Fan X and Luo Z X. 2021b. A bilevel integrated model with data-driven layer ensemble for multi-modality image fusion. IEEE Transactions on Image Processing, 30: 1261-1274 [DOI: 10.1109/TIP.2020.3043125]

Liu R S, Liu Z, Liu J Y and Fan X. 2021c. Searching a hierarchically aggregated fusion architecture for fast multi-modality image fusion//Proceedings of the 29th ACM International Conference on Multimedia. Chengdu, China: Association for Computing Machinery: 1600-1608 [DOI: 10.1145/3474085.3475299]

Liu X B, Mei W B and Du H Q. 2017a. Structure tensor and nonsubsampled shearlet transform based algorithm for CT and MRI image fusion. Neurocomputing, 235: 131-139 [DOI: 10.1016/j.neucom.2017.01.006]

Liu X Y, Liu Q J and Wang Y H. 2020. Remote sensing image fusion based on two-stream fusion network. Information Fusion, 55: 1-15 [DOI: 10.1016/j.inffus.2019.07.010]

Liu Y, Chen X, Cheng J and Peng H. 2017b. A medical image fusion method based on convolutional neural networks//Proceedings of the 20th International Conference on Information Fusion. Xi'an, China: IEEE: 1-7 [DOI: 10.23919/ICIF.2017.8009769]

Liu Y, Chen X, Peng H and Wang Z F. 2017c. Multi-focus image fusion with a deep convolutional neural network. Information Fusion, 36: 191-207 [DOI: 10.1016/j.inffus.2016.12.001]

Liu Y, Chen X, Wang Z F, Wang Z J, Ward R K and Wang X S. 2018. Deep learning for pixel-level image fusion: recent advances and future prospects. Information Fusion, 42: 158-173 [DOI: 10.1016/j.inffus.2017.10.007]

Liu Y, Chen X, Ward R K and Wang Z J. 2016. Image fusion with convolutional sparse representation. IEEE Signal Processing Letters, 23(12): 1882-1886 [DOI: 10.1109/LSP.2016.2618776]

Liu Y P, Jin J, Wang Q, Shen Y and Dong X Q. 2014. Region level based multi-focus image fusion using quaternion wavelet and normalized cut. Signal Processing, 97: 9-30 [DOI: 10.1016/j.sigpro.2013.10.010]

Long Y Z, Jia H T, Zhong Y D, Jiang Y D and Jia Y M. 2021. RXDNFuse: an aggregated residual dense network for infrared and visible image fusion. Information Fusion, 69: 128-141 [DOI: 10.1016/j.inffus.2020.11.009]

Lou J Q, Li J F and Dai W Z. 2017. Medical image fusion using non-subsampled shearlet transform. Journal of Image and Graphics, 22(11): 1574-1583

楼建强, 李俊峰, 戴文战. 2017. 非下采样剪切波变换的医学图像融合. 中国图象图形学报, 22(11): 1574-1583 [DOI: 10.11834/jig.170014]

Luo S Y, Zhou S B, Feng Y and Xie J G. 2020. Pansharpening via unsupervised convolutional neural networks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13: 4295-4310 [DOI: 10.1109/JSTARS.2020.3008047]

Luo X Q, Gao Y H, Wang A Q, Zhang Z C and Wu X J. 2021. IFSepR: a general framework for image fusion based on separate representation learning. IEEE Transactions on Multimedia [DOI: 10.1109/TMM.2021.3129354]

Ma B Y, Zhu Y, Yin X, Ban X J, Huang H Y and Mukeshimana M. 2021a. SESF-Fuse: an unsupervised deep model for multi-focus image fusion. Neural Computing and Applications, 33(11): 5793-5804 [DOI: 10.1007/s00521-020-05358-9]

Ma H Y, Liao Q M, Zhang J C, Liu S J and Xue J H. 2020a. An α-matte boundary defocus model-based cascaded network for multi-focus image fusion. IEEE Transactions on Image Processing, 29: 8668-8679 [DOI: 10.1109/TIP.2020.3018261]

Ma J L, Zhou Z Q, Wang B and Zong H. 2017. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Physics and Technology, 82: 8-17 [DOI: 10.1016/j.infrared.2017.02.005]

Ma J Y, Chen C, Li C and Huang J. 2016. Infrared and visible image fusion via gradient transfer and total variation minimization. Information Fusion, 31: 100-109 [DOI: 10.1016/j.inffus.2016.02.001]

Ma J Y, Le Z L, Tian X and Jiang J J. 2021b. SMFuse: multi-focus image fusion via self-supervised mask-optimization. IEEE Transactions on Computational Imaging, 7: 309-320 [DOI: 10.1109/TCI.2021.3063872]

Ma J Y, Liang P W, Yu W, Chen C, Guo X J, Wu J and Jiang J J. 2020b. Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion, 54: 85-98 [DOI: 10.1016/j.inffus.2019.07.005]

Ma J Y, Ma Y and Li C. 2019a. Infrared and visible image fusion methods and applications: a survey. Information Fusion, 45: 153-178 [DOI: 10.1016/j.inffus.2018.02.004]

Ma J Y, Tang L F, Fan F, Huang J, Mei X G and Ma Y. 2022. SwinFusion: cross-domain long-range learning for general image fusion via Swin Transformer. IEEE/CAA Journal of Automatica Sinica, 9(7): 1200-1217 [DOI: 10.1109/JAS.2022.105686]

Ma J Y, Tang L F, Xu M L, Zhang H and Xiao G B. 2021c. STDFusionNet: an infrared and visible image fusion network based on salient target detection. IEEE Transactions on Instrumentation and Measurement, 70: 1-13 [DOI: 10.1109/TIM.2021.3075747]

Ma J Y, Xu H, Jiang J J, Mei X G and Zhang X P. 2020c. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Transactions on Image Processing, 29: 4980-4995 [DOI: 10.1109/TIP.2020.2977573]

Ma J Y, Yu W, Chen C, Liang P W, Guo X J and Jiang J J. 2020d. Pan-GAN: an unsupervised pan-sharpening method for remote sensing image fusion. Information Fusion, 62: 110-120 [DOI: 10.1016/j.inffus.2020.04.006]

Ma J Y, Yu W, Liang P W, Li C and Jiang J J. 2019b. FusionGAN: a generative adversarial network for infrared and visible image fusion. Information Fusion, 48: 11-26 [DOI: 10.1016/j.inffus.2018.09.004]

Ma J Y, Zhang H, Shao Z F, Liang P W and Xu H. 2021d. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion. IEEE Transactions on Instrumentation and Measurement, 70: #5005014 [DOI: 10.1109/TIM.2020.3038013]

Ma K D, Duanmu Z F, Zhu H W, Fang Y M and Wang Z. 2020e. Deep guided learning for fast multi-exposure image fusion. IEEE Transactions on Image Processing, 29: 2808-2819 [DOI: 10.1109/TIP.2019.2952716]

Ma N, Zhou Z M, Zhang P and Luo L M. 2013. A new variational model for panchromatic and multispectral image fusion. Acta Automatica Sinica, 39(2): 179-187

马宁, 周则明, 张鹏, 罗立民. 2013. 一种新的全色与多光谱图像融合变分模型. 自动化学报, 39(2): 179-187 [DOI: 10.3724/SP.J.1004.2013.00179]

Masi G, Cozzolino D, Verdoliva L and Scarpa G. 2016. Pansharpening by convolutional neural networks. Remote Sensing, 8(7): #594 [DOI: 10.3390/rs8070594]

Mou J, Gao W and Song Z X. 2013. Image fusion based on non-negative matrix factorization and infrared feature extraction//Proceedings of the 6th International Congress on Image and Signal Processing. Hangzhou, China: IEEE: 1046-1050 [DOI: 10.1109/CISP.2013.6745210]

Nejati M, Samavi S and Shirani S. 2015. Multi-focus image fusion using dictionary-based sparse representation. Information Fusion, 25: 72-84 [DOI: 10.1016/j.inffus.2014.10.004]

Ni J H, Shao Z M, Zhang Z Z, Hou M Z, Zhou J L, Fang L Y and Zhang Y. 2021. LDP-Net: an unsupervised pansharpening network based on learnable degradation processes [EB/OL]. [2021-11-24]. https://arxiv.org/pdf/2111.12483.pdf

Pan Z Y, Yu M, Jiang G Y, Xu H Y, Peng Z J and Chen F. 2020. Multi-exposure high dynamic range imaging with informative content enhanced network. Neurocomputing, 386: 147-164 [DOI: 10.1016/j.neucom.2019.12.093]

Petrovic V and Xydeas C. 2005. Objective image fusion performance characterisation//Proceedings of the 10th IEEE International Conference on Computer Vision. Beijing, China: IEEE: 1866-1871 [DOI: 10.1109/ICCV.2005.175]

Prabhakar K R, Srikar V S and Babu R V. 2017. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 4724-4732 [DOI: 10.1109/ICCV.2017.505]

Qi Y, Zhou S B, Zhang Z H, Luo S Y, Lin X R, Wang L P and Qiang B H. 2021. Deep unsupervised learning based on color un-referenced loss functions for multi-exposure image fusion. Information Fusion, 66: 18-39 [DOI: 10.1016/j.inffus.2020.08.012]

Qu G H, Zhang D L and Yan P F. 2002. Information measure for performance of image fusion. Electronics Letters, 38(7): 313-315 [DOI: 10.1049/el:20020212]

Qu L H, Liu S L, Wang M N and Song Z J. 2022. TransMEF: a transformer-based multi-exposure image fusion framework using self-supervised multi-task learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2): 2126-2134 [DOI: 10.1609/aaai.v36i2.20109]

Ranchin T and Wald L. 2000. Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation. Photogrammetric Engineering and Remote Sensing, 66(1): 49-61 [DOI: 10.1117/12.748605]

Rao Y J. 1997. In-fibre Bragg grating sensors. Measurement Science and Technology, 8(4): #355 [DOI: 10.1088/0957-0233/8/4/002]

Redmon J, Divvala S, Girshick R and Farhadi A. 2016. You only look once: unified, real-time object detection//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE: 779-788 [DOI: 10.1109/CVPR.2016.91]

Ren S Q, He K M, Girshick R and Sun J. 2017. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6): 1137-1149 [DOI: 10.1109/TPAMI.2016.2577031]

Roberts J W, Van Aardt J A and Ahmed F B. 2008. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. Journal of Applied Remote Sensing, 2(1): #023522 [DOI: 10.1117/1.2945910]

Ronneberger O, Fischer P and Brox T. 2015. U-Net: convolutional networks for biomedical image segmentation//Proceedings of the 18th International Conference on Medical Image Computing and Computer-assisted Intervention. Munich, Germany: Springer: 234-241 [DOI: 10.1007/978-3-319-24574-4_28]

Seo S, Choi J S, Lee J, Kim H H, Seo D, Jeong J and Kim M. 2020. UPSNet: unsupervised pan-sharpening network with registration learning between panchromatic and multi-spectral images. IEEE Access, 8: 201199-201217 [DOI: 10.1109/ACCESS.2020.3035802]

Tang L F, Yuan J T and Ma J Y. 2022a. Image fusion in the loop of high-level vision tasks: a semantic-aware real-time infrared and visible image fusion network. Information Fusion, 82: 28-42 [DOI: 10.1016/j.inffus.2021.12.004]

Tang L F, Yuan J T, Zhang H, Jiang X Y and Ma J Y. 2022b. PIAFusion: a progressive infrared and visible image fusion network based on illumination aware. Information Fusion, 83-84: 79-92 [DOI: 10.1016/j.inffus.2022.03.007]

Tang W, Liu Y, Cheng J, Li C and Chen X. 2021. Green fluorescent protein and phase contrast image fusion via detail preserving cross network. IEEE Transactions on Computational Imaging, 7: 584-597 [DOI: 10.1109/TCI.2021.3083965]

Tang W, Liu Y, Zhang C, Cheng J, Peng H and Chen X. 2019. Green fluorescent protein and phase-contrast image fusion via generative adversarial networks. Computational and Mathematical Methods in Medicine, 2019: #5450373 [DOI: 10.1155/2019/5450373]

Wang J M, Shao Z F, Huang X, Lu T and Zhang R Q. 2022a. A dual-path fusion network for pan-sharpening. IEEE Transactions on Geoscience and Remote Sensing, 60: 1-14 [DOI: 10.1109/TGRS.2021.3090585]

Wang J M, Shao Z F, Huang X, Lu T, Zhang R Q and Ma J Y. 2021a. Pan-sharpening via high-pass modification convolutional neural network//Proceedings of 2021 IEEE International Conference on Image Processing. Anchorage, USA: IEEE: 1714-1718 [DOI: 10.1109/ICIP42928.2021.9506568]

Wang J Z, Xu H N, Wang H F and Yu Z B. 2021. Infrared and visible image fusion based on residual dense block and auto-encoder network. Transactions of Beijing Institute of Technology, 41(10): 1077-1083

王建中, 徐浩楠, 王洪枫, 于子博. 2021. 基于残差密集块和自编码网络的红外与可见光图像融合. 北京理工大学学报, 41(10): 1077-1083 [DOI: 10.15918/j.tbit1001-0645.2021.131]

Wang K P, Zheng M Y, Wei H Y, Qi G Q and Li Y Y. 2020. Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors, 20(8): #2169 [DOI: 10.3390/s20082169]

Wang Y C, Xu S, Liu J M, Zhao Z X, Zhang C X and Zhang J S. 2021b. MFIF-GAN: a new generative adversarial network for multi-focus image fusion. Signal Processing: Image Communication, 96: #116295 [DOI: 10.1016/j.image.2021.116295]

Wang Y J, Xie Y Y, Wu Y Y, Liang K and Qiao J L. 2022b. An unsupervised multi-scale generative adversarial network for remote sensing image pan-sharpening//Proceedings of the 28th International Conference on Multimedia Modeling. Phu Quoc Island, Vietnam: Springer: 356-368 [DOI: 10.1007/978-3-030-98355-0_30]

Wang Z and Bovik A C. 2002. A universal image quality index. IEEE Signal Processing Letters, 9(3): 81-84 [DOI: 10.1109/97.995823]

Wang Z, Bovik A C, Sheikh H R and Simoncelli E P. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600-612 [DOI: 10.1109/TIP.2003.819861]

Wang Z, Simoncelli E P and Bovik A C. 2003. Multiscale structural similarity for image quality assessment//Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers. Pacific Grove, USA: IEEE: 1398-1402 [DOI: 10.1109/ACSSC.2003.1292216]

Xiao B, Wu H F and Bi X L. 2021a. DTMNet: a discrete Tchebichef moments-based deep neural network for multi-focus image fusion//Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. Montreal, Canada: IEEE: 43-51 [DOI: 10.1109/ICCV48922.2021.00011]

Xiao B, Xu B C, Bi X L and Li W S. 2021b. Global-feature encoding U-Net (GEU-Net) for multi-focus image fusion. IEEE Transactions on Image Processing, 30: 163-175 [DOI: 10.1109/TIP.2020.3033158]

Xu H, Liang P W, Yu W, Jiang J J and Ma J Y. 2019. Learning a generative model for fusing infrared and visible images via conditional generative adversarial network with dual discriminators//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao, China: AAAI Press: 3954-3960 [DOI: 10.5555/3367471.3367591]

Xu H and Ma J Y. 2021. EMFusion: an unsupervised enhanced medical image fusion network. Information Fusion, 76: 177-186 [DOI: 10.1016/j.inffus.2021.06.001]

Xu H, Ma J Y, Le Z L, Jiang J J and Guo X J. 2020a. FusionDN: a unified densely connected network for image fusion. Proceedings of the AAAI Conference on Artificial Intelligence, 34(7): 12484-12491 [DOI: 10.1609/aaai.v34i07.6936]

Xu H, Ma J Y, Jiang J J, Guo X J and Ling H B. 2022a. U2Fusion: a unified unsupervised image fusion network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1): 502-518 [DOI: 10.1109/TPAMI.2020.3012548]

Xu H, Ma J Y, Shao Z F, Zhang H, Jiang J J and Guo X J. 2021a. SDPNet: a deep network for pan-sharpening with enhanced information representation. IEEE Transactions on Geoscience and Remote Sensing, 59(5): 4120-4134 [DOI: 10.1109/TGRS.2020.3022482]

Xu H, Ma J Y, Yuan J T, Le Z L and Liu W. 2022b. RFNet: unsupervised network for mutually reinforcing multi-modal image registration and fusion//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans, USA: IEEE: 19679-19688

Xu H, Ma J Y and Zhang X P. 2020b. MEF-GAN: multi-exposure image fusion via generative adversarial networks. IEEE Transactions on Image Processing, 29: 7203-7216 [DOI: 10.1109/TIP.2020.2999855]

Xu H, Wang X Y and Ma J Y. 2021b. DRF: disentangled representation for visible and infrared image fusion. IEEE Transactions on Instrumentation and Measurement, 70: 1-13 [DOI: 10.1109/TIM.2021.3056645]

Xu H, Zhang H and Ma J Y. 2021c. Classification saliency-based rule for visible and infrared image fusion. IEEE Transactions on Computational Imaging, 7: 824-836 [DOI: 10.1109/TCI.2021.3100986]

Xu S, Ji L Z, Wang Z, Li P F, Sun K, Zhang C X and Zhang J S. 2020c. Towards reducing severe defocus spread effects for multi-focus image fusion via an optimization based strategy. IEEE Transactions on Computational Imaging, 6: 1561-1570 [DOI: 10.1109/TCI.2020.3039564]

Xu S, Wei X L, Zhang C X, Liu J M and Zhang J S. 2020d. MFFW: a new dataset for multi-focus image fusion [EB/OL]. [2022-02-12]. https://arxiv.org/pdf/2002.04780.pdf

Xu S, Zhang J S, Zhao Z X, Sun K, Liu J M and Zhang C X. 2021d. Deep gradient projection networks for pan-sharpening//Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE: 1366-1375 [DOI: 10.1109/CVPR46437.2021.00142]

Xydeas C S and Petrović V. 2000. Objective image fusion performance measure. Electronics Letters, 36(4): 308-309 [DOI: 10.1049/el:20000267]

Yan X, Gilani S Z, Qin H L and Mian A. 2020. Structural similarity loss for learning to fuse multi-focus images. Sensors, 20(22): #6647 [DOI: 10.3390/s20226647]

Yang J F, Fu X Y, Hu Y W, Huang Y, Ding X H and Paisley J. 2017. PanNet: a deep network architecture for pan-sharpening//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE: 1753-1761 [DOI: 10.1109/ICCV.2017.193]

Yang P, Gao L F and Zi L L. 2021. Image fusion method of convolution sparsity and detail saliency map analysis. Journal of Image and Graphics, 26(10): 2433-2449

杨培, 高雷阜, 訾玲玲. 2021. 卷积稀疏与细节显著图解析的图像融合. 中国图象图形学报, 26(10): 2433-2449 [DOI: 10.11834/jig.200205]

Yang Y, Liu J X, Huang S Y, Wan W G, Wen W Y and Guan J W. 2021a. Infrared and visible image fusion via texture conditional generative adversarial network. IEEE Transactions on Circuits and Systems for Video Technology, 31(12): 4771-4783 [DOI: 10.1109/TCSVT.2021.3054584]

Yang Y, Nie Z P, Huang S Y, Lin P and Wu J H. 2019. Multilevel features convolutional neural network for multifocus image fusion. IEEE Transactions on Computational Imaging, 5(2): 262-273 [DOI: 10.1109/TCI.2018.2889959]

Yang Z G, Chen Y P, Le Z L and Ma Y. 2021b. GANFuse: a novel multi-exposure image fusion method based on generative adversarial networks. Neural Computing and Applications, 33(11): 6133-6145 [DOI: 10.1007/s00521-020-05387-4]

Yin J L, Chen B H and Peng Y T. 2022. Two exposure fusion using prior-aware generative adversarial network. IEEE Transactions on Multimedia, 24: 2841-2851 [DOI: 10.1109/TMM.2021.3089324]

Yin M, Pang J Y, Wei Y Y and Duan P H. 2016. Image fusion algorithm based on nonsubsampled dual-tree complex contourlet transform and compressive sensing pulse coupled neural network. Journal of Computer-Aided Design and Computer Graphics, 28(3): 411-419

殷明, 庞纪勇, 魏远远, 段普宏. 2016. 结合NSDTCT和压缩感知PCNN的图像融合算法. 计算机辅助设计与图形学学报, 28(3): 411-419 [DOI: 10.3969/j.issn.1003-9775.2016.03.005]

Yu L X, Cui Q, Che J, Xu Y L, Zhang F and Li F. 2022. Image fusion model based on structure reparameterization method and spatial attention mechanism. Application Research of Computers, 39(5): 1573-1578, 1600

俞利新, 崔祺, 车军, 许悦雷, 张凡, 李帆. 2022. 结合结构重参数化方法与空间注意力机制的图像融合模型. 计算机应用研究, 39(5): 1573-1578, 1600 [DOI: 10.19734/j.issn.1001-3695.2021.09.0423]

Yuhas R H, Goetz A F H and Boardman J W. 1992. Discrimination among semi-arid landscape endmembers using the spectral angle mapper (SAM) algorithm [EB/OL]. [2022-05-18]. https://ntrs.nasa.gov/api/citations/19940012238/downloads/19940012238.pdf

Zhang H, Le Z L, Shao Z F, Xu H and Ma J Y. 2021a. MFF-GAN: an unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion. Information Fusion, 66: 40-53 [DOI: 10.1016/j.inffus.2020.08.022]

Zhang H and Ma J Y. 2021a. GTP-PNet: a residual learning network based on gradient transformation prior for pansharpening. ISPRS Journal of Photogrammetry and Remote Sensing, 172: 223-239 [DOI: 10.1016/j.isprsjprs.2020.12.014]

Zhang H and Ma J Y. 2021b. SDNet: a versatile squeeze-and-decomposition network for real-time image fusion. International Journal of Computer Vision, 129(10): 2761-2785 [DOI: 10.1007/s11263-021-01501-8]

Zhang H, Ma J Y, Chen C and Tian X. 2020a. NDVI-Net: a fusion network for generating high-resolution normalized difference vegetation index in remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 168: 182-196 [DOI: 10.1016/j.isprsjprs.2020.08.010]

Zhang H, Xu H, Tian X, Jiang J J and Ma J Y. 2021b. Image fusion meets deep learning: a survey and perspective. Information Fusion, 76: 323-336 [DOI: 10.1016/j.inffus.2021.06.008]

Zhang H, Xu H, Xiao Y, Guo X J and Ma J Y. 2020b. Rethinking the image fusion: a fast unified image fusion network based on proportional maintenance of gradient and intensity. Proceedings of the AAAI Conference on Artificial Intelligence, 34(7): 12797-12804 [DOI: 10.1609/aaai.v34i07.6975]

Zhang H, Yuan J T, Tian X and Ma J Y. 2021c. GAN-FM: infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators. IEEE Transactions on Computational Imaging, 7: 1134-1147 [DOI: 10.1109/TCI.2021.3119954]

Zhang Q and Maldague X. 2016. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing. Infrared Physics and Technology, 74: 11-20 [DOI: 10.1016/j.infrared.2015.11.003]

Zhang X C. 2021. Benchmarking and comparing multi-exposure image fusion algorithms. Information Fusion, 74: 111-131 [DOI: 10.1016/j.inffus.2021.02.005]

Zhang X C. 2022. Deep learning-based multi-focus image fusion: a survey and a comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9): 4819-4838 [DOI: 10.1109/TPAMI.2021.3078906]

Zhang X C, Ye P and Xiao G. 2020c. VIFB: a visible and infrared image fusion benchmark//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Seattle, USA: IEEE: 468-478 [DOI: 10.1109/CVPRW50498.2020.00060]

Zhang Y, Liu Y, Sun P, Yan H, Zhao X L and Zhang L. 2020d. IFCNN: a general image fusion framework based on convolutional neural network. Information Fusion, 54: 99-118 [DOI: 10.1016/j.inffus.2019.07.011]

Zhang Y M and Ji Q. 2005. Active and dynamic information fusion for facial expression understanding from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5): 699-714 [DOI: 10.1109/TPAMI.2005.93]

Zhao C, Wang T F and Lei B Y. 2021a. Medical image fusion method based on dense block and deep convolutional generative adversarial network. Neural Computing and Applications, 33(12): 6595-6610 [DOI: 10.1007/s00521-020-05421-5]

Zhao F, Zhao W D, Lu H M, Liu Y, Yao L B and Liu Y. 2021b. Depth-distilled multi-focus image fusion. IEEE Transactions on Multimedia [DOI: 10.1109/TMM.2021.3134565]

Zhou H B, Wu W, Zhang Y D, Ma J Y and Ling H B. 2021. Semantic-supervised infrared and visible image fusion via a dual-discriminator generative adversarial network. IEEE Transactions on Multimedia [DOI: 10.1109/TMM.2021.3129609]

Zhou H Y, Liu Q J, Weng D W and Wang Y H. 2022. Unsupervised cycle-consistent generative adversarial networks for pan sharpening. IEEE Transactions on Geoscience and Remote Sensing, 60: #5408814 [DOI: 10.1109/TGRS.2022.3166528]

Zhou J, Civco D L and Silander J A. 1998. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. International Journal of Remote Sensing, 19(4): 743-757 [DOI: 10.1080/014311698215973]

Zhou Y N and Yang X M. 2021. Infrared and visible image fusion based on GAN. Modern Computer, 27(16): 94-97

周祎楠, 杨晓敏. 2021. 基于GAN的红外与可见光图像融合算法. 现代计算机, 27(16): 94-97 [DOI: 10.3969/j.issn.1007-1423.2021.16.020]

Zhou Y W, Yang P L, Chen Q and Sun Q S. 2015. Pan-sharpening model based on MTF and variational method. Acta Automatica Sinica, 41(2): 342-352

周雨薇, 杨平吕, 陈强, 孙权森. 2015. 基于MTF和变分的全色与多光谱图像融合模型. 自动化学报, 41(2): 342-352 [DOI: 10.16383/j.aas.2015.c140121]
