In this paper, we aim to solve general multi-modal image restoration (MIR) and multi-modal image fusion (MIF) problems by proposing a deep convolutional neural network named the Common and Unique information splitting network (CU-Net). To the best of our knowledge, this is the first universal framework proposed to solve both the MIR and MIF problems. Unlike other empirically designed networks, the proposed CU-Net is derived from a new multi-modal convolutional sparse coding (MCSC) model, so each part of the network has good interpretability.
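To give a feel for the common/unique splitting idea behind the MCSC model, here is a minimal, illustrative sketch (not the paper's actual algorithm or dictionaries): two modalities are jointly decomposed into a shared common code and two modality-unique codes with a few ISTA iterations of plain (non-convolutional) sparse coding. All names (`Dc`, `Du_x`, `Du_y`, `split_codes`) are ours, for illustration only.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the L1 norm (shrinkage) -- the core
    nonlinearity of ISTA/LISTA-style unfolded sparse-coding networks."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def split_codes(x, y, Dc, Du_x, Du_y, lam=0.01, step=0.05, n_iter=200):
    """Toy joint sparse coding with a common/unique split.

    Simplified model (per-patch, non-convolutional for brevity):
        x ~= Dc @ zc + Du_x @ zx   # target modality
        y ~= Dc @ zc + Du_y @ zy   # guided modality
    zc is the common code shared by both modalities; zx, zy are unique.
    Returns (zc, zx, zy) after n_iter ISTA steps.
    """
    zc = np.zeros(Dc.shape[1])
    zx = np.zeros(Du_x.shape[1])
    zy = np.zeros(Du_y.shape[1])
    for _ in range(n_iter):
        rx = x - Dc @ zc - Du_x @ zx  # residual of target modality
        ry = y - Dc @ zc - Du_y @ zy  # residual of guided modality
        # Gradient step on each code, then shrinkage.
        # The common code sees gradients from BOTH residuals.
        zc = soft_threshold(zc + step * (Dc.T @ rx + Dc.T @ ry), lam * step)
        zx = soft_threshold(zx + step * (Du_x.T @ rx), lam * step)
        zy = soft_threshold(zy + step * (Du_y.T @ ry), lam * step)
    return zc, zx, zy
```

In CU-Net, iterations of this kind are unfolded into network layers with learned convolutional dictionaries, which is what gives each part of the network its interpretation.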
Our method can be applied to a variety of multi-modal image restoration and fusion tasks, as the figure below shows.
The CU-Net architecture is as follows.
Below are numerical results comparing CU-Net with other state-of-the-art (SOTA) methods on three MIR tasks.
Guided modality (RGB image); target modality (depth image); 4× upscaling.
Guided modality (RGB image); target modality (multi-spectral image); 4× upscaling.
Guided modality (flash image); target modality (non-flash image); noise level σ = 75.
Under-exposed image; our fused image; over-exposed image.
Far-focused image; our fused image; near-focused image.