Abstract

Objective  Image super-resolution (SR) aims to reconstruct high-resolution (HR) images from a single low-resolution (LR) image or a set of LR images. As a cost-effective technique, it can improve the spatial resolution of medical images through image processing algorithms alone. However, most medical image super-resolution methods are designed for a single modality. In current magnetic resonance imaging (MRI)-based clinical applications, multiple modalities are obtained under different parameter settings. In this case, a single-modality super-resolution method cannot exploit the correlation information among the modalities, which limits its super-resolution capability. In addition, most existing deep learning (DL)-based super-resolution models are constrained in practice by large numbers of trainable parameters, high computational cost, and heavy memory usage. To exploit the correlation information among modalities for reconstruction, our research focuses on a lightweight DL model, a residual dense attention network, for multi-modal MR image super-resolution.

Method  A residual dense attention network is developed for multi-modal MR image super-resolution. The network is composed of three parts: 1) shallow feature extraction, 2) feature refinement, and 3) image reconstruction. Two multi-modal MR images are stacked and fed into the network. First, a 3 × 3 convolutional layer in the shallow feature extraction part extracts the initial feature maps in the low-resolution space. Next, the feature refinement part is mainly composed of several residual dense attention blocks, each consisting of a residual dense block and an efficient channel attention module. In the residual dense block, dense connections and local residual learning are adopted to improve the representation capability of the network. The efficient channel attention module enables the network to adaptively emphasize the feature maps that are more important for reconstruction. The outputs of all residual dense attention blocks are stacked together and fed into two convolutional layers, which reduce the number of channels of the feature maps and fuse the features. A global residual learning strategy is then applied to further improve the information flow: the initial feature maps are added to the output of the last layer through a skip connection. Finally, in the image reconstruction part, the obtained low-resolution feature maps are up-scaled to the high-resolution space by a sub-pixel convolutional layer. Two symmetric branches, each consisting of two 3 × 3 convolutional layers, reconstruct the residual maps of the two modalities. The residual maps are added to the interpolated low-resolution images to obtain the final super-resolution results. The widely used L1 loss is employed to optimize the network parameters.
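To make the pipeline above concrete, the following is a minimal PyTorch sketch of the described architecture. It is reconstructed from this abstract alone and is not the authors' released code: the number of blocks, channel width, growth rate, ECA kernel size, and the layer count inside each residual dense block are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECA(nn.Module):
    """Efficient channel attention: global pooling + 1-D conv across channels."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        y = F.adaptive_avg_pool2d(x, 1)                        # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))         # 1-D conv over channel dim
        y = torch.sigmoid(y.transpose(-1, -2).unsqueeze(-1))   # (B, C, 1, 1) weights
        return x * y

class RDAB(nn.Module):
    """Residual dense attention block: dense connections + local residual + ECA."""
    def __init__(self, channels: int = 64, growth: int = 32, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(n_layers))
        self.fuse = nn.Conv2d(channels + n_layers * growth, channels, 1)  # local fusion
        self.eca = ECA()

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))       # dense connections
        return x + self.eca(self.fuse(torch.cat(feats, dim=1)))  # local residual learning

class RDAN(nn.Module):
    """Residual dense attention network for two stacked MR modalities."""
    def __init__(self, channels: int = 64, n_blocks: int = 6, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.shallow = nn.Conv2d(2, channels, 3, padding=1)    # shallow feature extraction
        self.blocks = nn.ModuleList(RDAB(channels) for _ in range(n_blocks))
        self.fuse = nn.Sequential(                             # channel reduction + fusion
            nn.Conv2d(n_blocks * channels, channels, 1),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.upsample = nn.Sequential(                         # sub-pixel convolution
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))
        def branch():                                          # two symmetric 3 x 3 branches
            return nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(channels, 1, 3, padding=1))
        self.branch_t1, self.branch_t2 = branch(), branch()

    def forward(self, lr):                                     # lr: (B, 2, h, w), T1/T2 stacked
        f0 = self.shallow(lr)
        x, feats = f0, []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        x = self.fuse(torch.cat(feats, dim=1)) + f0            # global residual learning
        x = self.upsample(x)                                   # to high-resolution space
        up = F.interpolate(lr, scale_factor=self.scale,
                           mode='bicubic', align_corners=False)
        sr_t1 = self.branch_t1(x) + up[:, 0:1]                 # residual + interpolated LR
        sr_t2 = self.branch_t2(x) + up[:, 1:2]
        return sr_t1, sr_t2

# L1 loss over both modalities, as described in the abstract
model = RDAN(scale=2)
lr = torch.rand(1, 2, 60, 60)
hr_t1, hr_t2 = torch.rand(1, 1, 120, 120), torch.rand(1, 1, 120, 120)
sr_t1, sr_t2 = model(lr)
loss = F.l1_loss(sr_t1, hr_t1) + F.l1_loss(sr_t2, hr_t2)
```

Note that the two output branches share all refinement features, which is how this sketch couples the modalities: each branch only predicts a residual map on top of its bicubic-interpolated input.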
Result  In the experiments, MR images of two modalities (i.e., T1-weighted and T2-weighted) from the Medical Image Computing and Computer Assisted Intervention (MICCAI) brain tumor segmentation (BraTS) 2019 challenge are adopted to verify the effectiveness of the proposed method. The original MRI scans are split into a training set, a validation set, and a testing set. Two sets of ablation experiments are designed to verify the effects of the multi-modal super-resolution scheme and the efficient channel attention module; the results show that both components improve super-resolution performance. Furthermore, eight representative image super-resolution methods are used for performance comparison. Experimental results demonstrate that our method outperforms these reference methods in terms of both objective evaluation and visual quality. Specifically, our method obtains more competitive results as follows: 1) when the up-scaling factor is 2, the peak signal-to-noise ratio (PSNR) of the T1-weighted and T2-weighted modalities improves by 0.109 8 dB and 0.415 5 dB, respectively; 2) when the up-scaling factor is 3, the PSNR of the T2-weighted modality improves by 0.295 9 dB, while that of the T1-weighted modality decreases by 0.064 6 dB; 3) when the up-scaling factor is 4, the PSNR of the T1-weighted and T2-weighted modalities improves by 0.269 3 dB and 0.042 9 dB, respectively. It is also worth noting that our network uses more than 10 times fewer parameters than the popular reference method.
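For reference, PSNR as reported above can be computed as follows. This is a generic sketch assuming image intensities normalized to [0, 1], not the paper's exact evaluation script.

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for intensities in [0, max_val]."""
    mse = torch.mean((sr - hr) ** 2)
    return float(10.0 * torch.log10(max_val ** 2 / mse))
```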
Conclusion  The correlation information between different MR image modalities is beneficial to image super-resolution. Our multi-modal MR image super-resolution method reconstructs high-quality results for two modalities simultaneously within a single network that integrates their correlation information. It achieves more competitive performance than state-of-the-art super-resolution methods with a relatively lightweight model. © 2023 Editorial and Publishing Board of JIG.

Full text