Abstract
Traditional super-resolution methods for Synthetic Aperture Radar (SAR) images rely heavily on hand-crafted visual features, while reconstruction algorithms based on generic Convolutional Neural Networks (CNNs) show poor fidelity to target edge contours and a weak ability to reconstruct small targets. To address these problems, this paper proposes a Dilated-ResNet CNN (DR-CNN) super-resolution model based on feature reuse, termed the Feature Reuse Dilated-ResNet CNN (FRDR-CNN), and introduces a perceptual loss, achieving accurate 4× super-resolution of SAR images. The DR-CNN structure enlarges the receptive field while limiting the loss of feature-map resolution inside the model, improving sensitivity to fine details. To make full use of features at different levels, the FRDR-CNN concatenates feature maps from different levels, which greatly improves the efficiency of the feature extraction module and further improves super-resolution accuracy. The perceptual loss yields superior recovery of image texture and edge information. Experimental results show that, compared with traditional algorithms and several popular CNN-based super-resolution algorithms, FRDR-CNN reconstructs small objects better and recovers contour details more faithfully. Quantitatively, it achieves a Peak Signal-to-Noise Ratio (PSNR) of 33.5023 dB, a Structural Similarity Index (SSIM) of 0.5127, and an Edge Preservation Degree based on the Ratio of Average (EPD-ROA) of 0.4243 and 0.4373 in the horizontal and vertical directions, respectively.
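For concreteness, the PyTorch sketch below illustrates the three ingredients named in the abstract: dilated residual blocks that enlarge the receptive field without downsampling, concatenation of multi-level feature maps (feature reuse), and a VGG-based perceptual loss. All channel counts, dilation rates, block counts, and the VGG-19 layer choice are illustrative assumptions; this is a minimal sketch of the general techniques, not the paper's exact FRDR-CNN configuration.

```python
# Illustrative sketch only: layer sizes, dilation rates, and the VGG-19 cut-off
# are assumptions, not the authors' published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights


class DilatedResBlock(nn.Module):
    """Residual block with dilated convolutions: a larger receptive field
    without downsampling, so the feature map keeps its spatial resolution."""

    def __init__(self, channels: int = 64, dilation: int = 2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return x + out  # residual connection


class FeatureReuseSR(nn.Module):
    """Feature-reuse idea: concatenate the outputs of every block so shallow
    and deep features both reach reconstruction, then upsample 4x with two
    PixelShuffle steps."""

    def __init__(self, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)  # single-channel SAR input
        self.blocks = nn.ModuleList(
            DilatedResBlock(channels) for _ in range(num_blocks))
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, x):
        feat = self.head(x)
        level_outputs = []
        for block in self.blocks:
            feat = block(feat)
            level_outputs.append(feat)          # reuse features from every level
        fused = self.fuse(torch.cat(level_outputs, dim=1))
        return self.up(fused)


class PerceptualLoss(nn.Module):
    """Perceptual loss: L1 distance between VGG-19 feature maps of the
    super-resolved image and the high-resolution reference."""

    def __init__(self, cutoff: int = 16):  # cut-off layer is an assumption
        super().__init__()
        self.features = vgg19(weights=VGG19_Weights.DEFAULT).features[:cutoff].eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, sr, hr):
        sr3, hr3 = sr.repeat(1, 3, 1, 1), hr.repeat(1, 3, 1, 1)  # grey -> 3 channels
        return F.l1_loss(self.features(sr3), self.features(hr3))


if __name__ == "__main__":
    model = FeatureReuseSR()
    lr_patch = torch.randn(1, 1, 32, 32)   # low-resolution SAR patch
    sr_patch = model(lr_patch)             # -> (1, 1, 128, 128), i.e. 4x larger
    print(sr_patch.shape)
```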