Abstract
Traditional video compression coding methods are widely used. To further improve compression performance, deep learning-based video compression methods have received increasing attention. Existing deep learning video compression methods perform motion compensation based on optical flow, and the flow-based warping (alignment) process produces artifacts that reduce prediction accuracy. This paper proposes motion estimation in the deep feature domain and designs a corresponding neural network to extract motion information in that domain. On this basis, a multi-layer multi-hypothesis prediction motion compensation network is proposed. By applying multi-hypothesis prediction modules in the deep feature domain, the shallow feature domain and the pixel domain, the accuracy of motion compensation is improved, and thereby the overall rate-distortion performance. Simulation results show that the inter-frame predictions of the proposed algorithm mitigate artifacts, with visual quality significantly better than optical-flow alignment. The proposed algorithm also achieves better rate-distortion performance than the traditional H.264 and H.265 methods and the single-reference-frame deep learning methods DVC and DVCPro. Compared with the state-of-the-art DCVC method, the algorithm reduces coding time by approximately 26.8% with similar rate-distortion performance. Taking the H.264 encoding result as the benchmark, at the same bit rate the decoding quality is improved by 3.73 dB, 4.76 dB and 2.65 dB on the HEVC test sequences ClassB, ClassD and ClassE, respectively. The experimental results show that, when compressing video sequences, the proposed algorithm improves the accuracy of motion-compensated prediction frames, reduces the prediction error, shortens the compressed bitstream of the residual signal, and improves the overall rate-distortion performance.
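To make the multi-hypothesis idea concrete, the sketch below shows one plausible form of feature-domain multi-hypothesis motion compensation in PyTorch. It is an illustration under stated assumptions, not the paper's actual network: the module name `MultiHypothesisMC`, the bilinear warping scheme, and the hypothesis count are all hypothetical. Each hypothesis warps the reference features with its own offset field, and a learned softmax mask fuses the warped candidates.

```python
# Minimal sketch (assumed, not the authors' implementation) of
# multi-hypothesis motion compensation in a feature domain.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(feat, flow):
    """Bilinearly warp feature map `feat` (B,C,H,W) by offsets `flow` (B,2,H,W)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)   # (2,H,W), (x,y)
    coords = grid.unsqueeze(0) + flow                             # absolute sampling positions
    # Normalize coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)         # (B,H,W,2)
    return F.grid_sample(feat, grid_norm, align_corners=True)

class MultiHypothesisMC(nn.Module):
    """Predict `num_hyp` offset fields plus fusion logits from the
    concatenated (reference, current) features, warp the reference once
    per hypothesis, and fuse the candidates with a softmax-weighted sum."""
    def __init__(self, channels, num_hyp=4):
        super().__init__()
        self.num_hyp = num_hyp
        self.head = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_hyp * 2 + num_hyp, 3, padding=1),
        )

    def forward(self, ref_feat, cur_feat):
        out = self.head(torch.cat((ref_feat, cur_feat), dim=1))
        flows = out[:, : self.num_hyp * 2]                        # num_hyp offset fields
        weights = torch.softmax(out[:, self.num_hyp * 2 :], dim=1)
        pred = 0.0
        for k in range(self.num_hyp):
            cand = warp(ref_feat, flows[:, 2 * k : 2 * k + 2])    # k-th hypothesis
            pred = pred + weights[:, k : k + 1] * cand            # soft fusion
        return pred
```

In the paper's multi-layer design, such a module would be applied at the deep feature, shallow feature, and pixel levels; the sketch shows a single level. Fusing several warped candidates rather than committing to one flow field is what allows the prediction to suppress the artifacts that single-flow alignment introduces.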