Abstract:
The problem of video demoireing is a new challenge in video restoration. Unlike image demoireing, which involves removing static and uniform patterns, video demoireing requires tackling dynamic and varied moire patterns while maintaining video details, colors, and temporal consistency. It is particularly challenging to model moire patterns for videos with camera or object motions, where separating moire from the original video content across frames is extremely difficult. Nonetheless, we observe that the spatial distribution of moire patterns is often sparse on each frame, and their long-range temporal correlation is not significant. To fully leverage this phenomenon, a sparsity-constrained spatial self-attention scheme is proposed to concentrate on removing sparse moire efficiently from each frame without being distracted by dynamic video content. The frame-wise spatial features are then correlated and aggregated via a local temporal cross-frame-attention module to produce temporally consistent, high-quality moire-free videos. These decoupled spatial and temporal transformers constitute the Spatio-Temporal Decomposition Network, dubbed STD-Net. For evaluation, we present a large-scale video demoireing benchmark featuring various real-life scenes, camera motions, and object motions. We demonstrate that our proposed model effectively and efficiently achieves superior performance on both video demoireing and single-image demoireing tasks. The proposed dataset is released at https://github.com/FZU-N/LVDM.
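The decoupled design described in the abstract (per-frame sparse spatial self-attention followed by local temporal cross-frame attention) can be illustrated with a minimal numpy sketch. This is an assumed toy rendering, not the paper's implementation: the saliency proxy (feature norm), the `keep_ratio` parameter, and the fixed temporal `window` are all illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_spatial_attention(tokens, keep_ratio=0.5):
    """Self-attention restricted to the most 'salient' tokens of one frame.

    tokens: (N, C) flattened spatial features of a single frame.
    The feature-norm saliency and keep_ratio are hypothetical stand-ins
    for the paper's sparsity constraint.
    """
    n, c = tokens.shape
    k = max(1, int(n * keep_ratio))
    saliency = np.linalg.norm(tokens, axis=1)
    idx = np.argsort(saliency)[-k:]            # keep the k most salient tokens
    sel = tokens[idx]                          # (k, C)
    attn = softmax(sel @ sel.T / np.sqrt(c))   # attention only among kept tokens
    out = tokens.copy()
    out[idx] = attn @ sel                      # refine kept tokens; others pass through
    return out

def local_temporal_attention(frames, window=1):
    """Cross-frame attention within a local temporal window, per spatial location.

    frames: (T, N, C) per-frame token features.
    """
    t_len, n, c = frames.shape
    out = np.empty_like(frames)
    for t in range(t_len):
        lo, hi = max(0, t - window), min(t_len, t + window + 1)
        neigh = frames[lo:hi]                                        # (W, N, C)
        scores = np.einsum('nc,wnc->nw', frames[t], neigh) / np.sqrt(c)
        weights = softmax(scores, axis=1)                            # over the window
        out[t] = np.einsum('nw,wnc->nc', weights, neigh)
    return out

# Toy pipeline: spatial stage per frame, then local temporal aggregation.
rng = np.random.default_rng(0)
video = rng.standard_normal((5, 16, 8))       # 5 frames, 16 tokens, 8 channels
spatial = np.stack([sparse_spatial_attention(f) for f in video])
restored = local_temporal_attention(spatial, window=1)
```

The key point the sketch captures is the factorization: spatial attention never mixes information across frames, and temporal attention only mixes a short window of frames at the same spatial location, rather than computing full spatio-temporal attention.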
Source:
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
ISSN: 1051-8215
Year: 2024
Issue: 9
Volume: 34
Page: 8562-8575
Impact Factor: 8.300 (JCR@2023)
CAS Journal Grade: 1