Abstract
Images captured at nighttime often suffer from low light and blur, caused primarily by dim environments and the frequent use of long exposure times. Existing methods either handle the two types of degradation independently or rely on carefully designed priors generated by complex mechanisms, resulting in poor generalization and high model complexity. To address these challenges, we propose an end-to-end framework named LIEDNet that efficiently and effectively restores high-quality images on both real-world and synthetic data. Specifically, LIEDNet consists of three essential components: the Visual State Space Module (VSSM), the Local Feature Module (LFM), and the Dual Gated-Dconv Feedforward Network (DGDFFN). The integration of the VSSM and LFM enables the model to capture both global and local features while maintaining low computational overhead. Additionally, the DGDFFN improves image fidelity by extracting multi-scale structural information. Extensive experiments on real-world and synthetic datasets demonstrate the superior performance of LIEDNet in restoring low-light, blurry images. The code is available at https://github.com/MingyuLiu1/LIEDNet.
| Original language | English |
| --- | --- |
| Pages (from-to) | 6602-6615 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Circuits and Systems for Video Technology |
| Volume | 35 |
| Issue number | 7 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Image restoration
- deblurring
- lightweight network
- low-light image enhancement