LIEDNet: A Lightweight Network for Low-Light Enhancement and Deblurring

Mingyu Liu, Yuning Cui, Wenqi Ren, Juxiang Zhou, Alois C. Knoll

Research output: Contribution to journal › Article › peer-review


Abstract

Images captured at nighttime often suffer from low light and blur, caused primarily by dim environments and the frequent use of long exposures. Existing methods either handle the two types of degradation independently or rely on carefully designed priors generated by complex mechanisms, resulting in poor generalization and high model complexity. To address these challenges, we propose an end-to-end framework named LIEDNet that efficiently and effectively restores high-quality images on both real-world and synthetic data. Specifically, LIEDNet consists of three essential components: the Visual State Space Module (VSSM), the Local Feature Module (LFM), and the Dual Gated-Dconv Feedforward Network (DGDFFN). The integration of the VSSM and LFM enables the model to capture both global and local features while maintaining low computational overhead. Additionally, the DGDFFN improves image fidelity by extracting multi-scale structural information. Extensive experiments on real-world and synthetic datasets demonstrate the superior performance of LIEDNet in restoring low-light, blurry images. The code is available at https://github.com/MingyuLiu1/LIEDNet.
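To make the abstract's architecture concrete, the following is a minimal NumPy sketch of one block combining a global branch and a local branch, followed by a gated feedforward. This is an illustration of the general design pattern only: `global_scan` is a placeholder for the VSSM (the paper's selective state-space mixing is far more involved), and `dgdffn` is a hypothetical two-branch gated depthwise-conv feedforward; all function names, shapes, and the fusion-by-addition choice are assumptions, not the authors' implementation.

```python
import numpy as np

def depthwise_conv3x3(x, w):
    """Per-channel 3x3 convolution with zero padding. x: (C,H,W), w: (C,3,3)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.empty_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i+3, j:j+3] * w[c])
    return out

def global_scan(x):
    """Stand-in for the VSSM's global mixing: a per-channel running average
    over the flattened spatial sequence. NOT the paper's state-space model;
    merely a placeholder that, like it, propagates information across the
    whole image in O(HW) time."""
    C, H, W = x.shape
    seq = x.reshape(C, -1)
    cum = np.cumsum(seq, axis=1) / np.arange(1, H * W + 1)
    return cum.reshape(C, H, W)

def dgdffn(x, w_a, w_b):
    """Hypothetical dual gated-dconv feedforward: two depthwise-conv branches
    gate each other elementwise (sigmoid gate), with a residual connection."""
    a = depthwise_conv3x3(x, w_a)
    b = depthwise_conv3x3(x, w_b)
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sigmoid(a) * b + x

def liednet_block(x, w_local, w_a, w_b):
    """One simplified block: global branch (scan) + local branch (dconv),
    fused by addition with a residual, then the gated feedforward."""
    fused = global_scan(x) + depthwise_conv3x3(x, w_local) + x
    return dgdffn(fused, w_a, w_b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
weights = [rng.standard_normal((4, 3, 3)) * 0.1 for _ in range(3)]
y = liednet_block(x, *weights)
print(y.shape)  # (4, 8, 8)
```

The sketch keeps the shape-preserving, dual-branch structure the abstract describes: depthwise convolutions keep the local branch and feedforward lightweight, which is consistent with the paper's lightweight-network goal.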

Original language: English
Pages (from-to): 6602-6615
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 35
Issue number: 7
DOIs
State: Published - 2025

Keywords

  • Image restoration
  • deblurring
  • lightweight network
  • low-light image enhancement
