LV-GAN: A deep learning approach for limited-view optoacoustic imaging based on hybrid datasets

Tong Lu, Tingting Chen, Feng Gao, Biao Sun, Vasilis Ntziachristos, Jiao Li

Publication: Contribution to journal › Article › peer review

25 citations (Scopus)

Abstract

Optoacoustic imaging (OAI) methods are rapidly evolving for resolving optical contrast in medical imaging applications. In practice, measurement strategies are commonly implemented under limited-view conditions due to oversized imaging objects or system design limitations. Data acquired by limited-view detection can introduce artifacts and distortions into reconstructed optoacoustic (OA) images. We propose a hybrid data-driven deep learning approach based on a generative adversarial network (GAN), termed LV-GAN, to efficiently recover high-quality images from limited-view OA images. Trained on both simulated and experimental data, LV-GAN achieves high recovery accuracy even at limited detection angles of less than 60°. The feasibility of LV-GAN for artifact removal in biological applications was validated by ex vivo experiments on two different OAI systems, suggesting the high potential of LV-GAN for ubiquitous use in optimizing image quality or system design across different scanners and application scenarios.
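The abstract describes LV-GAN only at a high level. As a rough illustration of how a paired, GAN-based artifact-removal model of this kind can be set up, the sketch below shows a minimal conditional GAN training step in PyTorch. The residual generator, patch-style discriminator, layer sizes, and L1-plus-adversarial loss weighting are assumptions chosen for illustration, not the published LV-GAN architecture.

# Minimal, hypothetical sketch of a conditional GAN for limited-view artifact
# removal, assuming paired 2D OA reconstructions as single-channel images.
# This is NOT the authors' LV-GAN; all layer sizes and losses are placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a limited-view reconstruction to an artifact-reduced image."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual connection: predict a correction to the limited-view input.
        return x + self.net(x)

class Discriminator(nn.Module):
    """Patch-style discriminator on (limited-view, candidate) image pairs."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, limited_view, candidate):
        return self.net(torch.cat([limited_view, candidate], dim=1))

def train_step(gen, disc, opt_g, opt_d, limited_view, full_view, l1_weight=100.0):
    """One adversarial + L1 update in the spirit of paired image translation."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    fake = gen(limited_view).detach()
    d_real = disc(limited_view, full_view)
    d_fake = disc(limited_view, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool the discriminator and stay close to the target.
    fake = gen(limited_view)
    d_fake = disc(limited_view, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * nn.functional.l1_loss(fake, full_view)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    # Dummy batch standing in for paired limited-view / full-view reconstructions,
    # as would come from a hybrid simulated-plus-experimental dataset.
    lv = torch.randn(4, 1, 128, 128)
    fv = torch.randn(4, 1, 128, 128)
    print(train_step(gen, disc, opt_g, opt_d, lv, fv))

In this kind of setup the hybrid dataset enters only through the training pairs: simulated and experimental limited-view/full-view pairs are fed through the same loop, which is one plausible reading of "trained on both simulated and experimental data" in the abstract.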

Original language: English
Article number: e202000325
Journal: Journal of Biophotonics
Volume: 14
Issue number: 2
DOIs
Publication status: Published - Feb. 2021
