Ultrasound Report Generation With Cross-Modality Feature Alignment via Unsupervised Guidance

Jun Li, Tongkun Su, Baoliang Zhao, Faqin Lv, Qiong Wang, Nassir Navab, Ying Hu, Zhongliang Jiang

Research output: Contribution to journal › Article › peer-review

Abstract

Automatic report generation has emerged as a significant research area in computer-aided diagnosis, aiming to alleviate the burden on clinicians by generating reports automatically from medical images. In this work, we propose a novel framework for automatic ultrasound report generation that combines unsupervised and supervised learning methods to aid the report generation process. Our framework uses unsupervised learning to extract latent knowledge from ultrasound text reports, which serves as prior information to guide the model in aligning visual and textual features, thereby addressing the challenge of cross-modality feature discrepancy. Additionally, we design a global semantic comparison mechanism to generate more comprehensive and accurate medical reports. To enable ultrasound report generation, we constructed three large-scale ultrasound image-text datasets from different organs for training and validation. Extensive comparisons with state-of-the-art approaches demonstrate superior performance across all three datasets. Code and dataset are available at this link.
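The abstract does not specify how the unsupervised prior and the alignment loss are implemented, so the following is only a minimal sketch of one plausible instantiation in PyTorch: topic centroids obtained by k-means clustering over report embeddings act as the unsupervised guidance, a contrastive-style loss pulls each image embedding toward its paired report's nearest topic, and a global semantic term compares paired image/report embeddings. All function names, the clustering choice, and hyperparameters (`n_topics`, `tau`) are hypothetical, not taken from the paper.

```python
# Hedged sketch only: illustrates the general idea of unsupervised guidance for
# cross-modality feature alignment, not the authors' actual method.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def build_text_prior(report_embs: torch.Tensor, n_topics: int = 16) -> torch.Tensor:
    """Unsupervised guidance: cluster report embeddings into topic centroids."""
    km = KMeans(n_clusters=n_topics, n_init=10).fit(report_embs.cpu().numpy())
    return torch.tensor(km.cluster_centers_, dtype=report_embs.dtype)


def alignment_loss(img_embs, txt_embs, prior, tau: float = 0.07):
    """Pull each image embedding toward its paired report's nearest topic centroid."""
    img = F.normalize(img_embs, dim=-1)
    txt = F.normalize(txt_embs, dim=-1)
    topics = F.normalize(prior.to(img_embs.device), dim=-1)
    target = (txt @ topics.T).argmax(dim=-1)   # topic assignment per report, (B,)
    logits = img @ topics.T / tau              # image-to-topic similarities, (B, K)
    return F.cross_entropy(logits, target)


def global_semantic_loss(img_embs, txt_embs):
    """Global comparison: encourage paired image/report embeddings to agree."""
    img = F.normalize(img_embs, dim=-1)
    txt = F.normalize(txt_embs, dim=-1)
    return (1.0 - (img * txt).sum(dim=-1)).mean()


if __name__ == "__main__":
    B, D = 8, 256
    corpus = torch.randn(500, D)   # stand-in for encoded training reports
    prior = build_text_prior(corpus)
    img, txt = torch.randn(B, D), torch.randn(B, D)
    loss = alignment_loss(img, txt, prior) + global_semantic_loss(img, txt)
    print(loss.item())
```

In this sketch the prior is computed once over the training corpus and held fixed, so the text-derived topics steer the image encoder without requiring paired supervision for the alignment term itself.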

Original language: English
Pages (from-to): 19-30
Number of pages: 12
Journal: IEEE Transactions on Medical Imaging
Volume: 44
Issue number: 1
DOIs
State: Published - 2025

Keywords

  • Ultrasound image
  • breast
  • liver
  • report generation
  • thyroid
  • transformer
  • unsupervised learning
