FlexR: Few-shot Classification with Language Embeddings for Structured Reporting of Chest X-rays

Matthias Keicher, Kamilia Zaripova, Tobias Czempiel, Kristina Mach, Ashkan Khakzar, Nassir Navab

Publication: Contribution to journal › Conference article › Peer reviewed

2 citations (Scopus)

Abstract

The automation of chest X-ray reporting has garnered significant interest due to the time-consuming nature of the task. However, the clinical accuracy of free-text reports has proven challenging to quantify using natural language processing metrics, given the complexity of medical information, the variety of writing styles, and the potential for typos and inconsistencies. Structured reporting and standardized reports, on the other hand, can provide consistency and formalize the evaluation of clinical correctness. However, high-quality annotations for structured reporting are scarce. Therefore, we propose a method to predict clinical findings defined by sentences in structured reporting templates, which can be used to fill such templates. The approach involves training a contrastive language-image model using chest X-rays and related free-text radiological reports, then creating textual prompts for each structured finding and optimizing a classifier to predict clinical findings in the medical image. Results show that even with limited image-level annotations for training, the method can accomplish the structured reporting tasks of severity assessment of cardiomegaly and localizing pathologies in chest X-rays.
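A minimal sketch of the prompt-based few-shot setup outlined in the abstract (contrastive pretraining on image-report pairs, one prompt sentence per structured finding, and a classifier initialized from the prompt embeddings) is given below. This is not the authors' implementation: the toy image and text encoders, the vocabulary size, the example prompt sentences, and the PromptInitializedClassifier helper are placeholders assumed purely for illustration.

# Minimal sketch (not the authors' code): score a chest X-ray against language
# embeddings of structured-finding prompts, CLIP-style. Encoders, vocabulary
# size, and prompt sentences below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512

# Stand-ins for an image/text encoder pair pretrained contrastively on
# chest X-rays and their free-text radiology reports.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, EMBED_DIM))
text_encoder = nn.EmbeddingBag(num_embeddings=30522, embedding_dim=EMBED_DIM)

# One prompt sentence per structured finding (examples only).
finding_prompts = [
    "mild cardiomegaly",
    "moderate cardiomegaly",
    "severe cardiomegaly",
    "pleural effusion in the left lung",
    "pleural effusion in the right lung",
]

def encode_prompts(token_ids: list) -> torch.Tensor:
    """Embed each tokenized prompt and L2-normalize: one class vector per finding."""
    embs = torch.cat([text_encoder(ids.unsqueeze(0)) for ids in token_ids])
    return F.normalize(embs, dim=-1)

class PromptInitializedClassifier(nn.Module):
    """Linear classifier whose weights start from the prompt embeddings and are
    then fine-tuned on the limited image-level annotations (few-shot step)."""

    def __init__(self, class_embeddings: torch.Tensor):
        super().__init__()
        self.weight = nn.Parameter(class_embeddings.detach().clone())

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        img_emb = F.normalize(image_encoder(images), dim=-1)
        return img_emb @ self.weight.T  # cosine-similarity logits per finding

# Usage with dummy data: tokenized prompts and a batch of 224x224 images.
token_ids = [torch.randint(0, 30522, (6,)) for _ in finding_prompts]
classifier = PromptInitializedClassifier(encode_prompts(token_ids))
logits = classifier(torch.randn(2, 224, 224))  # shape (2, num_findings)

Initializing the classifier with the prompt embeddings lets training start from the language-based zero-shot predictions and adapt them using only a handful of image-level labels, which mirrors the few-shot setting described in the abstract.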

Original language: English
Pages (from - to): 1493-1508
Number of pages: 16
Journal: Proceedings of Machine Learning Research
Volume: 227
Publication status: Published - 2023
Event: 6th International Conference on Medical Imaging with Deep Learning, MIDL 2023 - Nashville, United States
Duration: 10 July 2023 to 12 July 2023
