Abstract

The integration of language descriptions or prompts with Large Language Models (LLMs) into visual tasks is currently a focal point in the advancement of autonomous driving, and it has produced notable gains across various standard datasets. However, progress in integrating language prompts stalls in unstructured scenarios, primarily because paired data are scarce. To address this challenge, we introduce a language prompt set called 'UnstrPrompt'. The set is derived from three prominent unstructured autonomous driving datasets, IDD, ORFD, and AutoMine, and comprises 6K language descriptions in total. To reflect the distinctive characteristics of unstructured scenarios, we design a structured approach for prompt generation with three components: scene, road, and instance. We also give a detailed account of the language generation process and the validation procedure. We evaluate on segmentation tasks, and our experiments show that text-image fusion improves accuracy by more than 3% on unstructured data, while our description architecture outperforms the generic urban architecture by more than 0.1%. This work holds the potential to advance interaction and foundation models in such scenarios.
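To make the three-part prompt structure described above more concrete, the sketch below shows one plausible way to represent an UnstrPrompt entry. The field names, the concatenation order, and the example text are assumptions for illustration only, not the released data format.

```python
from dataclasses import dataclass


@dataclass
class UnstrPrompt:
    """Hypothetical record layout for one UnstrPrompt entry.

    The released set pairs each image with scene-, road-, and
    instance-level descriptions; the field names here are assumptions.
    """
    image_id: str   # identifier of the paired image (IDD / ORFD / AutoMine)
    scene: str      # scene-level description: environment, lighting, weather
    road: str       # road-level description: surface type, boundaries, markings
    instance: str   # instance-level description: nearby objects and obstacles

    def to_text(self) -> str:
        # Concatenate the three components into a single language prompt
        return " ".join([self.scene, self.road, self.instance])


# Illustrative usage with invented content
example = UnstrPrompt(
    image_id="orfd_000123",
    scene="An off-road scene on an overcast day with sparse vegetation.",
    road="An unpaved dirt track without lane markings or clear boundaries.",
    instance="A parked excavator stands to the right of the drivable area.",
)
print(example.to_text())
```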

Original language: English
Pages (from - to): 367-375
Number of pages: 9
Journal: IEEE Journal of Radio Frequency Identification
Volume: 8
DOIs
Publication status: Published - 2024
