UnstrPrompt: Large Language Model Prompt for Driving in Unstructured Scenarios

Yuchen Li, Luxi Li, Zizhang Wu, Zhenshan Bing, Zhe Xuanyuan, Alois Christian Knoll, Long Chen

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

The integration of language descriptions, or prompts, with Large Language Models (LLMs) into visual tasks is currently a focal point in the advancement of autonomous driving, and recent work has shown notable gains across various standard datasets. Nevertheless, progress in integrating language prompts faces challenges in unstructured scenarios, primarily due to the limited availability of paired data. To address this challenge, we introduce a language prompt set called 'UnstrPrompt.' The set is derived from three prominent unstructured autonomous driving datasets (IDD, ORFD, and AutoMine) and comprises a total of 6K language descriptions. In response to the distinctive features of unstructured scenarios, we develop a structured approach to prompt generation encompassing three key components: scene, road, and instance. We also provide a detailed overview of the language generation process and the validation procedures. We conduct tests on segmentation tasks, and our experiments demonstrate that text-image fusion can improve accuracy by more than 3% on unstructured data; in addition, our description architecture outperforms the generic urban architecture by more than 0.1%. This work holds the potential to advance aspects such as interaction and foundation models in this scenario.

Original language: English
Pages (from-to): 367-375
Number of pages: 9
Journal: IEEE Journal of Radio Frequency Identification
Volume: 8
DOIs
State: Published - 2024

Keywords

  • Large language model
  • UnstrPrompt
  • segmentation
  • unstructured scenarios
