TY - JOUR
T1 - UnstrPrompt
T2 - Large Language Model Prompt for Driving in Unstructured Scenarios
AU - Li, Yuchen
AU - Li, Luxi
AU - Wu, Zizhang
AU - Bing, Zhenshan
AU - Xuanyuan, Zhe
AU - Knoll, Alois Christian
AU - Chen, Long
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2024
Y1 - 2024
N2 - The integration of language descriptions or prompts with Large Language Models (LLMs) into visual tasks is currently a focal point in the advancement of autonomous driving. This line of work has shown notable advances across various standard datasets. Nevertheless, progress in integrating language prompts faces challenges in unstructured scenarios, primarily due to the limited availability of paired data. To address this challenge, we introduce a language prompt set called 'UnstrPrompt.' This prompt set is derived from three prominent unstructured autonomous driving datasets: IDD, ORFD, and AutoMine, and comprises a total of 6K language descriptions. In response to the distinctive features of unstructured scenarios, we have developed a structured approach for prompt generation encompassing three key components: scene, road, and instance. Additionally, we provide a detailed overview of the language generation process and the validation procedures. We conduct tests on segmentation tasks, and our experiments demonstrate that text-image fusion can improve accuracy by more than 3% on unstructured data, and that our description architecture outperforms the generic urban architecture by more than 0.1%. This work holds the potential to advance interaction and foundation models in this scenario.
AB - The integration of language descriptions or prompts with Large Language Models (LLMs) into visual tasks is currently a focal point in the advancement of autonomous driving. This line of work has shown notable advances across various standard datasets. Nevertheless, progress in integrating language prompts faces challenges in unstructured scenarios, primarily due to the limited availability of paired data. To address this challenge, we introduce a language prompt set called 'UnstrPrompt.' This prompt set is derived from three prominent unstructured autonomous driving datasets: IDD, ORFD, and AutoMine, and comprises a total of 6K language descriptions. In response to the distinctive features of unstructured scenarios, we have developed a structured approach for prompt generation encompassing three key components: scene, road, and instance. Additionally, we provide a detailed overview of the language generation process and the validation procedures. We conduct tests on segmentation tasks, and our experiments demonstrate that text-image fusion can improve accuracy by more than 3% on unstructured data, and that our description architecture outperforms the generic urban architecture by more than 0.1%. This work holds the potential to advance interaction and foundation models in this scenario.
KW - Large language model
KW - UnstrPrompt
KW - segmentation
KW - unstructured scenarios
UR - http://www.scopus.com/inward/record.url?scp=85186110288&partnerID=8YFLogxK
U2 - 10.1109/JRFID.2024.3367975
DO - 10.1109/JRFID.2024.3367975
M3 - Article
AN - SCOPUS:85186110288
SN - 2469-7281
VL - 8
SP - 367
EP - 375
JO - IEEE Journal of Radio Frequency Identification
JF - IEEE Journal of Radio Frequency Identification
ER -