TY - GEN
T1 - Unblind Text Inputs: Predicting Hint-text of Text Input in Mobile Apps via LLM
T2 - 2024 CHI Conference on Human Factors in Computing Systems, CHI 2024
AU - Liu, Zhe
AU - Chen, Chunyang
AU - Wang, Junjie
AU - Chen, Mengzhuo
AU - Wu, Boyu
AU - Huang, Yuekai
AU - Hu, Jun
AU - Wang, Qing
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s)
PY - 2024/5/11
Y1 - 2024/5/11
N2 - Mobile apps have become indispensable for accessing and participating in various environments, especially for low-vision users. Users with visual impairments rely on screen readers to read the content of each screen and understand which elements need to be operated. Screen readers read the hint-text attribute of a text input component to remind visually impaired users what to fill in. Unfortunately, based on our analysis of 4,501 Android apps with text inputs, over 76% of them are missing hint-text. These issues are mostly caused by developers' lack of awareness of visually impaired users. To overcome these challenges, we developed an LLM-based hint-text generation model called HintDroid, which analyzes the GUI information of input components and uses in-context learning to generate hint-text. To ensure the quality of the generated hint-text, we further designed a feedback-based inspection mechanism to adjust it. Automated experiments demonstrate a high BLEU score, and a user study further confirms its usefulness. HintDroid can not only help visually impaired individuals, but also help ordinary users understand the requirements of input components. HintDroid demo video: https://youtu.be/FWgfcctRbfI.
AB - Mobile apps have become indispensable for accessing and participating in various environments, especially for low-vision users. Users with visual impairments rely on screen readers to read the content of each screen and understand which elements need to be operated. Screen readers read the hint-text attribute of a text input component to remind visually impaired users what to fill in. Unfortunately, based on our analysis of 4,501 Android apps with text inputs, over 76% of them are missing hint-text. These issues are mostly caused by developers' lack of awareness of visually impaired users. To overcome these challenges, we developed an LLM-based hint-text generation model called HintDroid, which analyzes the GUI information of input components and uses in-context learning to generate hint-text. To ensure the quality of the generated hint-text, we further designed a feedback-based inspection mechanism to adjust it. Automated experiments demonstrate a high BLEU score, and a user study further confirms its usefulness. HintDroid can not only help visually impaired individuals, but also help ordinary users understand the requirements of input components. HintDroid demo video: https://youtu.be/FWgfcctRbfI.
KW - App Accessibility
KW - Large Language Model
KW - Mobile App Design
KW - User Interface
UR - http://www.scopus.com/inward/record.url?scp=85194877946&partnerID=8YFLogxK
U2 - 10.1145/3613904.3642939
DO - 10.1145/3613904.3642939
M3 - Conference contribution
AN - SCOPUS:85194877946
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2024 - Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 11 May 2024 through 16 May 2024
ER -