TY - CONF
T1 - The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue
AU - Foster, Mary Ellen
AU - Bard, Ellen Gurman
AU - Guhe, Markus
AU - Hill, Robin L.
AU - Oberlander, Jon
AU - Knoll, Alois
PY - 2008
AB - Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; however, other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures. When referring to objects in the context of a task-based interaction that involves jointly manipulating those objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such haptic-ostensive references are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.
KW - Multimodal dialogue
KW - Referring expressions
UR - http://www.scopus.com/inward/record.url?scp=77649242754&partnerID=8YFLogxK
DO - 10.1145/1349822.1349861
M3 - Conference contribution
AN - SCOPUS:77649242754
SN - 9781605580173
T3 - HRI 2008 - Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction: Living with Robots
SP - 295
EP - 302
BT - HRI 2008 - Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction
T2 - 3rd ACM/IEEE International Conference on Human-Robot Interaction, HRI 2008
Y2 - 12 March 2008 through 15 March 2008
ER -