TY - GEN
T1 - On-line simultaneous learning and recognition of everyday activities from virtual reality performances
AU - Bates, Tamas
AU - Ramirez-Amaro, Karinne
AU - Inamura, Tetsunari
AU - Cheng, Gordon
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/13
AB - Capturing realistic human behaviors is essential for learning human models that can later be transferred to robots. Recent improvements in virtual reality (VR) head-mounted displays provide a viable way to collect natural examples of human behavior without the difficulties often associated with capturing performances in a physical environment. We present a realistic, cluttered VR environment for experimentation with household tasks, paired with a semantic extraction and reasoning system that utilizes data collected in real time and applies ontology-based reasoning to learn and classify activities performed in VR. The system performs continuous segmentation of users' hand motions and simultaneously classifies known actions while learning new ones on demand. From its observations, the system then constructs a graph of all related activities in the environment, extracting the task space used by observed users during their performances. The action recognition and learning system maintained an accuracy of around 92% while handling an environment more complex and realistic than those in earlier work in both physical and virtual spaces.
UR - http://www.scopus.com/inward/record.url?scp=85041956487&partnerID=8YFLogxK
DO - 10.1109/IROS.2017.8206193
M3 - Conference contribution
AN - SCOPUS:85041956487
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 3510
EP - 3515
BT - IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017
Y2 - 24 September 2017 through 28 September 2017
ER -