TY - GEN
T1 - Automatic segmentation and recognition of human activities from observation based on semantic reasoning
AU - Ramirez-Amaro, Karinne
AU - Beetz, Michael
AU - Cheng, Gordon
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/10/31
Y1 - 2014/10/31
N2 - Automatically segmenting and recognizing human activities from observation typically requires very complex and sophisticated perception algorithms. Such systems are unlikely to be deployed on-line on a physical system, such as a robot, due to the pre-processing steps that these vision systems usually demand. In this work, we present and demonstrate that an appropriate semantic representation of the activity, without such complex perception systems, is sufficient to infer human activities from videos. First, we present a method to extract semantic rules based on three simple hand motions: move, not move, and tool use. Additionally, information about the object properties ObjectActedOn and ObjectInHand is used; these properties encapsulate the current context. These data are used to train a decision tree, which yields the semantic rules employed by a reasoning engine. In other words, we extract low-level information from videos and reason about the intended high-level human behaviors. The advantage of this abstract representation is that it yields more generic models of human behavior, even when the information is obtained from different scenarios. The results show that our system correctly segments and recognizes human behaviors with an accuracy of 85%. Another important aspect of our system is its scalability and adaptability toward new activities, which can be learned on demand. Our system has been fully implemented on a humanoid robot, the iCub, to experimentally validate its performance and robustness during on-line execution.
AB - Automatically segmenting and recognizing human activities from observation typically requires very complex and sophisticated perception algorithms. Such systems are unlikely to be deployed on-line on a physical system, such as a robot, due to the pre-processing steps that these vision systems usually demand. In this work, we present and demonstrate that an appropriate semantic representation of the activity, without such complex perception systems, is sufficient to infer human activities from videos. First, we present a method to extract semantic rules based on three simple hand motions: move, not move, and tool use. Additionally, information about the object properties ObjectActedOn and ObjectInHand is used; these properties encapsulate the current context. These data are used to train a decision tree, which yields the semantic rules employed by a reasoning engine. In other words, we extract low-level information from videos and reason about the intended high-level human behaviors. The advantage of this abstract representation is that it yields more generic models of human behavior, even when the information is obtained from different scenarios. The results show that our system correctly segments and recognizes human behaviors with an accuracy of 85%. Another important aspect of our system is its scalability and adaptability toward new activities, which can be learned on demand. Our system has been fully implemented on a humanoid robot, the iCub, to experimentally validate its performance and robustness during on-line execution.
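N1 - A minimal sketch of the decision-tree rule learning the abstract describes, assuming scikit-learn and invented feature values and activity labels (this is not the authors' code; it only illustrates how hand-motion and object-property features could map to activity labels whose learned branches serve as semantic rules):

     # Hypothetical illustration, not the paper's implementation.
     from sklearn.preprocessing import OrdinalEncoder
     from sklearn.tree import DecisionTreeClassifier, export_text

     # Each observation: [hand_motion, ObjectActedOn, ObjectInHand].
     # Values and activity labels are invented for illustration only.
     observations = [
         ["not_move", "none",  "none"],   # hand at rest, no object involved
         ["move",     "bread", "none"],   # hand moving toward an object
         ["move",     "none",  "bread"],  # hand moving while holding an object
         ["tool_use", "bread", "knife"],  # tool applied to an object
     ]
     activities = ["idle", "reach", "take", "cut"]

     encoder = OrdinalEncoder()
     X = encoder.fit_transform(observations)

     # Shallow tree so the learned branches stay readable as if-then rules.
     tree = DecisionTreeClassifier(max_depth=3, random_state=0)
     tree.fit(X, activities)

     # The printed branches play the role of the semantic rules that a
     # reasoning engine would apply to each video frame.
     print(export_text(tree, feature_names=["motion", "object_acted_on", "object_in_hand"]))

     # Infer the activity for a new frame-level observation from video.
     frame = encoder.transform([["move", "bread", "none"]])
     print(tree.predict(frame))  # -> ['reach']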
UR - http://www.scopus.com/inward/record.url?scp=84911474575&partnerID=8YFLogxK
U2 - 10.1109/IROS.2014.6943279
DO - 10.1109/IROS.2014.6943279
M3 - Conference contribution
AN - SCOPUS:84911474575
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 5043
EP - 5048
BT - IROS 2014 Conference Digest - IEEE/RSJ International Conference on Intelligent Robots and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014
Y2 - 14 September 2014 through 18 September 2014
ER -