TY - GEN
T1 - Symbolic Task Compression in Structured Task Learning
AU - Saveriano, Matteo
AU - Seegerer, Michael
AU - Caccavale, Riccardo
AU - Finzi, Alberto
AU - Lee, Dongheui
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/3/26
Y1 - 2019/3/26
AB - Learning everyday tasks from human demonstrations requires unsupervised segmentation of seamless demonstrations, which may result in highly fragmented and widely spread symbolic representations. Since the time needed to plan the task depends on the number of possible behaviors, it is preferable to keep this number as low as possible. In this work, we present an approach that simplifies the symbolic representation of a learned task and thereby reduces the number of possible behaviors. The simplification is achieved by merging sequential behaviors, i.e., behaviors that are logically sequential and act on the same object. Assuming that the task at hand is encoded in a rooted tree, the approach traverses the tree searching for sequential nodes (behaviors) to merge. Using simple rules to assign pre- and post-conditions to each node, our approach significantly reduces the number of nodes while keeping the task flexibility unaltered and avoiding perceptual aliasing. Experiments on automatically generated and learned tasks show a significant reduction of the planning time.
KW - Learning from demonstration
KW - Structured Task Learning
KW - Task simplification
UR - http://www.scopus.com/inward/record.url?scp=85064134896&partnerID=8YFLogxK
U2 - 10.1109/IRC.2019.00033
DO - 10.1109/IRC.2019.00033
M3 - Conference contribution
AN - SCOPUS:85064134896
T3 - Proceedings - 3rd IEEE International Conference on Robotic Computing, IRC 2019
SP - 171
EP - 176
BT - Proceedings - 3rd IEEE International Conference on Robotic Computing, IRC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd IEEE International Conference on Robotic Computing, IRC 2019
Y2 - 25 February 2019 through 27 February 2019
ER -