TY - GEN
T1 - Visual repetition sampling for robot manipulation planning
AU - Puang, En Yen
AU - Lehner, Peter
AU - Márton, Zoltán-Csaba
AU - Durner, Maximilian
AU - Triebel, Rudolph
AU - Albu-Schäffer, Alin
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - One of the main challenges in sampling-based motion planning is finding an efficient sampling strategy. While methods such as the Rapidly-exploring Random Tree (RRT) have been shown to be more reliable in complex environments than optimization-based methods, they often require longer planning times, which reduces their usability for real-time applications. Recently, biased sampling methods have been shown to remedy this issue. For example, Gaussian Mixture Models (GMMs) have been used to sample more efficiently in feasible regions of the configuration space. Once the GMM is learned, however, this approach does not adapt its biases to the individual planning scene during inference. Hence, in this work we propose a more efficient sampling strategy that further biases the GMM based on visual input upon query. We employ an autoencoder trained entirely in simulation to extract features from depth images and use the latent representation to adjust the weights of each mixture component in the GMM. We show empirically that this improves the sampling efficiency of an RRT motion planner in both real and simulated scenes.
AB - One of the main challenges in sampling-based motion planning is finding an efficient sampling strategy. While methods such as the Rapidly-exploring Random Tree (RRT) have been shown to be more reliable in complex environments than optimization-based methods, they often require longer planning times, which reduces their usability for real-time applications. Recently, biased sampling methods have been shown to remedy this issue. For example, Gaussian Mixture Models (GMMs) have been used to sample more efficiently in feasible regions of the configuration space. Once the GMM is learned, however, this approach does not adapt its biases to the individual planning scene during inference. Hence, in this work we propose a more efficient sampling strategy that further biases the GMM based on visual input upon query. We employ an autoencoder trained entirely in simulation to extract features from depth images and use the latent representation to adjust the weights of each mixture component in the GMM. We show empirically that this improves the sampling efficiency of an RRT motion planner in both real and simulated scenes.
UR - http://www.scopus.com/inward/record.url?scp=85071487871&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2019.8793942
DO - 10.1109/ICRA.2019.8793942
M3 - Conference contribution
AN - SCOPUS:85071487871
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 9236
EP - 9242
BT - 2019 International Conference on Robotics and Automation, ICRA 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 International Conference on Robotics and Automation, ICRA 2019
Y2 - 20 May 2019 through 24 May 2019
ER -