TY - GEN
T1 - LieGrasPFormer
T2 - 19th IEEE International Conference on Automation Science and Engineering, CASE 2023
AU - Lin, Jianjie
AU - Rickert, Markus
AU - Knoll, Alois
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - With significant advancements in 6-DOF grasp learning networks, grasp selection for unseen objects has garnered much attention. However, most existing approaches rely on complex sequential pipelines to generate potential grasps, which can be challenging to implement. In this work, we propose an end-to-end grasp detection network, LieGrasPFormer, that generates diverse and accurate 6-DOF grasp poses based solely on point clouds. The network uses a hierarchical PointNet++ with a skip-connection point transformer encoder block to extract contextual local-region point features and efficiently produces a distribution of 6-DOF parallel-jaw grasps directly from a pure point cloud. Moreover, we introduce two grasp detection loss functions that give the neural network, like a generator, the ability to generalize to unseen objects, and that keep the network continuously differentiable. We trained LieGrasPFormer on the synthetic grasp dataset ACRONYM, which contains 17 million parallel-jaw grasps, and found that it generalizes well to a real scanned YCB dataset of 77 objects. Finally, experiments in the PyBullet simulator show that our proposed grasp detection network outperforms most state-of-the-art approaches with respect to grasp success rate.
UR - http://www.scopus.com/inward/record.url?scp=85174397515&partnerID=8YFLogxK
U2 - 10.1109/CASE56687.2023.10260543
DO - 10.1109/CASE56687.2023.10260543
M3 - Conference contribution
AN - SCOPUS:85174397515
T3 - IEEE International Conference on Automation Science and Engineering
BT - 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023
PB - IEEE Computer Society
Y2 - 26 August 2023 through 30 August 2023
ER -