Curriculum Learning for Robot Manipulation Tasks With Sparse Reward Through Environment Shifts

Erdi Sayar, Giovanni Iacca, Alois Knoll

Research output: Contribution to journal › Article › peer-review


Multi-goal reinforcement learning (RL) with sparse rewards poses a significant challenge for RL methods. Hindsight experience replay (HER) addresses this challenge by learning from failures, replacing the desired goals with achieved states. However, HER often becomes inefficient when the desired goals are far from the initial states. This paper introduces co-adapting hindsight experience replay with environment shifts (in short, COHER). COHER generates progressively more complex tasks as soon as the agent's success rate surpasses a predefined threshold. The generated tasks and the agent are coupled so as to optimize the agent's behavior within each task-agent pair. We evaluate COHER on various sparse-reward robotic tasks that require obstacle-avoidance capabilities, comparing it with hindsight goal generation (HGG), curriculum-guided hindsight experience replay (CHER), and vanilla HER. The results show that COHER consistently outperforms the other methods and that the obtained policies can avoid obstacles without explicit information about their positions. Lastly, we deploy these policies on a real Franka robot for a Sim2Real analysis. We observe that the robot can complete the task while avoiding obstacles, whereas policies obtained with the other methods cannot. The videos and code are publicly available at:
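The abstract describes two mechanisms: HER's relabeling of desired goals with achieved states, and a curriculum that shifts to a harder environment once the success rate passes a threshold. The following is a minimal Python sketch of both ideas under common conventions (the "future" relabeling strategy and a 0/-1 sparse reward); the function names, data layout, and threshold are illustrative assumptions, not the paper's actual implementation.

```python
import random

def her_relabel(episode, k=4):
    """Hindsight relabeling ("future" strategy, an assumed variant):
    for each transition, sample up to k goals from the achieved states
    of the same or later time steps and relabel the transition with them.
    Each transition is a dict with "achieved" and "goal" keys."""
    relabeled = []
    for t, step in enumerate(episode):
        future = episode[t:]  # candidate hindsight goals from this step onward
        for _ in range(min(k, len(future))):
            new_goal = random.choice(future)["achieved"]
            # Sparse reward convention: 0 on reaching the (relabeled) goal, -1 otherwise.
            reward = 0.0 if step["achieved"] == new_goal else -1.0
            relabeled.append({**step, "goal": new_goal, "reward": reward})
    return relabeled

def maybe_shift_environment(success_rate, level, threshold=0.8, max_level=5):
    """Advance the curriculum to a more complex task once the agent's
    success rate surpasses a predefined threshold; otherwise stay put."""
    return min(level + 1, max_level) if success_rate >= threshold else level
```

In a training loop, `maybe_shift_environment` would be called after each evaluation round, so the task difficulty and the agent's competence co-adapt as the abstract describes.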

Original language: English
Pages (from-to): 46626-46635
Number of pages: 10
Journal: IEEE Access
State: Published - 2024


Keywords:
  • Curriculum learning-based reinforcement learning
  • hindsight experience replay
  • multi-goal reinforcement learning
  • robotic control


