Ultrasound-guided robotic navigation with deep reinforcement learning

Hannes Hase, Mohammad Farid Azampour, Maria Tirindelli, Magdalini Paschali, Walter Simson, Emad Fatemizadeh, Nassir Navab

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-reviewed

25 Scopus citations

Abstract

In this paper, we introduce the first reinforcement learning (RL) based robotic navigation method that uses ultrasound (US) images as input. Our approach combines state-of-the-art RL techniques, specifically deep Q-networks (DQN) with memory buffers, with a binary classifier that decides when to terminate the task. Our method is trained and evaluated on an in-house dataset collected from 34 volunteers and, compared to pure RL and supervised learning (SL) techniques, it performs substantially better, which highlights the suitability of RL navigation for US-guided procedures. When testing our proposed model, we obtained an 82.91% rate of navigating correctly to the sacrum from 165 different starting positions in 5 different unseen simulated environments.
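The abstract names three components: a DQN that maps US image states to discrete probe-motion actions, an experience-replay memory buffer, and a separate binary classifier that signals when to stop navigating. The sketch below is a minimal, hypothetical illustration of how these pieces could fit together in PyTorch; the class names, network sizes, and action set are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a DQN-based US navigation agent with an experience-replay
# buffer and a binary "stop" classifier, loosely following the components named in
# the abstract. All sizes and the action set are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn

ACTIONS = ["up", "down", "left", "right"]  # assumed discrete probe motions


class QNetwork(nn.Module):
    """CNN that maps a single-channel US image to one Q-value per action."""

    def __init__(self, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 * 6 * 6, 256), nn.ReLU(),
                                  nn.Linear(256, n_actions))

    def forward(self, x):
        return self.head(self.features(x))


class StopClassifier(nn.Module):
    """Binary classifier that decides whether the target (sacrum) has been reached."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)), nn.Flatten(),
            nn.Linear(16 * 6 * 6, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # probability that navigation should stop


class ReplayBuffer:
    """Fixed-size memory of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity: int = 10_000):
        self.memory = deque(maxlen=capacity)

    def push(self, transition):
        self.memory.append(transition)

    def sample(self, batch_size: int):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)


def select_action(q_net, stop_net, us_image, epsilon: float = 0.1):
    """Epsilon-greedy action choice; terminate when the stop classifier fires."""
    with torch.no_grad():
        if stop_net(us_image).item() > 0.5:
            return "stop"
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return ACTIONS[q_net(us_image).argmax(dim=1).item()]


if __name__ == "__main__":
    q_net, stop_net, buffer = QNetwork(), StopClassifier(), ReplayBuffer()
    frame = torch.randn(1, 1, 128, 128)  # stand-in for a single US frame
    print(select_action(q_net, stop_net, frame))
```

Keeping the termination decision in a separate classifier, as the abstract describes, lets the agent stop based on image content rather than relying on the Q-values alone; how the paper trains and couples these networks is detailed in the full text.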

Original language: English
Title of host publication: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5534-5541
Number of pages: 8
ISBN (Electronic): 9781728162126
State: Published - 24 Oct 2020
Event: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020 - Las Vegas, United States
Duration: 24 Oct 2020 – 24 Jan 2021

Publication series

Name: IEEE International Conference on Intelligent Robots and Systems
ISSN (Print): 2153-0858
ISSN (Electronic): 2153-0866

Conference

Conference: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Country/Territory: United States
City: Las Vegas
Period: 24/10/20 – 24/01/21
