TY - JOUR
T1 - Precise Repositioning of Robotic Ultrasound
T2 - Improving Registration-Based Motion Compensation Using Ultrasound Confidence Optimization
AU - Jiang, Zhongliang
AU - Danis, Nehil
AU - Bi, Yuan
AU - Zhou, Mingchuan
AU - Kroenke, Markus
AU - Wendler, Thomas
AU - Navab, Nassir
N1 - Publisher Copyright:
IEEE
PY - 2022
Y1 - 2022
AB - Robotic ultrasound (US) imaging has been seen as a promising solution to overcome the limitations of free-hand US examinations, i.e., inter-operator variability. However, the fact that robotic US systems cannot react to subject movements during scans limits their clinical acceptance. Human sonographers often react to patient movements by repositioning the probe or even restarting the acquisition, particularly when scanning long anatomical structures such as limb arteries. To realize this capability, we propose a vision-based system that monitors the subject’s movement and automatically updates the scan trajectory, thereby seamlessly obtaining a complete 3D image of the target anatomy. The motion monitoring module is developed using segmented object masks from RGB images. Once the subject moves, the robot stops and recomputes a suitable trajectory by registering the surface point clouds of the object obtained before and after the movement using the iterative closest point algorithm. Afterward, to ensure optimal contact conditions after repositioning the US probe, a confidence-based fine-tuning process is used to avoid potential gaps between the probe and the contact surface. Finally, the whole system is validated on a human-like arm phantom with an uneven surface, while the object segmentation network is also validated on volunteers. The results demonstrate that the presented system can react to object movements and reliably provide accurate 3D images.
KW - Arteries
KW - Cameras
KW - Probes
KW - Robot vision systems
KW - Robotic ultrasound
KW - Robots
KW - Three-dimensional displays
KW - Trajectory
KW - blood vessel visualization
KW - medical robotics
KW - vision-based control
UR - http://www.scopus.com/inward/record.url?scp=85137564572&partnerID=8YFLogxK
U2 - 10.1109/TIM.2022.3200360
DO - 10.1109/TIM.2022.3200360
M3 - Article
AN - SCOPUS:85137564572
SN - 0018-9456
SP - 1
JO - IEEE Transactions on Instrumentation and Measurement
JF - IEEE Transactions on Instrumentation and Measurement
ER -