Abstract

We propose a guided registration method for spatially aligning a fixed preoperative image with untracked ultrasound image slices. We exploit the uniquely interactive and spatially heterogeneous nature of this application to develop a registration algorithm that interactively suggests and acquires ultrasound images at locations optimised for registration performance. Our framework is based on two trainable functions: (1) a deep hyper-network-based registration function, which generalises over varying locations and deformations and is adaptable at test time; (2) a reinforcement learning function that produces test-time estimates of image acquisition locations and of adapted deformation regularisation (the latter is required because acquisition locations vary). We evaluate the proposed method with real preoperative patient data and simulated intraoperative data with a variable field of view. In addition to simulating intraoperative data, we simulate global alignment, based on previous work, for efficient training, and investigate probe-level guidance towards improved deformable registration. Evaluation in a simulated environment shows statistically significant improvements in overall registration performance, across a variety of metrics, for the proposed method compared with registration without acquisition guidance or adaptable deformation regularisation, as well as with commonly used classical iterative methods and learning-based registration. For the first time, the efficacy of proactive image acquisition is demonstrated in simulated surgical interventional registration, in contrast to most existing work, which addresses registration only after data acquisition; we argue this is one reason nonrigid registration has previously been under-constrained in such applications. Code: https://github.com/s-sd/rl_guided_registration
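To make the interactive acquire-then-register loop described above concrete, the following is a minimal, purely illustrative sketch. All names here (`suggest_location`, `register_slice`, `REG_WEIGHTS`, `guided_registration`) are hypothetical placeholders and not the authors' API: the actual method uses a learned reinforcement-learning policy in place of the random suggestion function and a hyper-network registration model in place of the toy error computation.

```python
# Illustrative sketch of guided registration: a policy suggests the next
# ultrasound slice location, and registration is run with a deformation-
# regularisation weight adapted to that location. All components are toy
# stand-ins for the learned functions described in the abstract.

import random

# Hypothetical per-location regularisation weights: locations farther from a
# nominal centre (index 5) are assumed to need stronger regularisation.
REG_WEIGHTS = {loc: 0.1 + 0.05 * abs(loc - 5) for loc in range(10)}


def suggest_location(visited):
    """Stand-in for the RL acquisition policy: pick an unvisited location."""
    candidates = [loc for loc in range(10) if loc not in visited]
    return random.choice(candidates)


def register_slice(location, reg_weight):
    """Stand-in for the hyper-network registration function.

    Returns a toy alignment error that is minimised when the supplied
    regularisation weight matches the location's assumed optimum."""
    return abs(reg_weight - REG_WEIGHTS[location])


def guided_registration(num_acquisitions=3, seed=0):
    """Run the interactive loop: suggest, acquire, register with adaptation."""
    random.seed(seed)
    visited, errors = [], []
    for _ in range(num_acquisitions):
        loc = suggest_location(visited)
        visited.append(loc)
        weight = REG_WEIGHTS[loc]  # test-time adapted regularisation
        errors.append(register_slice(loc, weight))
    return visited, errors
```

In this sketch the adapted weight trivially zeroes the toy error; the point is only the control flow, in which acquisition guidance and registration alternate rather than registration running once after all data have been collected.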