Simultaneous Denoising and Localization Network for Photoacoustic Target Localization

SDL

ABSTRACT

A significant research problem of recent interest is the localization of targets like vessels, surgical needles, and tumors in photoacoustic (PA) images. To achieve accurate localization, a high photoacoustic signal-to-noise ratio (SNR) is required. However, this is not guaranteed for deep targets, as optical scattering causes an exponential decay in optical fluence with respect to tissue depth. To address this, we develop a novel deep learning method designed to explicitly exhibit robustness to noise present in photoacoustic radio-frequency (RF) data. More precisely, we describe and evaluate a deep neural network architecture consisting of a shared encoder and two parallel decoders. One decoder extracts the target coordinates from the input RF data while the other boosts the SNR and estimates clean RF data. The joint optimization of the shared encoder and dual decoders lends significant noise robustness to the features extracted by the encoder, which in turn enables the network to retain detailed information about deep targets that would otherwise be obscured by noise. Additional custom layers and newly proposed regularizers in the training loss function (designed based on observed RF signal and noise behavior) serve to increase the SNR in the cleaned RF output and improve model performance. To account for depth-dependent strong optical scattering, our network was trained with simulated photoacoustic datasets of targets embedded at different depths inside tissue media of different scattering levels. The network trained on this novel dataset accurately locates targets in clinically relevant experimental PA data for the localization of vessels, needles, or brachytherapy seeds. We verify the merits of the proposed architecture by outperforming the state of the art on both simulated and experimental datasets.
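As a rough illustration of the shared-encoder / dual-decoder idea described above, the sketch below wires a small convolutional encoder to a denoising decoder and a coordinate-regression head. The layer sizes, channel counts, and the simple regression head are illustrative assumptions, not the published SDL configuration (the custom layers mentioned above are omitted).

```python
# Hypothetical sketch of the shared-encoder / dual-decoder idea; layer shapes
# and the coordinate head are assumptions, not the published SDL architecture.
import torch.nn as nn

class SDLSketch(nn.Module):
    def __init__(self, in_channels=1, num_targets=1):
        super().__init__()
        # Shared encoder: extracts noise-robust features from raw RF data.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Denoising decoder: reconstructs a clean RF estimate at input resolution.
        self.denoise_decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1),
        )
        # Localization decoder: regresses target (lateral, depth) coordinates.
        self.localize_decoder = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2 * num_targets),
        )

    def forward(self, rf):
        feats = self.encoder(rf)
        clean_rf = self.denoise_decoder(feats)   # SNR-boosted RF estimate
        coords = self.localize_decoder(feats)    # target coordinates
        return clean_rf, coords
```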

Source Code

Source code for SDL, along with the training and test data, can be found on GitHub.

Proposed Model


Figure. The proposed simultaneous denoising and localization (SDL) network.
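The two branches in the figure are trained jointly. Below is a minimal sketch of such a joint objective, assuming a weighted sum of a localization term, a denoising term, and a generic sparsity regularizer on the cleaned RF output; the weights and the sparsity term are placeholders, whereas the regularizers actually proposed in the paper are derived from observed RF signal and noise behavior.

```python
import torch.nn.functional as F

def joint_loss(clean_pred, clean_gt, coords_pred, coords_gt,
               lambda_denoise=1.0, lambda_reg=0.1):
    """Illustrative joint objective for dual-decoder training.

    The weights and the simple sparsity regularizer are assumptions; the
    paper's regularizers are tailored to observed RF signal/noise behavior.
    """
    loc_loss = F.mse_loss(coords_pred, coords_gt)   # localization branch
    den_loss = F.mse_loss(clean_pred, clean_gt)     # denoising branch
    reg = clean_pred.abs().mean()                   # e.g., encourage sparse RF estimates
    return loc_loss + lambda_denoise * den_loss + lambda_reg * reg
```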

Data Sets

Four different photoacoustic datasets are discussed in this project:

  1. Allman et al.'s dataset (ref. 1).

  2. Johnstonbaugh et al.'s dataset (ref. 2).

  3. Our new, practically representative simulated dataset.

  4. Experimentally captured dataset.

Simulated Dataset Generation


Figure. Details of generating samples with a random number of targets and different scattering levels, along with their corresponding beamformed images.
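As a rough sketch of the sampling step in the figure, the snippet below draws a random number of targets, per-target positions, and a tissue scattering level for one simulated example. The function name, parameter ranges, and units are placeholders, and the optical-fluence/acoustic simulation and beamforming that turn these parameters into RF data and images are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phantom_params(max_targets=3,
                          depth_range_mm=(5.0, 40.0),
                          lateral_range_mm=(-15.0, 15.0),
                          scattering_levels=(0.5, 1.0, 2.0)):
    """Draw one set of phantom parameters for a simulated PA example.

    Ranges and scattering values are illustrative assumptions; the actual
    dataset uses depth- and scattering-dependent settings from the paper.
    """
    n_targets = rng.integers(1, max_targets + 1)
    depths = rng.uniform(*depth_range_mm, size=n_targets)
    laterals = rng.uniform(*lateral_range_mm, size=n_targets)
    mu_s_prime = rng.choice(scattering_levels)
    return {"n_targets": int(n_targets),
            "depths_mm": depths,
            "laterals_mm": laterals,
            "mu_s_prime": float(mu_s_prime)}

params = sample_phantom_params()
# params would then drive the optical-fluence and acoustic simulations that
# produce the RF channel data and its beamformed image for this example.
```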

Selected Results


Table. Ablation study results of SDL and its variants on our simulated dataset.


Figure. Performance comparison among SDL, beamforming, and the method of ref. 1.

Related Publications

  1. A. Yazdani, S. Agrawal, K. Johnstonbaugh, R. Kothapalli, and V. Monga, "Simultaneous Denoising and Localization Network for Photoacoustic Target Localization," to appear in IEEE Transactions on Medical Imaging, 2021. [arXiv] [early access]

Selected References

  1. D. Allman et al., "Photoacoustic source detection and reflection artifact removal enabled by deep learning," IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1464–1477, 2018.

  2. K. Johnstonbaugh et al., "A deep learning approach to photoacoustic wavefront localization in deep-tissue medium," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, pp. 1–1, 2020.

  3. N. Awasthi et al., "Deep neural network-based sinogram super-resolution and bandwidth enhancement for limited-data photoacoustic tomography," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 67, no. 12, pp. 2660–2673, 2020.

Email
ipal.psu@gmail.com

Address
104 Electrical Engineering East,
University Park, PA 16802, USA

Lab Phone:
814-863-7810
814-867-4564