Deep Learning for Ultrasound Image Formation: CUBDL Evaluation Framework and Open Datasets

Dongwoon Hyun, Alycen Wiacek, Sobhan Goudarzi, Sven Rothlubbers, Amir Asif, Klaus Eickel, Yonina C. Eldar, Jiaqi Huang, Massimo Mischi, Hassan Rivaz, David Sinden, Ruud J.G. van Sloun, Hannah Strohm, Muyinatu A. Lediju Bell (Corresponding author)

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Deep learning for ultrasound image formation is rapidly garnering research support and attention, quickly rising as the latest frontier of the field, with much promise to balance both image quality and display speed. Despite this promise, one challenge with identifying optimal solutions is the absence of unified evaluation methods and datasets that are not specific to a single research group. This article introduces the largest known international database of ultrasound channel data and describes the associated evaluation methods that were initially developed for the Challenge on Ultrasound Beamforming with Deep Learning (CUBDL), offered as a component of the 2020 IEEE International Ultrasonics Symposium. We summarize the challenge results and present qualitative and quantitative assessments using both the initially closed CUBDL evaluation test dataset (which was crowd-sourced from multiple groups around the world) and additional in vivo breast ultrasound data contributed after the challenge was completed. As an example quantitative assessment, single plane wave images from the CUBDL Task 1 dataset produced a mean generalized contrast-to-noise ratio (gCNR) of 0.67 and a mean lateral resolution of 0.42 mm when formed with delay-and-sum beamforming, compared with a mean gCNR as high as 0.81 and a mean lateral resolution as low as 0.32 mm when formed with networks submitted by the challenge winners. We also describe contributed CUBDL data that may be used to train future networks. The compiled database includes a total of 576 image acquisition sequences. We additionally introduce a neural-network-based global sound speed estimator implementation that was necessary to fairly evaluate the results obtained with this international database. The CUBDL evaluation methods, evaluation code, network weights from the challenge winners, and all datasets described herein are publicly available (visit https://cubdl.jhu.edu for details).
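For readers unfamiliar with the gCNR values quoted above: the generalized contrast-to-noise ratio measures how separable the pixel-amplitude distributions of a target region and its background are, defined as gCNR = 1 - OVL, where OVL is the overlap of the two probability densities (gCNR ranges from 0 to 1, with 1 indicating perfectly separable regions). The snippet below is a minimal NumPy sketch of that definition, not the CUBDL evaluation code released at https://cubdl.jhu.edu; the function name, the histogram-based density estimate, and the bin count are illustrative assumptions.

    import numpy as np

    def gcnr(roi_target, roi_background, bins=256):
        # Generalized contrast-to-noise ratio: gCNR = 1 - OVL, where OVL is the
        # overlap between the amplitude distributions of the two regions.
        # Common bin edges spanning both regions (bin count is an assumption).
        lo = min(roi_target.min(), roi_background.min())
        hi = max(roi_target.max(), roi_background.max())
        edges = np.linspace(lo, hi, bins + 1)

        # Histogram-based estimates of the two probability densities.
        p_t, _ = np.histogram(roi_target, bins=edges, density=True)
        p_b, _ = np.histogram(roi_background, bins=edges, density=True)
        bin_width = edges[1] - edges[0]

        # Overlap area of the two densities; gCNR is its complement.
        ovl = np.sum(np.minimum(p_t, p_b)) * bin_width
        return 1.0 - ovl

In practice, the two regions of interest would be drawn from the envelope-detected (or log-compressed) beamformed image inside and outside a lesion or cyst target; a higher gCNR indicates better contrast, which is why the 0.81 achieved by the challenge-winning networks improves on the 0.67 obtained with delay-and-sum beamforming.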

Original language: English
Article number: 9475029
Pages (from-to): 3466-3483
Number of pages: 18
Journal: IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Volume: 68
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2021

Funding

The work of Alycen Wiacek and Muyinatu A. Lediju Bell was supported by the National Institutes of Health (NIH) Trailblazer Award under Grant R21 EB025621. The work of Sobhan Goudarzi and Hassan Rivaz was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) under Grant RGPIN-2020-04612.

Funders and funder numbers:
• National Institutes of Health
• National Institute of Biomedical Imaging and Bioengineering: R21EB025621
• Natural Sciences and Engineering Research Council of Canada: RGPIN-2020-04612

Keywords

• Beamforming
• channel data
• deep learning benchmark
• evaluation metrics
• neural networks
• open science
• sound speed estimation
• ultrasound image formation
