Deep learning-based breast region segmentation in raw and processed digital mammograms: generalization across views and vendors.

Abstract

RESULTS

The model trained on SFMs and processed mammograms achieved good overall performance regardless of projection and vendor, with a mean (± std. dev.) Dice score of 0.96 ± 0.06 across all datasets combined. When raw images were included in training, the mean (± std. dev.) Dice score was 0.95 ± 0.05 for raw images and 0.96 ± 0.04 for processed images. Testing on a dataset of processed DMs from a vendor excluded from training yielded a mean Dice score that differed from that of the fully trained model by between -0.23 and +0.02.
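The paper reports the Dice similarity coefficient as its evaluation metric but the abstract does not spell out its computation; a minimal sketch on binary masks, using NumPy (the function name `dice_score` is ours, not from the paper), looks like this:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * intersection / total

# Example: two overlapping 2x2 masks (one pixel in common)
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_score(a, b))  # 2*1 / (2+1) ≈ 0.667
```

For a three-class segmentation (background, breast, pectoral muscle), the score would typically be computed per class on the one-hot masks and then averaged; the abstract does not state which averaging the authors used.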

APPROACH

A U-Net was trained to segment mammograms into background, breast, and pectoral muscle. Eight datasets were used, comprising two previously published public sets and six sets of DMs from six different vendors, totaling 322 screen-film mammograms (SFMs) and 4251 DMs (2821 raw/processed pairs and 1430 processed-only images) from 1077 different women. Three experiments were performed: first, training on all SFM and processed images; second, also including all raw images in training; and finally, testing vendor generalization by leaving out one dataset at a time.
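The leave-one-dataset-out protocol in the third experiment can be sketched as a simple partitioning loop; the vendor names below are placeholders, not the actual datasets used in the paper:

```python
# Hypothetical dataset identifiers standing in for the six vendor sets.
DATASETS = ["vendorA", "vendorB", "vendorC", "vendorD", "vendorE", "vendorF"]

def leave_one_out_splits(datasets):
    """Yield (train, held_out) partitions, excluding one dataset at a time.
    The model is retrained on `train` and evaluated on `held_out` to
    measure generalization to an unseen vendor."""
    for held_out in datasets:
        train = [d for d in datasets if d != held_out]
        yield train, held_out

for train, held_out in leave_one_out_splits(DATASETS):
    print(f"train on {len(train)} sets, test on {held_out}")
```

This yields one train/test partition per vendor, so each vendor's images are evaluated exactly once by a model that never saw them during training.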

CONCLUSIONS

The proposed segmentation method yields accurate results for both raw and processed mammograms, independent of view and vendor. The code and model weights are made publicly available.

PURPOSE

We developed a segmentation method suited for both raw (for processing) and processed (for presentation) digital mammograms (DMs) that is designed to generalize across images acquired with systems from different vendors and across the two standard screening views.

More about this publication

Journal of medical imaging (Bellingham, Wash.)
  • Volume 11
  • Issue 1
  • Pages 014001
  • Publication date 01-01-2024
