Optimizing Breast Cancer Detection in Mammograms: A Comprehensive Study of Transfer Learning, Resolution Reduction, and Multi-View Classification
This is the ConvNeXt-based multiple-view model from the paper: https://arxiv.org/abs/2503.19945.
To my knowledge, at the time of publication this model achieved the best reported AUC on the VinDr-Mammo dataset.
It reaches an AUC of 0.8511 when evaluated on the two views (CC and MLO) of each breast in each exam.
To evaluate classifier performance, we grouped the BI-RADS categories into two broader classes: "Normal" for views rated BI-RADS 1 or 2, and "Abnormal" for views rated BI-RADS 3, 4, or 5.
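The grouping and the AUC evaluation can be sketched as follows. The helper names and the sample scores are hypothetical; only the BI-RADS 1–2 vs. 3–5 split comes from the text. The AUC here is the standard pairwise (Mann-Whitney) formulation.

```python
def birads_to_binary(birads: int) -> int:
    """Map a BI-RADS category (1-5) to the binary label used for evaluation:
    1-2 -> 0 ("Normal"), 3-5 -> 1 ("Abnormal")."""
    if birads in (1, 2):
        return 0
    if birads in (3, 4, 5):
        return 1
    raise ValueError(f"Unexpected BI-RADS category: {birads}")

def auc_score(labels, scores):
    """Pairwise AUC: the probability that a randomly chosen abnormal case
    scores higher than a randomly chosen normal case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-breast model scores with their BI-RADS ground truth.
birads = [1, 2, 3, 4, 5, 1]
labels = [birads_to_binary(b) for b in birads]   # [0, 0, 1, 1, 1, 0]
scores = [0.1, 0.3, 0.6, 0.8, 0.9, 0.2]
print(auc_score(labels, scores))  # 1.0: every abnormal outscores every normal
```

In practice the same computation is typically done with `sklearn.metrics.roc_auc_score(labels, scores)`; the pure-Python version above just makes the ranking interpretation explicit.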
Code and inference instructions are available at: https://github.com/dpetrini/multiple-view