Mammologix℠

In-Depth · MammoComply Knowledge Base

The Breast MRI Medical Outcome Audit: What It Is, How It Works, and Why It Requires a Different Methodology Than Mammography

By Richard D. Lippert, Jr., B.S., R.T.(R) · May 1, 2026 · 17 min read
Topics: ACR Accreditation Standard · Not MQSA Mandated · Three-Stream Architecture · BI-RADS® Benchmarked · ACR BI-RADS® v2025

“Breast MRI is the most sensitive imaging modality available for breast cancer detection. Yet unlike mammography, breast MRI outcome auditing carries no federal mandate — and ACR accreditation expectations are not the same as MQSA requirements. Understanding that distinction is the starting point for any program that takes quality accountability seriously.”

Key Takeaways

1. Breast MRI outcome auditing is not required by MQSA, but ACR-accredited breast MRI facilities are expected to maintain a medical outcomes audit program. These are not the same obligation.

2. Three clinically distinct MRI indication streams — screening, diagnostic, and preoperative — must be audited independently. Pooling them produces metrics that are uninterpretable against any published benchmark.

3. The definition of a “positive” examination differs by indication: BI-RADS® 3 is positive in screening MRI, but negative in diagnostic MRI. Misapplying this definition distorts every recall-based metric.

4. The formal ACR BI-RADS® benchmark for screening MRI CDR is 20–30 per 1,000 examinations. The BCSC community-observed rate is 17 per 1,000. These are different reference values and should be presented separately.

5. Small examination volumes produce statistically unstable metrics that can appear severely out of range while reflecting sample size, not interpretive performance. Approximately 500–1,000 screening MRI examinations are required before benchmark comparison is informative.

6. Diagnostic MRI performance reference ranges are not formal ACR BI-RADS® benchmarks in the same sense as screening MRI thresholds. They are published reference values from community-practice studies and should be labeled as such.

Introduction

The mammography medical outcome audit is one of the most thoroughly regulated quality assurance processes in diagnostic imaging. Under the Mammography Quality Standards Act (MQSA), every certified mammography facility in the United States is required to conduct an annual medical outcome audit, link imaging interpretations to pathologically confirmed outcomes, calculate a defined set of key performance indicators (KPIs), and compare those results against published benchmarks. Federal inspectors verify compliance annually.

Breast MRI operates under a different framework — one that is often misunderstood in two directions simultaneously. Some programs assume that because MQSA does not mandate breast MRI auditing, no formal audit expectation exists. Others apply mammography audit methodology directly to MRI data, producing metrics that are structurally wrong for the population being screened and the modality being evaluated. Neither approach serves the purpose of quality accountability.

This article explains the methodology for conducting a rigorous breast MRI medical outcome audit: what it measures, how it differs from the mammography audit at the level of methodology rather than just regulatory obligation, how three distinct clinical indication streams must be approached separately, and what clinicians and program administrators need to understand in order to interpret audit results correctly.

The Regulatory Foundation — and Its Limits

The MQSA, enacted by Congress in 1992, established federal quality standards for mammography. The 2023 Final Rule (Federal Register, March 10, 2023; 88 FR 15126) is the most significant update to those standards since 1997 — overhauling imaging assessment terminology, expanding density reporting requirements, and strengthening enforcement mechanisms. It did not change one fundamental parameter: the MQSA applies exclusively to mammographic imaging. The statute does not provide for the establishment of requirements related to breast MRI, and the regulations have not been amended to include MRI. [7]

However, the absence of an MQSA mandate does not mean that breast MRI facilities operate in a quality accountability vacuum. The ACR Breast MRI Accreditation Program expects each accredited facility to establish and maintain a medical outcomes audit program to follow positive assessments and correlate pathology results with interpreting physician findings. That expectation is not a federal inspection requirement — it is an accreditation standard — and the distinction matters:

MQSA: Federal law, enforced by annual FDA inspection, that applies exclusively to mammographic imaging. Non-compliance carries federal consequences.

ACR Breast MRI Accreditation: A professional accreditation program that carries an outcomes audit expectation as a program standard. Enforcement is through accreditation standing, not federal inspection.

The ACR BI-RADS® Atlas, 5th Edition (Sickles & D’Orsi, 2013) extended the outcome monitoring framework to breast MRI, establishing cross-modality KPI definitions and recommending outcome monitoring as a best practice. [1] The Breast Cancer Surveillance Consortium (BCSC) has published the only large-scale community benchmark data for screening breast MRI (Lee JM et al., Radiology, 2017; n = 8,387 examinations). [2] Together, these sources form the evidence base for breast MRI outcome auditing.

Why Breast MRI Auditing Is Structurally Different From Mammography Auditing

It is tempting to treat the breast MRI audit as a version of the mammography audit applied to a different modality. It is not. There are five structural differences that make the breast MRI audit materially more complex.

1. The Screening Population Has a Fundamentally Higher Cancer Prevalence

Mammography screens the general average-risk population. Breast MRI screens a high-risk population: women with BRCA1/2 mutations or first-degree relatives of carriers; women with a lifetime risk ≥20% by validated risk-assessment models; women with a history of prior chest wall irradiation between ages 10 and 30; and women with a personal history of treated breast cancer. [1]

This distinction is not administrative. It is the reason breast MRI CDR benchmarks (20–30 per 1,000 per BI-RADS®) are not comparable to mammography CDR benchmarks (~5–6 per 1,000). The underlying cancer prevalence in the population being screened is structurally higher.

2. The Definition of a “Positive” Screening Examination Differs by Modality

For screening mammography, a positive (abnormal) interpretation is BI-RADS® 0, 4, or 5. For screening breast MRI, the positive assessment set is broader: BI-RADS® 0, 3, 4, or 5. [1, 2] BI-RADS® 3 (“probably benign”) receives this designation in the screening MRI context because it interrupts the annual screening cycle — triggering short-interval follow-up before return to routine annual screening — which meets the functional definition of an abnormal result.

This directly affects the calculation of Abnormal Interpretation Rate, Recall Rate, and PPV1. A practice applying the mammographic positive assessment definition to its MRI audit will systematically miscalculate every recall-based metric.
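The indication-dependent positive set can be made concrete as a small lookup. This is a hypothetical sketch; the function and names are ours, not drawn from the BI-RADS® Atlas:

```python
# Hypothetical helper: which BI-RADS assessments count as "positive"
# (abnormal) depends on the audit stream. All names are illustrative.
SCREENING_POSITIVE = {"0", "3", "4", "5"}   # BI-RADS 3 is positive in screening MRI
DIAGNOSTIC_POSITIVE = {"4", "5"}            # BI-RADS 3 is negative in diagnostic MRI

def is_positive(birads: str, indication: str) -> bool:
    """True if the assessment is abnormal for the given audit stream."""
    if indication == "screening":
        return birads in SCREENING_POSITIVE
    if indication == "diagnostic":
        return birads in DIAGNOSTIC_POSITIVE
    raise ValueError(f"unknown indication: {indication}")
```

An audit pipeline that routes every examination through a check like this, keyed on indication, cannot silently apply the mammographic definition to MRI data.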

3. The Positive Assessment Definition Reverses in Diagnostic MRI

In the diagnostic context, BI-RADS® 3 is treated as negative. A diagnostic MRI resulting in BI-RADS® 3 carries the recommendation for short-interval follow-up — a routine management pathway, not an abnormal interpretation requiring further workup in the same examination cycle.

A facility using a single positive assessment definition across both screening and diagnostic populations will produce metrics that are wrong for both. The peer-reviewed literature is explicit: “performance measures differ significantly between screening and diagnostic MRI indications and must be calculated separately.” [3]

4. Three Distinct Clinical Indication Streams Must Never Be Combined

Breast MRI is ordered for screening, diagnostic evaluation, and preoperative extent-of-disease assessment. These three populations have different cancer prevalence rates, different positive assessment definitions, different applicable benchmark frameworks, and — in the preoperative case — the fundamental feature that malignancy is known to be present at the time of the examination. Pooling them in a single audit produces performance metrics that are uninterpretable against any published standard.

Lee CI et al. (Academic Radiology, 2014), using BCSC data across 11,654 breast MRI examinations, demonstrated that AIRs differed significantly across indication categories and concluded that “practices should stratify breast MRI examinations by indication for quality assurance and auditing purposes.” [4]

5. The Benchmark Evidence Base Is Comparatively Thin

The mammography benchmark literature rests on tens of millions of examinations under MQSA-mandated data collection over three decades. The breast MRI screening benchmark literature rests primarily on a single BCSC publication covering 8,387 examinations from 2005 to 2013, [2] supplemented by a small number of community-practice studies. [3, 8] Diagnostic MRI reference values are less established still. Preoperative MRI benchmarks are newly emerging. [6]

Statistical interpretation context is therefore not optional boilerplate — it is a clinical necessity.

The Three-Stream Audit Architecture

A methodologically sound breast MRI medical outcome audit is organized around three independent streams — each with its own examination population, positive assessment definition, outcome assignment window, and benchmark or reference framework. No cross-stream pooling is performed.

Stream 1: Screening MRI — The Current Standard of Practice

Examination Population: Asymptomatic women at elevated lifetime risk
Positive Assessment: BI-RADS® 0, 3, 4, or 5 (BI-RADS® 3 is positive)
Outcome Window: Tissue diagnosis within 12 months

Key Performance Indicators — Two-Tier Benchmark Framework

An essential discipline in screening MRI audit reporting is maintaining the distinction between the formal ACR BI-RADS® benchmark range and the BCSC community-observed performance value. These are different reference standards. The BI-RADS® range represents the published target; the BCSC observed value represents what community practice actually achieves. Both values belong in an audit report; neither should substitute for the other.

Performance Measure | Formula | BI-RADS® Benchmark (Formal) | BCSC Community Observed
Recall Rate / AIR | Positive exams ÷ Total exams × 100 | 10% – 25% | ~12–16%
PPV1 | TP ÷ (TP + FP1) × 100 | 3% – 8% | ~10–15%
CDR | TP ÷ Total exams × 1,000 | 20–30 per 1,000 | 17 per 1,000 (95% CI 15–20)
Sensitivity | TP ÷ (TP + FN) × 100 | ≥ 80% | 81% (95% CI 75–86%)
Specificity | TN ÷ (TN + FP) × 100 | 85% – 90% | 83% (95% CI 82–84%)

Sources: ACR BI-RADS® Atlas, 5th Edition [1]; Lee JM et al., Radiology, 2017 [2].
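The formulas in the table reduce to a few lines of arithmetic once recall-level counts are assigned. A minimal sketch; the function and field names are ours, not from any published audit tool:

```python
def screening_metrics(total_exams, tp, fp1, fn, tn):
    """Stream 1 KPIs from recall-level counts (illustrative names).

    tp:  abnormal interpretation, cancer confirmed within 12 months
    fp1: abnormal interpretation, no cancer on subsequent workup
    fn:  negative interpretation, interval cancer within 12 months
    tn:  negative interpretation, no cancer
    """
    abnormal = tp + fp1  # BI-RADS 0/3/4/5 interpretations
    return {
        "recall_rate_pct": 100 * abnormal / total_exams,
        "ppv1_pct": 100 * tp / abnormal,
        "cdr_per_1000": 1000 * tp / total_exams,
        "sensitivity_pct": 100 * tp / (tp + fn),
        "specificity_pct": 100 * tn / (tn + fp1),
    }
```

Note that sensitivity and specificity require linkage to outcomes (interval cancers and true negatives), not just the interpretation record; that linkage is the substance of the outcome audit.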

Benchmark Interpretation Note

The BCSC community-observed CDR of 17 per 1,000 falls below the formal BI-RADS® benchmark range of 20–30 per 1,000. This reflects performance in a mixed-risk community registry population. Programs should not use 17 per 1,000 as a benchmark floor, nor should they present an internal action threshold below 20 per 1,000 as equivalent to the BI-RADS® range. Similarly, the BCSC observed PPV1 of ~10–15% and specificity of 83% fall outside the formal BI-RADS® ranges — an expected consequence of population mix effects in community data that reinforces the need for explicit interpretation context in every audit report.

Stream 2: Diagnostic MRI — A Different Population, Different Rules

Examination Population: Problem-solving, symptom evaluation, short-interval follow-up
Positive Assessment: BI-RADS® 4 or 5 only — BI-RADS® 3 is negative here
Outcome Window: Tissue diagnosis within 12 months

Published Reference Values for Diagnostic MRI

Labeling Note

The following are published reference values and practical comparison ranges drawn from community-practice studies. [3, 4] They are not formal ACR BI-RADS® benchmark thresholds in the sense that the screening MRI values above are. No published equivalent of the BI-RADS® screening benchmark table exists for diagnostic MRI. Programs should label these as reference values — not benchmarks — when presenting audit results.

Performance Measure | Formula | Published Reference Value | Source
PPV2 | TP ÷ (TP + FP2) × 100 | 20% – 40% | Niell BL et al. [3]
PPV3 | TP ÷ (TP + FP3) × 100 | 25% – 45% | Niell BL et al. [3]
CDR | TP ÷ Total exams × 1,000 | ~47 per 1,000 * | Niell BL et al. [3]
Sensitivity | TP ÷ (TP + FN) × 100 | ≥ 80% | Lee CI et al. [4]
Specificity | TN ÷ (TN + FP) × 100 | 85% – 90% | Lee CI et al. [4]

* Diagnostic CDR of ~47 per 1,000 reported by Niell et al. [3] is reference only; no formal BI-RADS® diagnostic CDR benchmark has been published. This value reflects the substantially higher cancer prevalence in diagnostic populations and should not be compared to screening CDR.
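The two biopsy-level PPVs differ only in which false positives enter the denominator. A brief sketch with illustrative variable names (not from the cited studies):

```python
def ppv2(tp: int, fp2: int) -> float:
    """PPV2: percent of biopsy *recommendations* (BI-RADS 4/5) yielding cancer."""
    return 100 * tp / (tp + fp2)

def ppv3(tp: int, fp3: int) -> float:
    """PPV3: percent of biopsies actually *performed* yielding cancer."""
    return 100 * tp / (tp + fp3)
```

Because some recommended biopsies are never performed, FP3 is at most FP2 for the same case set, which is consistent with the published PPV3 reference range (25–45%) sitting above the PPV2 range (20–40%).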

Stream 3: Preoperative / Extent-of-Disease MRI — The Emerging Third Stream

Examination Population: Women with confirmed breast cancer — extent-of-disease evaluation
Positive Assessment: BI-RADS® 4 or 5; contralateral occult cancer detection
Outcome Window: Pathological correlation at surgical excision

Benchmark Status

The ACR BI-RADS® v2025 Manual formally introduces audit guidance for preoperative breast MRI for the first time. [6] Cohen EO et al. (Radiology, 2025) established the feasibility of this framework in community practice, reporting preliminary performance data: AIR 30.3%, PPV2 22.8%, PPV3 32.2%, and contralateral cancer detection rate 90.7 per 1,000. [6] Formal benchmark ranges for this stream remain under active development.

Malignancy is known to be present in the ipsilateral breast at the time of examination. This makes sensitivity and CDR calculations for ipsilateral disease structurally incomparable to surveillance or problem-solving contexts. Including preoperative examinations in a screening or diagnostic audit pool invalidates every calculated metric for its intended purpose.

Reading the Metrics: An Illustrative Example

The following example uses constructed data for a hypothetical facility — Anywhere Breast Imaging Practice — to illustrate how Stream 1 (Screening MRI) results are calculated and contextualized against the two-tier benchmark framework. All data are illustrative.

Audit Period: January 2025 – December 2025  ·  Screening MRI: n = 297  ·  Diagnostic MRI: n = 84
Underlying screening counts: 44 abnormal interpretations (BI-RADS® 0/3/4/5) of 297 total; 7 screen-detected cancers (TP); 1 interval cancer (FN); 37 false positive recalls (FP1); 253 true negatives (TN).

Screening Stream Results

KPI | Calculated Value | BI-RADS® Benchmark | BCSC Observed | Status
Recall Rate | 14.8% | 10% – 25% | ~12–16% | ✓ Within Range
AIR | 14.8% | 10% – 25% | ~12–16% | ✓ Within Range
PPV1 | 15.9% | 3% – 8% | ~10–15% | ↑ Above BI-RADS® Range
CDR | 23.6 / 1,000 | 20–30 / 1,000 | 17 / 1,000 | ✓ Within Range
Sensitivity | 87.5% | ≥ 80% | 81% | ✓ Within Range
Specificity | 87.2% | 85% – 90% | 83% | ✓ Within Range
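Each value in the screening table follows directly from the stated counts. A quick arithmetic check in Python:

```python
# Constructed counts from the Anywhere Breast Imaging Practice example.
total, abnormal = 297, 44            # screening exams; BI-RADS 0/3/4/5 interpretations
tp, fn, tn = 7, 1, 253               # screen-detected cancers, interval cancer, true negatives
fp1 = abnormal - tp                  # 37 false positive recalls

recall_rate = 100 * abnormal / total      # -> 14.8%
ppv1        = 100 * tp / abnormal         # -> 15.9%
cdr         = 1000 * tp / total           # -> 23.6 per 1,000
sensitivity = 100 * tp / (tp + fn)        # -> 87.5%
specificity = 100 * tn / (tn + fp1)       # -> 87.2%
```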

Interpretation Notes

CDR at 23.6 per 1,000 — Within BI-RADS® Range

A screening CDR of 23.6 per 1,000 falls within the formal BI-RADS® benchmark range of 20–30 per 1,000 and exceeds the BCSC community-observed rate of 17 per 1,000. Both reference values should be documented: the BI-RADS® range provides the formal target; the BCSC observed rate provides community context. This program is detecting cancer at a rate consistent with the BI-RADS® expectation.

PPV1 at 15.9% — Above BI-RADS® Range

A PPV1 of 15.9% exceeds the formal BI-RADS® benchmark of 3–8% and also falls above the BCSC community-observed range of ~10–15%. In isolation, this might suggest over-calling. In the context of a CDR within the BI-RADS® range, it reflects a different and clinically important finding: a higher proportion of abnormal interpretations in this practice correspond to true malignancy. The BI-RADS® PPV1 benchmark was derived from BCSC community data across mixed risk strata; in a program with a high CDR, a higher PPV1 is an expected mathematical consequence, not a quality deficiency. This finding should be documented in QA committee review and interpreted alongside CDR rather than in isolation.

Sensitivity at 87.5%

Based on 8 total cancers (7 screen-detected, 1 interval), this metric meets the ≥80% benchmark. At this volume, the confidence interval is wide. Trend analysis across periods is more informative than single-period point estimates at this sample size.

Specificity at 87.2%

Falls within the BI-RADS® 85–90% range and above the BCSC community-observed value of 83%, indicating the false-positive burden on non-cancer patients is within expected norms for a high-risk screening program.

Diagnostic Stream Reference Values (Constructed, n = 84)

KPI | Calculated Value | Published Reference Value | Status vs. Reference
PPV2 | 22.2% | 20% – 40% | Within Reference Range
PPV3 | 27.3% | 25% – 45% | Within Reference Range
Sensitivity | 85.7% | ≥ 80% | Within Reference Range
Specificity | 72.7% | 85% – 90% | ↓ Below Reference Range

Diagnostic Specificity at 72.7% — Below Reference Range: A diagnostic specificity below 85% indicates a higher-than-reference rate of biopsy recommendations not yielding cancer. At n = 84, this metric is statistically unstable and should be interpreted cautiously. Factors to investigate include the proportion of short-interval follow-up examinations within the diagnostic pool, whether second-look ultrasound correlation routinely precedes biopsy recommendation, and the pre-test probability characteristics of the referral mix. This finding identifies a parameter for targeted QA review, not a conclusion about interpretive performance.

Sample Documents

The following sample documents correspond to the Anywhere Breast Imaging Practice illustrative example above. Download to see what the methodology overview and completed screening audit report look like in practice. All data, facility names, and physician identifiers are constructed.

Breast MRI MOA — Methodology Overview

Anywhere Breast Imaging Practice · Sample methodology overview document

Sample document — illustrative purposes only. All data, facility names, and physician identifiers are constructed and do not represent any real patient, facility, or physician.

Breast MRI MOA — Screening Audit 2025

Anywhere Breast Imaging Practice · Sample completed screening audit report, January–December 2025


Why Statistical Interpretation Context Is Non-Optional

The breast MRI benchmark evidence base is comparatively thin. The only large-scale community screening MRI benchmark data come from a single BCSC publication covering fewer than 9,000 examinations. [2] Individual facility volumes are frequently far smaller, particularly in programs that correctly separate indication streams.

A screening program with 17 examinations and zero screen-detected cancers will report a CDR of 0.0 per 1,000, a sensitivity of 0%, and a PPV1 of 0%. None of these values indicate program failure. They indicate that the program has not yet accumulated sufficient volume to detect the expected number of cancers given the underlying prevalence and benchmark CDR. Approximately 500–1,000 screening MRI examinations are required before these metrics begin to stabilize to the point where benchmark comparison is informative. [2, 5]
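One way to make that instability concrete is an exact binomial upper bound for the zero-event case. The Clopper–Pearson calculation below is our illustration, not a method prescribed by the audit literature:

```python
# Exact (Clopper-Pearson) 95% upper bound on an event rate when zero
# events are observed in n trials: upper = 1 - (alpha/2) ** (1/n).
def zero_event_upper_per_1000(n: int, alpha: float = 0.05) -> float:
    return 1000 * (1 - (alpha / 2) ** (1 / n))
```

With 17 examinations and zero detected cancers, the bound is roughly 195 per 1,000: an observed CDR of 0 is statistically compatible with values far above the 20–30 benchmark range. At 1,000 examinations the same bound falls below 4 per 1,000, which is consistent with benchmark comparison becoming informative only around that volume.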

Every breast MRI audit report should include interpretation notes that:

  1. Flag low-volume periods and identify which metrics are statistically unstable
  2. Distinguish formal BI-RADS® benchmark ranges from BCSC community-observed values — and present both
  3. Explain why screening and diagnostic results are reported separately and cannot be compared directly
  4. Identify diagnostic reference ranges as published reference values, not formal BI-RADS® benchmarks
  5. Cite the primary source for every reference threshold presented
  6. Provide clinical context for any metric outside the acceptable or reference range

A Tiered Approach to Implementation

Not every breast MRI program is at the same stage of development. The three-stream audit architecture is most useful when understood as a tiered structure — one that a program can enter at the level appropriate to its current volume, indication mix, and data infrastructure.

Start Here: Stream 1 — Screening MRI

The logical starting point: it has the most established benchmark framework, the most clinically homogeneous patient population, and the most directly interpretable performance metrics. Establish the screening stream first, ensure indication separation is captured in the tracking workflow, and allow sufficient volume to accumulate before drawing conclusions from individual-period metrics.

Next: Stream 2 — Diagnostic MRI

Requires the ability to distinguish diagnostic from screening examinations at the data level. The reference value evidence base is thinner, and interpretation requires familiarity with the lower specificity expectations of a symptomatic population and the labeling distinction between formal benchmarks and published reference values. Best activated once Stream 1 is producing stable results.

Emerging: Stream 3 — Preoperative MRI

The most recently formalized stream, with guidance now provided by the ACR BI-RADS® v2025 Manual. Facilities performing preoperative MRI should consider establishing this stream as v2025 benchmarks are published.

The governing principle: audit reporting should not proceed faster than the evidence base and the data infrastructure can support rigorous interpretation.

Glossary

AIR (Abnormal Interpretation Rate)
The percentage of examined patients with a positive (abnormal) imaging assessment. For screening MRI, positive = BI-RADS® 0, 3, 4, or 5. For diagnostic MRI, positive = BI-RADS® 4 or 5.
BI-RADS®
Breast Imaging Reporting and Data System. The ACR-developed standardized lexicon and reporting framework for breast imaging. The 5th Edition (2013) includes outcome monitoring guidance for all breast imaging modalities including MRI.
BCSC
Breast Cancer Surveillance Consortium. An NCI-supported network of breast imaging registries providing population-level performance data. The BCSC screening MRI benchmark study (Lee JM et al., 2017) is the primary source of community-observed performance values for screening breast MRI.
CDR (Cancer Detection Rate)
The number of cancers confirmed by tissue diagnosis per 1,000 examinations, identified as positive on the audited imaging examination.
FN (False Negative)
An examination interpreted as negative in which cancer is subsequently diagnosed within 12 months. In the screening context, also called an interval cancer.
FP (False Positive)
An examination interpreted as positive (abnormal) in which cancer is not confirmed on subsequent workup. Three variants: FP1 (recall level), FP2 (biopsy recommendation level), FP3 (biopsy performed level).
MQSA
Mammography Quality Standards Act (1992, as amended 2023). Federal legislation establishing quality standards for mammography in the United States. Does not apply to breast MRI.
PPV1
Positive Predictive Value 1. The percentage of abnormal interpretations (recall level) resulting in a tissue diagnosis of cancer within 12 months. TP ÷ (TP + FP1) × 100.
PPV2
Positive Predictive Value 2. The percentage of biopsy recommendations (BI-RADS® 4 or 5) resulting in a tissue diagnosis of cancer. TP ÷ (TP + FP2) × 100.
PPV3
Positive Predictive Value 3. The percentage of biopsies actually performed resulting in a tissue diagnosis of cancer. TP ÷ (TP + FP3) × 100.
Sensitivity
The percentage of true cancers correctly identified as positive. TP ÷ (TP + FN) × 100.
Specificity
The percentage of non-cancer examinations correctly identified as negative. TN ÷ (TN + FP) × 100.
TN (True Negative)
An examination correctly interpreted as negative — no cancer identified, none diagnosed within the outcome assignment window.
TP (True Positive)
An examination correctly interpreted as positive — cancer confirmed by tissue diagnosis within the outcome assignment window.

Conclusion

The breast MRI medical outcome audit is neither a simplified version of the mammography audit nor a complexity best deferred indefinitely. It is a structured quality assurance process that requires understanding of three distinct clinical indication streams, careful application of BI-RADS® audit methodology, an honest engagement with a comparatively thin benchmark evidence base, and a disciplined commitment to presenting both formal benchmark ranges and observed community performance values alongside every metric.

The regulatory picture is nuanced: no federal MQSA mandate exists for breast MRI auditing, but ACR-accredited breast MRI facilities carry an accreditation-level expectation for outcomes monitoring. Programs that understand that distinction — and design their audit methodology accordingly — are better positioned to produce results that are clinically meaningful and institutionally defensible.

Facilities that conduct breast MRI outcome audits with methodological rigor, present results with full statistical context, and maintain honest distinctions between formal benchmarks and published reference values provide the basis for meaningful quality improvement, meaningful physician feedback, and meaningful accountability to the patients their programs exist to serve.

References

[1] Sickles EA, D’Orsi CJ. ACR BI-RADS® Follow-up and Outcome Monitoring. In: ACR BI-RADS® Atlas, Breast Imaging Reporting and Data System, 5th Edition. Reston, VA: American College of Radiology; 2013:21–31.
[2] Lee JM, Ichikawa L, Valencia E, et al. Performance Benchmarks for Screening Breast MR Imaging in Community Practice. Radiology. 2017;285(1):44–52. doi:10.1148/radiol.2017162033
[3] Niell BL, Gavenonis SC, Motazedi T, et al. Auditing a Breast MRI Practice: Performance Measures for Screening and Diagnostic Breast MRI. J Am Coll Radiol. 2014;11(9):883–889. doi:10.1016/j.jacr.2014.02.003
[4] Lee CI, Ichikawa L, Rochelle MC, et al. Breast MRI BI-RADS® Assessments and Abnormal Interpretation Rates by Clinical Indication in US Community Practices. Acad Radiol. 2014;21(11):1370–1376. doi:10.1016/j.acra.2014.06.003
[5] Lam DL, Lee JM. Breast Magnetic Resonance Imaging Audit: Pitfalls, Challenges, and Future Considerations. Radiol Clin North Am. 2021;59(1):57–65. doi:10.1016/j.rcl.2020.09.002
[6] Cohen EO, Tso HH, Shin K, et al. Feasibility of Auditing Preoperative Breast MRI for Extent-of-Disease Evaluation Using the BI-RADS® v2025 Manual. Radiology. 2025;317(1):e243803. doi:10.1148/radiol.243803. Erratum in: Radiology. 2025;317(1):e259018.
[7] Federal Register. Mammography Quality Standards Act — Final Rule. March 10, 2023; 88 FR 15126.
[8] Strigel RM, Rollenhagen J, Burnside ES, et al. Screening Breast MRI Outcomes in Routine Clinical Practice: Comparison to BI-RADS® Benchmarks. Acad Radiol. 2017;24(4):411–417. doi:10.1016/j.acra.2016.10.014

All benchmark values and methodology descriptions are referenced to their primary peer-reviewed sources as cited above. Facilities should consult the ACR BI-RADS® Atlas, 5th Edition and applicable professional society guidance when designing or reviewing a breast MRI outcome audit program.

About the Author

Richard “Rick” Lippert Jr.

B.S., ARRT · President & Founder, Mammologix · Breast Imaging Operations since 1995

A registered radiologic technologist and founder of Mammologix, Rick Lippert has spent more than 30 years in breast imaging operations — spanning mammography medical outcome audit, MQSA compliance support, patient follow-up communication, and the operational systems that help facilities maintain quality accountability.


ACR BI-RADS® Trademark Notice

BI-RADS® is a registered trademark of the American College of Radiology (ACR). The ACR BI-RADS® Atlas and all related benchmark values, assessment category definitions, and audit methodology guidance are the intellectual property of the American College of Radiology. All benchmark values and audit methodology references in this article are attributed to their original ACR and peer-reviewed sources. Mammologix is not affiliated with, endorsed by, or sponsored by the American College of Radiology. Reference to BI-RADS® is made solely for informational and educational purposes.

Disclaimer

The content, articles, calculators, tools, estimates, projections, derived data, reports, and other materials provided by Mammologix, LLC, powered by I/O Trak, Inc. (“Mammologix”) are provided for informational and educational purposes only.

The information and outputs provided by Mammologix are not intended to serve as, and should not be relied upon as, medical advice, diagnosis, treatment, clinical guidance, financial advice, investment advice, tax advice, legal advice, accounting advice, or any other professional advice.

Information provided by Mammologix may relate to breast health, mammography, imaging, screening, diagnostics, clinical outcomes, healthcare topics, financial calculations, estimates, projections, or other decision-support information. However, such information should not be used to make personal medical, clinical, financial, investment, business, insurance, legal, or other important decisions without consulting an appropriate qualified professional.

Always seek the advice of a licensed physician, radiologist, healthcare provider, financial advisor, accountant, attorney, or other qualified professional with any questions you may have about a medical condition, screening recommendation, imaging result, treatment option, financial matter, or other professional decision. Never disregard professional medical, financial, legal, or other advice, or delay seeking such advice, because of something you have read, calculated, or received through Mammologix.

Any calculators, formulas, estimates, projections, derived medical outcome data, financial calculations, or other tool-generated outputs are provided as estimates only. Results may be affected by user-entered information, assumptions, formula limitations, incomplete or inaccurate data, rounding, software errors, third-party data, changing medical guidelines, changing financial conditions, changes in law or regulation, or other factors. Mammologix does not represent, warrant, or guarantee that any calculation, output, article, estimate, projection, or derived data is accurate, complete, current, reliable, suitable, or appropriate for any particular purpose.

For medical or health-related content and calculators, Mammologix does not provide medical advice, diagnosis, treatment recommendations, diagnostic interpretation, or patient-specific clinical decision-making. Mammologix content and tools are not a substitute for an in-person evaluation, professional medical consultation, diagnostic interpretation, or individualized care from a licensed healthcare provider. In the event of a medical emergency, contact emergency services or a qualified healthcare provider immediately.

For financial, business, tax, investment, or accounting-related calculators or content, Mammologix does not provide financial, investment, tax, legal, accounting, or business advice. Any outputs are estimates only, and actual results may vary materially based on personal circumstances, market conditions, taxes, fees, interest rates, inflation, risk tolerance, regulatory changes, and other factors.

Use of Mammologix content, articles, calculators, tools, and outputs is at your own risk. You are solely responsible for reviewing, verifying, and interpreting any information or results provided and for obtaining appropriate professional advice before relying on or acting upon such information.

Mammologix assumes no liability for any injury, loss, claim, damage, or expense arising from or related to the use of, reliance on, or inability to use any Mammologix content, article, calculator, tool, estimate, projection, derived data, report, or website material. To the fullest extent permitted by law, Mammologix, its owners, officers, employees, contractors, affiliates, licensors, service providers, and agents disclaim all liability for any direct, indirect, incidental, consequential, special, exemplary, punitive, financial, medical, clinical, business, or other damages arising out of or relating to the use of Mammologix materials or outputs.

The information provided by Mammologix does not establish a doctor-patient, provider-patient, radiologist-patient, financial advisor-client, attorney-client, accountant-client, fiduciary, or other professional relationship.

AI Content Disclosure

Some Mammologix content, calculations, summaries, tools, or outputs may be created, edited, generated, or assisted by artificial intelligence tools or automated systems. While Mammologix may review content for accuracy and relevance, AI-assisted or automated materials may contain errors, omissions, inaccuracies, outdated information, or outputs that are incomplete or inappropriate for a particular user, patient, circumstance, or purpose.

Users should independently verify all AI-assisted content, calculator outputs, and automated results with appropriate qualified professionals before relying on them.