Mammologix℠
Beyond the Mammogram

In the Field

The Report That Said Two Different Things

A medium-sized breast imaging facility with a single interpreting physician was managing a high-volume workflow. At the time, the facility was also contending with problems in its dictation system — the kind of operational friction that adds pressure to an already busy environment.

A group of mammography cases came through for interpretation. The physician dictated the reports. They were transcribed, signed, and released through the normal internal process. Nothing flagged. Nothing held. The reports moved forward.

When one of those reports reached the Mammologix inputting and coding team, the team didn't just enter the assessment code and move on. They read the report.

The body of the report described a screening mammogram with no suspicious findings. The narrative was clear — normal result, no acute findings, routine follow-up. But the imaging assessment assigned to that same report was "suspicious," and the recommendation was for an immediate biopsy.

The report's narrative said the patient was fine. The assessment code said she needed a biopsy immediately. One of them was wrong — and the patient lay letter had not gone out yet.

Mammologix notified the client immediately, and the referring physician was informed as well. The facility reviewed the original dictation, identified the source of the mismatch, and issued an addendum correcting the record before any patient communication was generated.

Without that review, the patient would have received a lay result letter telling her she had a suspicious finding requiring urgent biopsy — based on a report whose actual clinical findings described nothing of the kind. The letter would have been factually inconsistent with the physician's interpretation. The patient, however, would have had no way of knowing that.

This case reflects something important about how Mammologix approaches the communication workflow. The review process is not a pass-through — it does not take an assessment code, match it to a template, and generate a letter. It looks at why the patient presented, what examination was performed, what the report body describes, what the impression states, what follow-up is recommended, and whether the assigned imaging assessment is consistent with all of that. Those elements are reviewed together, not in isolation.

A fully automated process — code in, letter out — would not have caught this. The inconsistency existed inside a single signed report. It was not detectable from the assessment code alone.

Patient communication is the last step in the clinical chain — and the step that reaches the patient directly. That's exactly the wrong place to find out something was wrong earlier in the process.