Mayo Clinic's AI Catches Pancreatic Cancer 16 Months Before Doctors — From Routine CT Scans Already in the EHR
Mayo Clinic's REDMOD AI — published in Gut, validated on ~2,000 multi-institution CT scans — detects pancreatic cancer up to three years before clinical diagnosis (median 16 months). It catches 73% of cases vs 39% for specialist radiologists, on scans already in the EHR.
Mayo Clinic researchers have published validation results for an AI model called REDMOD — the Radiomics-based Early Detection Model — that detects pancreatic cancer on routine abdominal CT scans up to three years before clinical diagnosis. Published in the journal Gut in April 2026, the validation study used nearly 2,000 CT scans spanning multiple institutions, scanner types and imaging protocols. The model identified 73% of cancer cases, compared with 39% detected by specialist radiologists, with a median lead time of approximately 16 months (mean ~475 days) before the cancer would have been diagnosed under standard care.
How REDMOD actually works
REDMOD takes routine abdominal CT scans — the kind already obtained for unrelated reasons (kidney stones, abdominal pain, pre-surgical workup) — and analyzes hundreds of quantitative imaging features describing tissue texture, structure and density patterns. These radiomic features capture faint biological changes in pancreatic tissue that begin developing well before any visible mass forms — changes that are below the threshold of human visual detection. The model is trained to recognize the radiomic signature of pancreas tissue that will progress to cancer, allowing it to flag elevated risk in scans where the human-readable imaging is, by every conventional measure, normal.
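REDMOD's actual feature set and architecture are not public. As an illustration of what "radiomic texture features" means in practice, here is a minimal sketch computing a few classic gray-level co-occurrence statistics (contrast, homogeneity, entropy) over a 2-D region of interest; all names, parameters and the stand-in ROIs are hypothetical, not the study's method:

```python
import numpy as np

def glcm(roi: np.ndarray, levels: int = 8) -> np.ndarray:
    """Gray-level co-occurrence matrix for horizontally adjacent pixels."""
    lo, span = roi.min(), np.ptp(roi) + 1e-9                  # normalize intensities
    q = np.minimum(((roi - lo) / span * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):     # count neighbor pairs
        m[a, b] += 1
    return m / m.sum()                                        # joint probabilities

def radiomic_features(roi: np.ndarray) -> dict:
    """A handful of illustrative texture/density features from a 2-D ROI."""
    p = glcm(roi)
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "mean_intensity": float(roi.mean()),                  # density proxy
        "contrast": float(((i - j) ** 2 * p).sum()),          # local intensity variation
        "homogeneity": float((p / (1 + np.abs(i - j))).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),          # texture disorder
    }

rng = np.random.default_rng(0)
smooth = np.full((64, 64), 0.5)   # stand-in for visually uniform tissue
noisy = rng.random((64, 64))      # stand-in for subtly heterogeneous tissue
print(radiomic_features(smooth)["contrast"], radiomic_features(noisy)["contrast"])
```

A real pipeline would compute hundreds of such features across orientations and scales, then feed them to a trained classifier; the point of the sketch is only that these statistics quantify texture differences a radiologist's eye cannot reliably perceive.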
Why pancreatic cancer is the right disease to attack first
Three reasons make pancreatic cancer uniquely suited to early-detection AI. One: it has the worst survival statistics of any common cancer — five-year survival is roughly 12% — almost entirely because it is typically detected late, when surgical resection is no longer an option, so earlier detection translates almost directly into survival gain. Two: there is no widely deployed screening test (the way mammography and colonoscopy work for breast and colorectal cancer), meaning incidental detection on scans done for other reasons is the realistic near-term path to early diagnosis. Three: the at-risk population — patients with new-onset diabetes, chronic pancreatitis, BRCA mutations, or a strong family history — already gets imaged frequently for other indications, meaning a model that runs on existing scans reaches the right population without requiring new screening infrastructure.
Why this validation matters more than the typical AI-medical headline
Most AI-medical-imaging studies fail to translate to clinical practice for one of three reasons: they were trained and tested on the same hospital's data (overfitting to that institution's scanners and protocols), they require manual preprocessing that pushes adoption costs into the unaffordable range, or they show small absolute improvements that don't justify the workflow disruption. REDMOD's validation was deliberately designed to address all three. The nearly 2,000 scans came from multiple institutions, using different imaging systems and protocols, mirroring real-world clinical heterogeneity. The model runs automatically and requires no time-intensive manual preparation. And the jump from 39% to 73% detection is a 34-percentage-point absolute gain — nearly double the radiologists' rate, well above any reasonable threshold for clinical relevance. Researchers are now advancing REDMOD into clinical deployment through the AI-PACED study, evaluating real-world clinician integration.
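The multi-site design matters because a model can look strong in aggregate while quietly failing at individual institutions whose scanners or protocols differ. A toy version of that check — per-site sensitivity rather than a single pooled number — on entirely made-up per-scan records (not the study's data):

```python
# Each record: (institution, has_cancer, model_flagged) — hypothetical counts.
records = [
    ("site_a", 1, 1), ("site_a", 1, 1), ("site_a", 1, 0), ("site_a", 0, 0),
    ("site_b", 1, 1), ("site_b", 1, 0), ("site_b", 0, 0), ("site_b", 0, 1),
]

def sensitivity_by_site(records):
    """Fraction of true cancer cases the model flags, computed per institution."""
    tallies = {}
    for site, truth, flag in records:
        if truth:  # sensitivity is defined over cancer cases only
            tp, n = tallies.get(site, (0, 0))
            tallies[site] = (tp + flag, n + 1)
    return {site: tp / n for site, (tp, n) in tallies.items()}

print(sensitivity_by_site(records))  # site_a: 2 of 3 flagged, site_b: 1 of 2
```

A validation that holds up across sites in this breakdown — rather than only in the pooled total — is the property that distinguishes the REDMOD study design from single-institution results.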
BlockAI News' View
Two angles that aren't getting their due. First: this is an early example of AI extracting unrealized value from existing healthcare data — the CT scans REDMOD analyzes have already been taken, paid for, stored, and read by humans. The model creates new clinical value entirely from data already sitting in EHRs. The downstream implications are larger than pancreatic cancer: every routine imaging study, lab panel, and pathology slide ever taken is potentially a substrate for AI-driven incidental detection of conditions that weren't being looked for. Second: the regulatory pathway for tools like REDMOD is the binding bottleneck on near-term adoption — FDA Software-as-a-Medical-Device (SaMD) clearance, hospital IT integration with EHRs and PACS systems, and the medical-malpractice question of "if the AI flagged it and the radiologist missed it, who's responsible" all have to be answered before deployment scales. Mayo's institutional weight makes the FDA clearance pathway easier than it would be for a startup, but the malpractice question hasn't been litigated. Watch for the AI-PACED study readout and the FDA filing in 2026-2027.
Want every AI × Web3 signal the moment it breaks? Subscribe to the BlockAI News daily brief.