The Retrospective Risk Adjustment Mistake That’s Costing You Millions (And How to Fix It)

Most health plans approach retrospective risk adjustment the same way: wait until the year ends, pull all the charts, code everything, submit to CMS, and hope for the best. Then they wonder why their capture rates are disappointing and their audit results are terrifying.

The problem isn’t that retrospective risk adjustment doesn’t work. It’s that most organizations are doing it completely backward.

You’re Starting Too Late

The typical retrospective program doesn’t kick into gear until February or March of the following year. By then, you’re 13-14 months removed from the earliest encounters you’re reviewing. Providers don’t remember those patients. Documentation that made sense at the time is now ambiguous. When you need clarification, you’re asking providers to reconstruct clinical thinking from ancient history.

This delay costs you in two ways. First, your query response rate plummets. Providers are busy. Answering questions about patients they saw over a year ago isn’t a priority. Your queries sit in their inboxes for weeks, and when they finally respond, the answers are often vague because they genuinely don’t remember.

Second, you lose the opportunity to improve documentation for current-year encounters. If Dr. Johnson has a pattern of inadequate CHF documentation, you want to address that in March, not December. Every month you wait, she’s creating more charts with the same documentation gaps.

The health plans that excel at retrospective risk adjustment start reviewing charts within 60-90 days of the encounter. Yes, that’s still retrospective because the encounter already happened. But the trail is warm enough that queries get real answers and documentation patterns can be corrected quickly.
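To make that concrete, here’s a minimal sketch of a review queue built on that 60-90 day window. The file name, column names, and the boolean chart_reviewed flag are illustrative assumptions, not a real encounter feed.

```python
from datetime import date, timedelta

import pandas as pd

# Hypothetical encounter extract: one row per encounter, with the date
# of service and whether the chart has already been through review.
encounters = pd.read_csv("encounters.csv", parse_dates=["service_date"])

today = pd.Timestamp(date.today())
in_window = encounters["service_date"].between(
    today - timedelta(days=90), today - timedelta(days=60)
)

# Charts 60-90 days post-encounter and not yet reviewed: recent enough
# that queries still get real answers from providers.
review_queue = encounters[in_window & ~encounters["chart_reviewed"]]
```

Run something like this weekly and the backlog never ages past the point where providers can still answer a query from memory.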

You’re Targeting the Wrong Charts

Walk into most retrospective risk adjustment programs and they’ll tell you they prioritize high-risk patients. Members with multiple chronic conditions, high utilizers, patients who generated significant claims. This seems logical. These patients should have lots of HCC opportunities.

But this strategy misses half the value. The bigger opportunity isn’t finding more codes on obviously sick patients. It’s finding the gaps where your data indicates a condition exists but it’s not getting coded.

A patient who fills three COPD medications every month but has no COPD diagnosis on any claims is a missed HCC hiding in plain sight. That’s not fraud. It’s usually just a workflow breakdown. The condition is on the problem list. The provider mentions it in notes. But it never makes it onto a billable diagnosis code.

The best retrospective programs use predictive analytics to find these mismatches. Pharmacy data, lab results, and claims history all point to specific conditions. If those conditions aren’t showing up in your coded diagnoses, there’s revenue sitting on the table.
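Here’s a minimal sketch of that mismatch analysis for the COPD example above. The file names, column names, and the drug-to-condition and condition-to-ICD-10 mappings are all illustrative assumptions, not a real payer schema or a clinically validated ruleset.

```python
import pandas as pd

# Hypothetical extracts of pharmacy fills and coded claim diagnoses.
rx = pd.read_csv("pharmacy_fills.csv")    # member_id, drug_class
dx = pd.read_csv("claim_diagnoses.csv")   # member_id, icd10

# Drug classes mapped to the condition they suggest, and each condition
# to the ICD-10 prefixes that should appear on claims if it were coded.
DRUG_TO_CONDITION = {"copd_maintenance_inhaler": "COPD", "insulin": "diabetes"}
CONDITION_TO_ICD10 = {"COPD": ("J44",), "diabetes": ("E10", "E11")}

suspects = []
for drug_class, condition in DRUG_TO_CONDITION.items():
    on_drug = set(rx.loc[rx["drug_class"] == drug_class, "member_id"])
    prefixes = CONDITION_TO_ICD10[condition]
    coded = set(dx.loc[dx["icd10"].str.startswith(prefixes), "member_id"])
    # Pharmacy evidence of the condition, but no matching diagnosis code
    # anywhere in claims: these charts go to the top of the review queue.
    for member_id in sorted(on_drug - coded):
        suspects.append({"member_id": member_id, "suspected": condition})

suspect_list = pd.DataFrame(suspects)
print(suspect_list)
```

In a real program the mapping tables would come from clinically validated suspecting logic, and you’d layer in lab results and prior-year diagnoses the same way.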

This requires more sophisticated data analysis than just sorting by member risk score. But it also produces much better results because you’re fishing where the fish actually are.

You’re Not Building Real Audit Defensibility

Ask most retrospective risk adjustment programs how they’re preparing for RADV audits, and they’ll say “we make sure documentation supports the codes.” That’s necessary but not sufficient.

Real audit defensibility requires preserving the evidence trail. When your coder assigns an HCC based on specific documentation in a chart, that link needs to be captured and preserved. Three years from now when CMS audits that member, you need to produce, within hours rather than weeks, the exact evidence that justified the code.

Most retrospective programs don’t do this. They track that codes were assigned. They might note that QA reviewed certain charts. But they don’t systematically capture and preserve the connection between each code and its supporting documentation.

This creates chaos during audits. You’re scrambling to recreate the coder’s thinking from years ago. Sometimes the evidence is still there. Sometimes it’s not. Sometimes you find different documentation than what the original coder saw, and now you’re not sure if the code was ever properly supported.

Fix this by implementing systems that automatically link codes to evidence. When a coder accepts an HCC suggestion, the system should capture the specific MEAT criteria (Monitoring, Evaluation, Assessment, Treatment) from the note. That connection stays intact in your audit repository, ready to produce on demand.
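What that link might look like in practice: a minimal sketch of an evidence-link record, with hypothetical field names rather than any standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class EvidenceLink:
    """One code-to-evidence link, captured when a coder accepts an HCC."""

    member_id: str
    encounter_id: str
    hcc: str            # the HCC category assigned
    icd10: str          # the diagnosis code that maps to that HCC
    note_id: str        # the source document the coder relied on
    meat_excerpt: str   # the exact Monitoring/Evaluation/Assessment/
                        # Treatment language supporting the code
    coder_id: str
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def record_acceptance(audit_repo: list[EvidenceLink], link: EvidenceLink) -> None:
    """Preserve the link. In production this would be durable,
    append-only storage, not an in-memory list."""
    audit_repo.append(link)
```

The point is that the excerpt, the source note, and the coder are bound to the code at acceptance time, so an auditor’s request three years later is a lookup, not an investigation.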

You’re Measuring Activity Instead of Outcomes

Most retrospective programs measure how many charts they reviewed, how many HCCs they identified, and how many got submitted. These are activity metrics. They tell you how busy your team was. They don’t tell you if you’re actually succeeding.

The outcome metrics that matter are capture rate change (are you finding conditions you would’ve missed?), coding accuracy (do your codes survive internal QA?), and audit defensibility (can you produce clean evidence for every code?).

I’ve seen retrospective programs that review 50,000 charts annually but have minimal impact on capture rates because they’re mostly confirming codes that were already being captured through other workflows. That’s expensive validation work, not value creation.

Flip this by measuring incremental value. Of the HCCs your retrospective program identified, how many were net new versus codes that would’ve been captured anyway? That’s your real contribution to revenue.
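One way to compute that, as a sketch: assume a hypothetical capture log in which every submitted HCC is tagged with the workflow that first surfaced it, then count the retrospective captures no other workflow found.

```python
import pandas as pd

# Hypothetical capture log: every submitted HCC for the service year,
# tagged with the workflow that first surfaced it.
hccs = pd.read_csv("hcc_captures.csv")  # member_id, hcc, source_workflow

retro = hccs[hccs["source_workflow"] == "retrospective"]
other = hccs[hccs["source_workflow"] != "retrospective"]

# A retrospective capture is net new only if no other workflow captured
# the same member/HCC pair.
net_new = retro.merge(
    other[["member_id", "hcc"]].drop_duplicates(),
    on=["member_id", "hcc"],
    how="left",
    indicator=True,
).query("_merge == 'left_only'")

incremental_share = len(net_new) / max(len(retro), 1)
print(f"Net-new share of retrospective HCCs: {incremental_share:.1%}")
```

If that net-new share is low, your retrospective program is doing validation work other workflows already paid for.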

What Actually Works

Effective retrospective risk adjustment programs start reviewing charts within 90 days of encounters, use data analytics to find documentation-coding gaps (not just high-risk patients), build systematic audit defensibility into every coding decision, and measure outcomes instead of activity.

They also accept that retrospective is a supplement to prospective and concurrent work, not a replacement. The best programs use retrospective to catch what other processes missed and to identify systematic problems that can be fixed going forward.

If your retrospective program feels like an archaeological dig through ancient charts producing disappointing results, you’re doing it wrong. Start earlier, target smarter, preserve evidence systematically, and measure what actually matters. The revenue and compliance improvements will follow.
