(and neither is “kind of” following it)
Healthcare data that is not completely and accurately reported, but is attested to as being so, always raises the question: "Should plan leadership have known this data was not reported appropriately?" Exactly what do you say to a regulator if you were the certifier, or if you were the one who supplied the data to be certified and it was not thoroughly vetted and therefore inaccurate: "Oops"?
Let's be clear: self-assessment by health plans of data completeness and accuracy is explicit in 42 C.F.R. §422.504(l), Certification of Data that Determine Payment, and in 64 Fed. Reg. 61893-61900, OIG's Compliance Program Guidance for Medicare+Choice Organizations Offering Coordinated Care Plans. Failure to validate the completeness and accuracy of reported data is a significant compliance concern and can develop into a legal issue under the civil False Claims Act ("FCA"), which prohibits knowingly presenting a false claim or knowingly making a false record or statement material to a false claim. The 2009 Fraud Enforcement and Recovery Act ("FERA") expands FCA liability to include the "knowing" retention of overpayments of government funds, and the Affordable Care Act requires that overpayments be reported and repaid within 60 days after identification. Let's face it: if you cannot demonstrate reasonable care by validating reported data before certifying it, and by acting when mistakes are discovered, "they've got you".
So, how does plan leadership ensure reporting completeness and accuracy compliance? Babel Health suggests an informed, holistic approach that addresses the following questions:
– Are capitated provider submissions evaluated for reasonable volume?
– Are you studying your denied claim activity for things like "righteous" denials and overturned denials? Do you know the financial impact of an administratively denied claim whose unreported, high-risk diagnoses would otherwise be used for risk scoring?
– Do you have a way to discover gaps in previously reported, but not currently reported, diagnoses and a standard method to communicate that information to providers?
– Do you perform a comparison between reported expenses and encounter submissions? Can you account for all expenses (reportable, non-reportable)?
– Do you reconcile the records “input to” with the “output from” your encounter system? How much is left on the table and why?
– Are there records that were accepted by one program (RAPS) but not the other (EDPS)?
– Are you able to see the status (accepted, rejected, pre-submission errors, awaiting submission, awaiting response, etc.) of all records at a glance in a dashboard?
– Are unresolvable errors being identified and researched for root cause? Are the root causes being addressed?
– Is rejection repair and re-submission part of routine processes, or crisis-managed when deadlines approach? What is in your backlog to be reviewed and repaired at this moment?
– Are the rejections that matter most being resolved first? How are those being discovered and organized for priority attention?
– Are encounter submissions voided and replaced in a timely and accurate manner?
– How close are you to your predicted financial scores?
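Several of the questions above come down to record-level reconciliation: can you match every record fed into your encounter system to an outcome that came back out? The sketch below, in Python, shows one minimal way to frame that check. The record IDs, status labels, and the `reconcile` helper are hypothetical illustrations, not a description of any particular plan's or vendor's system.

```python
# Minimal reconciliation sketch: compare records submitted to an encounter
# system against the statuses reported back, and surface what is
# "left on the table". All field names and statuses are hypothetical.
from collections import Counter

def reconcile(submitted_ids, status_by_id):
    """Summarize outcomes per record and list records with no response.

    submitted_ids: iterable of record IDs that went into the system.
    status_by_id:  dict mapping record ID -> status string that came back.
    """
    # Records absent from the status feed are counted as "no_response".
    counts = Counter(status_by_id.get(rid, "no_response") for rid in submitted_ids)
    missing = [rid for rid in submitted_ids if rid not in status_by_id]
    return counts, missing

# Example: three records went in; one accepted, one rejected, one silent.
counts, missing = reconcile(
    ["E001", "E002", "E003"],
    {"E001": "accepted", "E002": "rejected"},
)
```

In practice the "missing" list is the starting point for root-cause research: every ID on it represents a record, and potentially a risk-adjustable diagnosis, that never completed the round trip.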
“Risk” will take on a whole new meaning if plan leadership does not ask these questions and ensure there are responsible team members equipped to perform and measure these activities. How do you stack up? Need any help? If so, give us a call and one of our subject matter experts will give you a hand.
– Deb Kircher, Vice President, Operations