What we learned in 2017 - MIPS performance year one
Posted by Blake McWilliams on 2018-02-07
The first performance year of the Merit-based Incentive Payment System (MIPS) has been a learning experience for everyone who participated in it. Now that 2017 has ended and we are into the 2018 MIPS performance year, we have taken a step back and asked ourselves: what are the takeaways? Did any patterns emerge that can inform us moving forward?
Throughout 2017, we carefully listened to our clients with the intent of discovering their questions and concerns regarding MIPS. The complexity and scale of the system generated significant confusion in both small and large clinical settings. And while this could simply be attributed to the novelty of unfamiliar rules, many of our clients were already wrestling with more sophisticated and targeted issues, including:
“I don’t know which measures give me the best opportunity to be successful.”
“I looked at these measures and I don’t know what’s useful to my physicians.”
“I don’t know a good way to collect this data and submit it, and it would be great to streamline the process.”
We thought about these issues and realized they revealed a larger question: “What does success under MIPS look like?” How does an organization define success in a way that aligns with its goals as a healthcare provider?
At the beginning of 2017, it is arguable that many would have defined success under MIPS as receiving a positive reimbursement adjustment. While not especially useful for gathering specialty-specific data, collecting benchmarkable information from CMS’ own cross-cutting general quality measures offered the possibility of a higher score. But as we touched upon in an earlier post, achieving quality performance scores that secure clinicians a positive reimbursement adjustment will become more challenging going forward. Understanding this systemic element can help organizations avoid a MIPS strategy that devotes resources to chasing ever-diminishing (or even unattainable) rewards.
In addition to lessons about a provider’s MIPS quality performance score, performance year one also revealed that the MIPS Clinical Practice Improvement Activities (CPIA) score could be satisfied simply through the digital collection of patient-reported outcome measure (PROM) data through a Qualified Clinical Data Registry (QCDR). This step by itself ensured a provider would avoid any negative reimbursement adjustment for 2017.
As performance year one advanced, we detected a shift in how our OBERD community of users defined success, expressed in the desire of many to collect data more relevant to the patient’s care and to the provider’s practice specialty. With these insights in mind, OBERD, as a QCDR, wrote new specialty-oriented measures to capture this more meaningful data. These measures, approved by CMS for performance year two (2018), are specifically designed to leverage familiar outcome forms already in common use, such as the KOOS, HOOS, and ODI.
By the end of 2017, two basic strategic paths for navigating MIPS had begun to emerge, depending on whether the goal was achieving a positive adjustment or simply avoiding a negative one. The opportunity to weave together a PROM strategy and a MIPS strategy offered an intriguing way forward: reframing MIPS from a compliance necessity into a catalyst for onboarding a PROM platform, collecting a uniform set of specific data for multiple purposes, and reporting through a CMS-recognized QCDR.
In a forthcoming post, we’ll explore each of these strategic pathways, and offer our recommendations.