Researchers Call for ‘Reimbursement Framework’ for Healthcare AI

Researchers argue that the responsible adoption of artificial intelligence in healthcare requires that financial incentives be implemented to ensure high-quality care and health equity and to mitigate potential bias.

By Shania Kennedy

A paper published in npj Digital Medicine earlier this month outlines a potential “reimbursement framework” for the adoption of healthcare artificial intelligence (AI), which the researchers argue will ensure that healthcare organizations are financially incentivized, at a sustainable level, to support quality of care, health equity, and the mitigation of potential biases.

In the paper, the researchers state that AI designed and deployed to promote safety and efficacy in healthcare is poised to address rising costs and limited access to care and to advance health equity. This is particularly true for AI systems regulated by the Food and Drug Administration (FDA), which have demonstrated a positive impact in the care settings where they are used.

When choosing whether to adopt and deploy an AI tool, healthcare providers are heavily influenced by the financial incentives attached to the services the AI would assist with, the authors state. In particular, reimbursement and insurance coverage are critical determinants of AI adoption.

Currently, the Centers for Medicare and Medicaid Services (CMS) has established a national payment plan for one FDA-authorized autonomous AI system. The agency has also created a national add-on payment for assistive AI under the New Technology Add-on Payment (NTAP) framework.

The researchers state that NTAP has led to AI payments by CMS, but that the framework is severely limited: it is technology-specific, has a complicated approval pathway, and covers only services provided to inpatients, among other constraints. As a result, other researchers have attempted to develop additional healthcare AI payment frameworks, but the paper’s authors claim that these do not account for the often complex coverage and reimbursement systems already in place in the US. Further, they ignore the role of affected stakeholders, including patients, providers, legislators, payers, and AI developers.

As an alternative, the authors propose their own framework, which is designed to be transparent, to maximize alignment with ethical frameworks for healthcare AI, to better balance the ethical, equity, workflow, cost, and value perspectives on AI services, to enhance support from affected stakeholders, and to map onto the existing payment and coverage systems in the US. These properties, the authors argue, are essential to ensuring quality of care, health equity, and the mitigation of potential bias in organizations adopting AI.

To show how their framework maps onto existing reimbursement, regulatory, and care processes, the researchers applied it to a case study of an autonomous AI system that has been widely adopted in clinical settings and is used to diagnose diabetic retinopathy and diabetic macular edema without human oversight.

The authors state that the tool’s creator charges $55 per patient for the AI service, covering the AI’s work and the acquisition of its inputs and outputs. In doing so, the researchers argue, the creator uses “access-maximizing value,” a conceptualization of value in the authors’ framework designed to decrease expenditure per patient while incentivizing access to the service.

For example, the creator could have chosen more expensive retinal camera hardware, which might allow higher accuracy and diagnosability at lower expense for the diagnostic AI algorithms, but this would increase the cost per patient. Alternatively, a less expensive camera would require more sophisticated, and therefore costlier, AI algorithms, but the creator can still choose to charge $55 because AI algorithms are scalable, allowing the extra cost to be spread across more patients.
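To make the amortization logic concrete, the short sketch below models per-patient cost as fixed costs (hardware plus algorithm development) spread over patient volume. All figures, parameter names, and scenarios here are illustrative assumptions for this example; the paper does not publish a cost model.

```python
def per_patient_cost(hardware_cost, algorithm_dev_cost, per_exam_cost, n_patients):
    """Illustrative per-patient cost: fixed costs amortized over patient volume.

    All inputs are hypothetical assumptions, not figures from the paper.
    """
    fixed_costs = hardware_cost + algorithm_dev_cost
    return fixed_costs / n_patients + per_exam_cost

# Scenario A: pricier camera, simpler (cheaper) diagnostic algorithm,
# deployed at a modest patient volume.
scenario_a = per_patient_cost(
    hardware_cost=20_000, algorithm_dev_cost=1_000_000,
    per_exam_cost=5, n_patients=50_000,
)

# Scenario B: cheaper camera, more sophisticated (costlier) algorithm;
# because software scales, the extra cost spreads over a larger volume.
scenario_b = per_patient_cost(
    hardware_cost=5_000, algorithm_dev_cost=3_000_000,
    per_exam_cost=5, n_patients=250_000,
)

print(f"Scenario A: ${scenario_a:.2f} per patient")  # $25.40
print(f"Scenario B: ${scenario_b:.2f} per patient")  # $17.02
```

In both hypothetical scenarios the amortized cost stays below the $55 charge, which illustrates why scalable algorithms let a creator hold the price constant while shifting the split between hardware and algorithm costs.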

The authors also note that, under their framework, multiple “guardrails” and reimbursement measures overseen by stakeholders can help enforce ethical principles in the AI’s implementation and use. For example, the FDA authorized the device, the American Diabetes Association cited its safety and efficacy in its Standards of Medical Care in Diabetes, CMS established a national payment plan for it, and commercial insurance providers used the CMS rate, among other factors, to set their own payment amounts.

The authors state that these stakeholder decisions, taken together, struck a balance among the ethical, workflow, cost, and value considerations for AI in healthcare.

Similar analyses using their framework can guide the development of future AI services and tools, according to the researchers.