Features

Exploring Patient, Provider Perceptions of Healthcare AI

Advances in healthcare artificial intelligence have generated significant hype, but patients and providers remain wary of these new technologies.


Artificial intelligence (AI) has been the subject of intense interest and scrutiny within the medical community in recent years, presenting significant pros and cons that continue to fuel debate in the healthcare sector.

The Research and Markets "Thematic Intelligence: AI in Clinical Practice - 2023" report indicates that AI is becoming embedded in healthcare. But as with any new technology, realizing the full potential of health AI depends on securing buy-in and uptake from stakeholders.

Patients and providers are two of the major groups impacted by AI’s foray into healthcare, and their perceptions of and trust in these technologies are critical to making health AI both viable and valuable. However, the current literature reflects a wide range of perspectives and concerns about healthcare AI solutions.

Here, HealthITAnalytics will explore how patients, providers, and others perceive AI in healthcare while investigating how to build the necessary trust in these tools.

PATIENTS' POINT OF VIEW

Naturally, patients have a significant stake in guiding the development and implementation of AI, as many of these solutions are developed for use as clinical decision support systems or diagnostic tools.

A recent Morning Consult survey found that approximately 70 percent of adults in the United States have concerns about the increased use of AI in healthcare, with concern levels varying by age group. Roughly 77 percent of Baby Boomer and 70 percent of Gen X respondents reported concerns, compared to 63 percent of Millennial and Gen Z respondents.

Comfort levels also appear to depend on the type of task the health AI is performing, according to research conducted by SurveyMonkey and Outbreaks Near Me, a collaboration between epidemiologists from Boston Children's Hospital and Harvard Medical School.

The survey found that 32 percent of US adults would be comfortable with AI leading a primary care appointment, but only 25 percent reported being comfortable with AI-led therapy.

Overall, the findings suggest that the general population prefers important healthcare tasks be led by a medical professional rather than an AI: 84 percent of respondents preferred a provider for prescribing pain medication, 80 percent for diagnosing a rash on the arm, 71 percent for reading a scan, and 69 percent for managing a patient’s diet.

Perceptions of health AI can be further characterized based on medical specialty, as demonstrated by studies exploring the perspectives of parents and children in pediatric care.

One May 2022 study published in Academic Pediatrics showed that areas of greatest concern for parents regarding computer-assisted healthcare for their children during an emergency department visit were diagnostic errors and incorrect treatment recommendations. Discomfort with AI was greater among Black non-Hispanic parents and younger parents.

The greatest perceived benefits of computer-assisted healthcare were obtaining a rapid diagnosis and catching something a human clinician may miss. The majority of parents also reported being comfortable with the technology’s use in a handful of clinical scenarios: 77.6 percent for determining the need for antibiotics, 77.5 percent for radiograph interpretation, and 76.5 percent for bloodwork.

Research evaluating the attitudes of children and youth about the use of healthcare AI found that they generally have a positive perception of the technology. Respondents appeared to possess significant knowledge of AI and a willingness to engage with it, expressing a strong desire for inclusion and involvement in AI research and deployment.

In general, across demographics and medical specialties, patients appear to believe that AI will improve healthcare long-term. A study published last year in JAMA Network Open showed that over half of patients think that the technology will make healthcare at least somewhat better.

Regardless of whether patients believe that AI will improve healthcare or feel comfortable with its use in certain clinical applications, one trend is consistent: when AI is used in healthcare, patients want it to assist rather than replace clinicians. This is demonstrated by recent research at Washington University in St. Louis examining the integration of machine learning (ML) in medical diagnostics.

PROVIDER AND MEDICAL STUDENT PERSPECTIVES

Providers’ perspectives on health AI are critical, as care teams serve as the main touchpoint for patients navigating their care journeys. Interactions with their providers significantly impact patient trust, and whether that trust is maintained can affect patient outcomes.

If providers do not have positive perceptions of or trust in health AI tools, patients are unlikely to feel comfortable with them. Further, if clinicians do not believe that AI improves care delivery, they are unlikely to support its deployment and use, limiting the potential utility of these technologies in healthcare.

Clinician acceptance of AI requires addressing and balancing multiple factors, according to an npj Digital Medicine study published in June. The researchers found that perceived loss of professional autonomy and difficulty integrating AI into clinical workflows were the two most prominent factors hindering AI acceptance among clinicians working in hospitals.

However, including end-users in the early stages of an AI tool’s development and providing appropriate training regarding the tool’s use were found to facilitate AI acceptance.

These concerns around healthcare AI are especially salient for two groups: radiologists and medical students.

Research published this year in Academic Radiology evaluated the AI perceptions of medical students, radiology trainees, and radiologists. The study found that 22 percent of participants were less likely to choose radiology as a career due to concerns about advances in AI.

Medical students were more concerned about these advances than radiology trainees and radiologists, who demonstrated greater baseline AI knowledge; students were also more concerned about the potential threat of AI to the radiology job market.

However, 79 percent of the cohort agreed that "AI will revolutionize radiology in the future."

These concerns about health AI were echoed in a June 2022 study, which revealed that medical students and interns expect AI to impact their specialty choice and want medical school curricula to include training on AI competencies.

Experts writing in medRxiv found that medical students are interested in the opportunities that health AI may bring but want curricular development to focus on basic AI knowledge, including information on potential applications, reliability, risks, and ethics.

The concerns about AI in specialties like radiology may also contribute to recruitment challenges, per a September 2022 study. The study revealed that 23 percent of medical students would not consider specializing in diagnostic radiology because they believe that AI and other emerging technologies would make the specialty obsolete.

Researchers writing in Current Problems in Diagnostic Radiology earlier this year rejected this assumption, however, arguing that recruiting medical students to radiology has become challenging because of misinformation related to AI’s implementation in the field.

The authors instead posited that AI may revolutionize radiology by supporting specialists and advancing the field in the future.

They further noted that a rapidly aging population increases the need for both imaging and radiologists, underscoring the importance of medical student recruitment. Addressing misconceptions about AI, detailing its potential applications, and highlighting radiologists’ role in AI development are key to improving recruitment, they stated.

Research published last year in Clinical Imaging supports these assertions, finding that introducing radiology residents to AI-based decision support systems (AI-DSS) within clinical workflows can improve their perception of these tools and their utility.

BUILDING TRUST IN HEALTHCARE AI

A key factor shaping patients' and providers' views of healthcare AI is trust. Establishing this trust can be extremely difficult, but experts across the healthcare industry are working to drive progress in this area.

Researchers writing in BMC Medical Ethics suggest that the role of trust in, and the trustworthiness of, AI must be tackled as these tools become more ubiquitous in clinical settings. The authors asserted that trustworthy processes must be devised to help build trustworthy AI.

Specifically, they suggest that any advanced technology must be leveraged within the context of a trust relationship. In healthcare, this means patients trusting the clinicians who use AI, which can only follow from clinicians trusting healthcare AI developers enough to feel comfortable deploying the technology.

The researchers also noted that there needs to be trust in the AI tool itself or the knowledge that it provides.

There are many ways to help build this trust, but also many ways to erode it. For instance, concerns about patient privacy and provider liability are significant hurdles to establishing trust.

A January 2021 study highlighted the ongoing struggle to determine liability in medical malpractice cases that involve healthcare AI. The researchers found that potential jurors may not be opposed to clinicians accepting medical recommendations provided by AI, indicating that providers who follow AI advice could face less malpractice liability and further complicating an already contentious issue.

Patient privacy is also a significant barrier to healthcare AI development, as concerns grow about tech companies’ use of medical data and the lack of adequate privacy protections.

A special report published in 2020 in Radiology argued that clinical data should be considered a “public good” when leveraged for medical research or health AI development. But it also noted that such a framework requires that those data and patient privacy be protected.

The role of tech companies in the development of healthcare AI has also been called into question, with experts asking for more transparency and guardrails.

Last month, US Senator Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, wrote a letter to Google CEO Sundar Pichai regarding concerns about the deployment of Med-PaLM 2, the company’s large language model (LLM) for use in healthcare.

The letter cites transparency, patient privacy protections, and ethical guardrails as concerns around the model’s use, alleging that Google and other companies are racing to develop healthcare AI in an effort to capture market share while disregarding many of the potential risks.

Despite these concerns, stakeholders are working to develop frameworks for the ethical and responsible use of AI across industries, including healthcare. Two of these frameworks are the Coalition for Health AI's (CHAI) ‘Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare’ and the White House Blueprint for an AI Bill of Rights, which stakeholders indicated may significantly impact healthcare and health AI regulation.

Experts recommend an evidence-based AI development and deployment approach to help standardize AI practices and tackle systemic issues associated with health AI. They also argue that responsible AI deployment requires collaboration among stakeholders to help establish best practices and develop industry standards.