Artificial intelligence and machine learning are quickly overhauling the processes of researching, purchasing, and implementing IT tools in the healthcare industry.
With new breakthroughs announced almost every day and thousands of companies competing for a piece of a spectacularly lucrative market, healthcare organizations have their hands full keeping up with the latest offerings and cutting-edge tools.
Vendor selection is a challenging component of any health IT project, and it can be even more difficult in a marketplace full of hype, hope, and big ideas. But integrating artificial intelligence into a service line or workflow demands more than just a competitive procurement process.
At the moment, at least, artificial intelligence isn’t something that can be passively infused into an organization like a teabag into a cup of hot water. AI must be deployed carefully, piece by piece, in a measured and measurable way.
Organizations must create a comprehensive plan for how, when, and why to add AI to existing clinical or operational pathways, and must carefully monitor the short-term and long-term impacts of doing so – especially if patient safety comes into question.
What are the key components of developing a roadmap for a healthcare artificial intelligence project, and how can provider organizations avoid some of the common pitfalls of adopting technologies that are still finding their feet in a rapidly changing environment?
Identify a defined use case with a need for improvement
Starting small can be a good thing when it comes to AI. Skepticism over the maturity and value of machine learning tools is still commonplace, so organizational champions will likely need to prove their point with a pilot or well-defined program before achieving broader acceptance.
While overarching governance and a sense of how to bring new tools onto the main stage is essential for creating any new initiative, deploying a new technology in a limited environment can also give organizations the chance to experiment, iterate, and solve problems before affecting operations on a larger scale.
Defining a use case can help organizations understand exactly what data they need – and exactly what the resulting capabilities will be.
Clear expectations about the scope and scale of output will help prevent a subsequent rush on the analytics department for reports and functions that may not actually be available, says Soyal Momin, Vice President of Data and Analytics at Presbyterian Healthcare Services.
“It’s very important to understand the parameters of what you’re doing, because as soon as you open up the candy shop, everyone is going to come to you and say, ‘I need this and this and that and this other thing on the side,’” he said.
“If you don’t classify your opportunities into ‘crawl, walk, and run,’ you’re going to get lost in those one-off demands and you’re not going to be systematic in the way you develop your programs.”
There are use cases aplenty for machine learning and AI, from the worlds of clinical decision support and fraud detection to patient engagement and workflow optimization.
Whether an organization is looking to develop its own AI algorithms or adopt a pre-fabricated vendor product, some promising areas of exploration include:
Imaging analytics in radiology and pathology
AI-driven imaging analytics has taken off like a rocket due to advances in pattern recognition and pixel-by-pixel examination of high resolution pathology slides and imaging studies.
One of the first clinical decision support algorithms cleared by the FDA is a radiology support tool that identifies distal radius fractures, a common wrist injury.
Using the tool could help radiologists move more quickly through low-complexity cases and focus their time and effort on more challenging patients in need of high-level expertise.
Pathology is a similarly promising area for machine learning to shine. AI can identify features of interest in slides that may be too subtle for a human reviewer to catch, helping surface concerns more quickly and accurately than before.
EHR workflow optimization
Physician burnout is a top concern for healthcare organizations of all types and sizes, and the electronic health record (EHR) is often to blame for frustration and wasted time. Using artificial intelligence to automate administrative tasks, add valuable features to EHRs, and simplify workflows is a high priority for vendors.
If deployed correctly, natural language processing tools may be able to enhance voice-to-text dictation, summarize documents, and pull out relevant data elements from free-text documentation for analytics purposes.
Medication safety and reconciliation
Medication safety is a key concern for patients and providers. As the majority of prescribing shifts to electronic systems, data about medications is becoming increasingly available for analytics.
Machine learning tools could help to catch dosage errors, identify potential negative interactions, limit the potential for opioid abuse, and ensure that the right patients are receiving the right treatments for high-risk conditions like hospital-acquired infections.
Predictive analytics and risk scoring
Population health management is an overarching goal for most healthcare organizations, and AI is increasingly becoming an important tool for stratifying risk, identifying hotspots in chronic disease, and accounting for the social determinants of health.
Natural language processing tools could help to flag socioeconomic terms hidden in free text documentation, while machine learning can aid in the development of comprehensive risk scores by proactively identifying trends in lab results, diagnoses, or other clinical and social data.
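As a minimal sketch of the first idea, a simple keyword scan can flag socioeconomic terms in free-text notes. The term list, function name, and note text below are all illustrative; a production system would use a curated lexicon or a trained clinical NLP model rather than hand-picked keywords.

```python
import re

# Illustrative socioeconomic terms a care team might watch for;
# not a validated lexicon.
SDOH_TERMS = ["homeless", "food insecurity", "unemployed", "no transportation"]

def flag_sdoh_terms(note_text):
    """Return the socioeconomic terms found in a free-text clinical note."""
    lowered = note_text.lower()
    found = []
    for term in SDOH_TERMS:
        # Word boundaries avoid matching substrings inside other words.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append(term)
    return found

note = "Patient is currently homeless and reports food insecurity."
print(flag_sdoh_terms(note))  # -> ['homeless', 'food insecurity']
```

Even a crude flag like this can feed a downstream risk score, which is why free-text documentation is such an attractive target for NLP.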
Precision medicine and clinical trial matching
Pharmaceutical companies are especially excited about the prospect of integrating AI into the processes of drug discovery, treatment pathway guidance, and clinical trial matching. Reducing the time, expense, and failure rate of bringing new drugs to market could significantly improve the way oncologists, neurologists, and other specialists work with patients on treatment options.
Patient engagement and management
From chatbots to ambient computing to consumer profiling, the use cases for patient-facing AI are largely focused on improving engagement and reducing non-adherence to chronic disease management tasks.
Using machine learning, healthcare organizations can create personalized interactions that meet patients where they are on their health journeys, allowing providers to both collect and send key data to improve the long-term management process.
Organizations could even consider using NLP and machine learning to comb through CAHPS data and identify pain points in patient responses to inform future improvement plans, a recent report suggested.
Fraud detection and financial risk analytics
AI is likely to have a major impact on the financial side of care, and reducing waste, preventing fraud, or optimizing workforce management are all attractive prospects for early adopters.
Pattern recognition strategies work well on transactional data, and machine learning tools could flag abnormalities in a specific provider’s claims patterns, identify opportunities to reduce duplicative or unnecessary services like lab tests and expensive imaging studies, allocate staff hours based on expected peak demand, or schedule operating room time most efficiently.
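A toy version of that claims-pattern idea can be sketched with a z-score test over per-provider claim volumes. The provider names, counts, and threshold below are hypothetical, and a real fraud model would use many features beyond a single count.

```python
from statistics import mean, stdev

# Hypothetical monthly claim counts per provider; real inputs would come
# from a claims warehouse with far richer features.
claims = {"prov_a": 110, "prov_b": 95, "prov_c": 105, "prov_d": 300, "prov_e": 100}

def flag_outliers(counts, threshold=1.5):
    """Flag providers whose claim volume sits more than `threshold`
    sample standard deviations from the group mean."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    return [p for p, v in counts.items() if abs(v - mu) / sigma > threshold]

print(flag_outliers(claims))  # -> ['prov_d']
```

The same abnormality-detection pattern generalizes to duplicate-service detection and utilization monitoring: define a baseline, then surface the records that deviate from it.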
Cybersecurity and threat detection
Healthcare organizations can’t be too careful about their cybersecurity in the era of massive data breaches and crippling ransomware attacks. Machine learning tools can help to get ahead of hackers by quickly identifying suspicious activities in complex infrastructure systems.
With new threats evolving at an incredibly fast pace, using AI to detect attacks that may look different than any previous attempt to break into a system could help avoid exposing key patient data and damaging consumer trust.
Assess where your organization lies on the maturity curve
Artificial intelligence is fairly new for everyone, but the strategies involved in success have already been tried and tested in organizations deploying more traditional big data analytics programs.
Organizations that do not have a core set of competencies in place to maximize the value of other types of data analytics technologies will struggle to realize a return on their investment in AI, as well, cautioned Steve Griffiths, PhD, Senior Vice President and Chief Operating Officer of Optum Enterprise Analytics.
“Everyone is on their own maturity curve around analytics, AI, and innovation,” he said. “It’s important to know exactly where you stand before you start moving forward with something new.”
“I like to compare it to Maslow’s hierarchy of needs: there are some basic skills you need to have before you can start to self-actualize.”
Organizations need to have a firm grasp on descriptive analytics and feel confident that they understand the populations they’re serving, their financial risk portfolios, and the human and technical resources at their disposal before tackling advanced data-driven projects, he advised.
“Many organizations don’t have the base layers in place before trying to jump three or four rungs up the ladder,” he observed. “If you don’t optimize your analytics specifically to drive your strategic initiatives, you are going to have a difficult time getting to the ROI you’re looking for.”
Momin agrees that overall maturity and an ongoing commitment to growing as an organization are just as important as choosing a good vendor for seeing impressive results.
“A lot of organizations think that you bring in a technology vendor, you implement a data warehouse, and voila – you’re done,” he said. “But if you don’t think about everything that goes along with it – training your people, engaging them to use the tools correctly, optimizing the workflows – then how are you going to integrate the technology into your organization?”
“Change management is observing, measuring, and making appropriate changes to all of those aspects of a deployment. Without that, you’re not going to generate any type of long-term value.”
To move up the maturity ladder, healthcare organizations should consider:
- Securing support from the very highest levels of leadership, including clinical, financial, and executive decision-makers in the C-suite
- Tapping clinical and technical champions in key roles across multiple departments to generate buy-in, take suggestions, answer questions, and explain changes in workflow or other processes
- Paying close attention to interoperability, data governance, and data integrity to ensure that all data involved in artificial intelligence projects is clean, accurate, up-to-date, and trustworthy
- Investing in self-service analytics tools and strategies that allow service line owners to engage with data in a visually-appealing, simple-to-manage manner
- Communicating clearly, comprehensively, and often with stakeholders across the organization, including IT leaders, health information management professionals, clinical stakeholders, and patient advocates to ensure that data is doing its job to improve the organization overall
- Supporting a culture of accountability, flexibility, and innovation across all departments to promote data-driven decision-making as a core competency for every member of the team
Carefully evaluate vendors for transparency and results
Artificial intelligence is undeniably cool, and marketing teams across all technology sectors know it.
It is easy to get caught up in the hype of what the technology is and pay less attention to what it actually does, says Steve Lefar, CEO of Applied Pathways.
“Many of my fellow CEOs are concerned that all the hype hurts everyone’s cause,” he said. “The risks of overstating your abilities in the healthcare industry are huge. The implications are literally life and death. No one benefits from promising too much to consumers and being unable to measure up.”
Sameer Badlani, MD, FACP, Chief Health Information Officer at Sutter Health, also believes that organizations should tread carefully around vendors that tout the use of AI instead of the results the software can achieve.
“When we first started talking to our vendor, the CEO never spent more than two or three sentences describing the technology,” he said.
“That wasn’t the selling point. His pitch was solving the problem, and I think that often gets missed in the hype about AI and machine learning.”
Getting too caught up in an AI label can bring a number of different problems for providers, including unsolved questions around liability and legal responsibility if something goes wrong.
By its very nature, artificial intelligence is generally too complex for laypeople to understand. Current AI efforts are geared towards synthesizing enormous volumes of data more quickly and comprehensively than an organic brain could manage, meaning the answers that pop out of the algorithm might not be easy to verify manually.
“Black box” AI tools complicate the issue by presenting recommendations without making their analytics processes clear to informed end-users. Black box tools may obfuscate their data sources, hide their metrics and definitions, or fail to provide context for their results.
Potential customers should consider a purposeful lack of transparency a bug, not a feature – and vendors should not try to rationalize clear lapses in transparency that could be fixed.
This is especially true if the tools are intended to touch patients in any way, stressed Lefar.
“There isn’t a single doctor or nurse that will accept a recommendation without knowing all the rationale behind it – let alone a lawyer,” he said. “The legal ramifications and compliance issues of having a machine make a decision haven’t been worked out yet. We’re not prepared for it.”
Organizations may also be unprepared to get as much value out of AI tools as they expect without carefully evaluating every component of the project, added Griffiths.
“We do see that some organizations don’t get as much out of an AI purchase as they would hope,” Griffiths told HealthITAnalytics.com. “Many times, that is because they haven’t linked the product or initiative to their overall business strategy, so they don’t have a change management program around the new approach.”
“Other times, it’s because they don’t assess their needs and goals at the beginning – sometimes that’s because they feel like AI will simply come in and fix all their problems. That isn’t how AI works. It is not a panacea. If you treat it that way, you are going to underutilize your investment and be disappointed in the results.”
Griffiths suggests carefully evaluating what wrap-around services can be included in a vendor contract to ensure long-term, comprehensive support, even for a limited deployment.
These value-add services may include on-demand customer service, an implementation expert on site during go-live, or the availability of “as-a-service” analytics or cloud storage opportunities to supplement on-premises capabilities.
Identify metrics to gauge success or failure
Measurement and management must go hand-in-hand for a successful health IT deployment, especially if an emerging strategy like machine learning is involved.
The specific focus of the project will determine what metrics are most applicable for identifying improvements or shortfalls, but organizations should strive to include a mix of process measures and outcomes measures to fully assess the impact of a new tool.
“AI with an ROI is the goal,” stated Griffiths. “Creating a new algorithm or deploying a new tool is great, but you don’t want to do something just to say you’ve done it – this is especially true for machine learning or AI.”
“Before you even begin, you have to think about how this project is going to drive value, ultimately, to your business, to your consumers, and to your staff.”
For clinical projects or patient experience initiatives, organizations may consider developing metrics that help to answer the following questions:
Does this improve the patient experience?
Do patients notice a difference in their care? Are they satisfied with the way their appointments are conducted? Are they pleased with more frequent digital interactions with their providers, or do they find enhanced communications annoying or unhelpful? Do they feel as if their providers are more informed or more able to make decisions that will help them meet their personal goals?
Is provider satisfaction higher or lower?
Do new workflows help providers complete their tasks more effectively and efficiently? Do providers feel as if new data tools or dashboards give them valuable information they could not access before? Do providers feel confident that artificial intelligence tools are trustworthy and reliable for decision-making? Do they feel threatened in any way by AI?
Are clinical outcomes improving?
Is medication adherence increasing? Are patients with targeted conditions performing better during routine screenings or monitoring? Are the rates of new chronic disease diagnoses decreasing over time? Is avoidable hospital utilization dropping? Are patients less likely to be diagnosed with a hospital acquired condition (HAC)?
Has performance on standard quality measurement frameworks changed?
How are HCAHPS and HEDIS scores changing? Is AI helping to improve performance in programs such as MIPS or Promoting Interoperability? What about other quality measures associated with risk-based contracting, accountable care organizations, or pay-for-performance contracting?
For projects with financial or cybersecurity goals, the questions asked and metrics collected may look a little different.
Organizations investing in machine learning for these areas may wish to monitor key performance indicators (KPIs) related to employee training and human factors risks around cybersecurity, measurable fluctuations in revenue or income, fraud and improper payments rates, claims clearance and denials rates, hospital utilization management, or documentation and coding quality issues.
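One of those KPIs, the claims denial rate, can be computed with a few lines over claims records. The record structure and field names here are illustrative, not drawn from any real billing system; the point is that each KPI should reduce to an explicit, shared calculation.

```python
# Toy claims records; the "status" field and its values are illustrative.
claims = [
    {"id": 1, "status": "paid"},
    {"id": 2, "status": "denied"},
    {"id": 3, "status": "paid"},
    {"id": 4, "status": "denied"},
    {"id": 5, "status": "paid"},
]

def denial_rate(records):
    """Share of claims denied -- one KPI a revenue cycle team might track."""
    denied = sum(1 for r in records if r["status"] == "denied")
    return denied / len(records)

print(f"Denial rate: {denial_rate(claims):.0%}")  # -> Denial rate: 40%
```

Writing the calculation down explicitly, rather than leaving it implicit in a dashboard, is exactly the kind of shared data definition the next paragraphs call for.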
No matter what the metric, organizations should be very careful about ensuring that the data definitions behind each measurement are clearly stated and shared appropriately across the organization.
All staff involved in performing analytics, generating reports, or using data to make decisions should understand exactly what each element means and how it should be applied to the questions at hand.
“Data quality and integrity is the foundation for everything else going forward, for both clinical and financial applications,” stressed Momin.
“You can’t report on any of those metrics if you can’t trust your data to begin with. And if you don’t extend that trust to the end-users, so that they understand what they’re seeing and believe it’s an accurate reflection of reality, you can’t create that data-driven culture.”
Developing a data governance team to oversee the process of creating, adjusting, and sharing data definitions is an important step for organizations that wish to accurately monitor their analytics progress.
Deploy, optimize, and iterate
Once an organization has decided on a use case, chosen a vendor, apportioned the correct resources, and established metrics to gauge success, it must make the final leap towards implementation.
Skipping any one of the fundamental planning steps could spell trouble down the line, a recent survey from Infosys indicates. Nearly half of respondents from organizations with well-defined implementation strategies saw measurable improvements on their targeted metrics, the survey said.
In contrast, just a third of participants from organizations without a comprehensive strategic plan saw progress towards their goals.
“AI initiatives should be thought of as an opportunity to reinvent every aspect of business for the better,” the authors of the report said.
“Business leaders need to evolve their skills and also gain a deeper understanding of the technologies that are driving their business forward. If they do not, they will not be able to maximize the benefits of their AI or their employees, and they might find that they themselves have become obsolete.”
Borrowing an approach from the software development world could help organizations stay nimble and adaptive while monitoring the ongoing impacts of a new process or tool, suggests Malaz Boustani, MD, MPH, Founding Director of the Indiana University Center for Health Innovation and Implementation Science of the Indiana Clinical and Translational Sciences Institute.
Agile implementation, which encourages organizations to respond quickly to changes through an iterative, collaborative approach, “allows you to avoid being paralyzed by perfection,” Boustani said.
“The first version of anything will not be perfect,” he pointed out. “If you can accept that your technique doesn’t need to be flawless before appropriate implementation, you can do much more – and do it quickly.”
“Develop a prototype, make sure you have sensors in place to collect meaningful feedback, and use that data to revise and react to what isn’t functioning at its best.”
This approach is better suited to a smaller-scale AI pilot or a new care process than to a comprehensive rip-and-replace EHR implementation or system-wide technical overhaul.
Agile implementation allows organizations to bring semi-independent teams together in a meaningful and flexible way. And a consistent feedback loop – based on well-defined metrics as described above – can keep conversations going around quality, impact, and satisfaction.
“There is so much frustration and burnout from trying to force people into using technologies and processes that don’t fit,” Boustani observed. “Agile implementation helps to bridge some of those gaps and provide a framework for achieving results.”
Continual optimization and opportunities for stakeholders to share healthy, collaborative input can help organizations grow a smaller AI pilot into a long-term strategy for becoming a more data-driven organization.
“When you are systematic and surround yourself with good people, good processes, and a comprehensive strategy, you will get the value you’re looking for out of your data,” Momin asserted.
“How you implement your analytics is just as important as what tools you choose, so don’t spend all your time on the technology without thinking about how you’re going to extract value from those systems.”