Healthcare organizations have their plates full: evaluating the latest cutting-edge tools, finding budgets to support them, and recruiting staff with the capacity to oversee them. One of the trending tools on every administrator’s mind is preparing for and implementing Artificial Intelligence (AI).
Read any article or watch a news clip about the subject, and common themes emerge about what AI is projected to bring to the table; you’ll hear words like:
- Predictive actions
- Preventative care
The bottom line is that AI is seen as a benefit to the patient, first and foremost. With continuous connectivity and the capacity to learn, AI would have the ability to improve clinical efficiency by collecting, aggregating, and analyzing patient data; the results of this learning can then be used to improve patient adherence and engagement, and to proactively advise on next steps. AI in a clinical setting would prove its effectiveness as it begins to shift the healthcare system away from reactionary care and toward preventative care.
AI achieves this by helping diagnose patients faster, so the care process can begin before conditions become grave and treatment becomes expensive. One day in the future, physicians may have patients receive toolkits every few months for virtual checkups that produce convenient and timely diagnostics.
Given that AI’s value is clear, the next question is how exactly an organization should implement AI in its clinical setting. Whether an organization is considering developing its own AI algorithms or leveraging a vendor product, some worthwhile areas to consider include:
Identifying a Need & Starting Small
Artificial intelligence isn’t something that can be loosely dropped into a workflow or service line. AI must be introduced methodically, one use case at a time, with tracking and measurement protocols in place.
Starting small can be a positive thing as it relates to AI. Machine learning tools already come with skepticism, so organizational advocates will need to prove their case with a well-defined program before receiving acceptance from a larger audience.
Organizations must create a thorough plan for how, when, and why adding AI to an existing clinical or operational workflow makes sense. There must also be careful observation of the immediate and long-term consequences of doing so, especially as they relate to patient safety.
Deciphering & Choosing Transparent Vendors
AI is generally too complex for laypeople to understand. AI efforts focus on synthesizing enormous volumes of data more quickly and comprehensively than an average brain could fathom.
Steve Griffiths, PhD, Senior Vice President and Chief Operating Officer of Optum Enterprise Analytics said, “A lot of organizations think that you bring in a technology vendor, you implement a data warehouse, and voila – you’re done. But if you don’t think about everything that goes along with it – training your people, engaging them to use the tools correctly, optimizing the workflows – then how are you going to integrate the technology into your organization?”
Griffiths suggests carefully evaluating what wrap-around services can be included in a vendor contract to ensure long-term, comprehensive support, even if it’s for a short-term project. These a la carte services could include on-demand customer service, an implementation expert on site during go-live, or cloud storage to supplement on-premises capabilities.
How analytics are implemented is just as important as which tools are chosen, so don’t invest heavily in the AI technology itself without strategizing how to obtain value from the platform.
Cybersecurity: Careful Considerations
In an age of massive data breaches and debilitating ransomware attacks, healthcare organizations must be vigilant about their cybersecurity. Machine learning tools can, in theory, outsmart hackers by identifying suspicious activity across complex infrastructure.
With new threats emerging constantly, using AI to anticipate or identify attacks that look different from previous attempts to break into a system could help avoid exposing key patient data.
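As a simplified illustration of that idea, unusual activity can be flagged by how far it deviates from a learned baseline. The function name, access counts, and threshold below are invented for illustration; a production system would use a trained machine-learning model rather than this simple statistical stand-in:

```python
# Hypothetical sketch: flag record-access events that deviate sharply from a
# baseline. A real deployment would use a trained anomaly-detection model;
# this z-score approach is a simplified stand-in for illustration only.
from statistics import mean, stdev

def flag_anomalies(baseline_counts, new_counts, threshold=3.0):
    """Return indices of new observations more than `threshold` standard
    deviations from the baseline mean."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return [i for i, x in enumerate(new_counts)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Baseline: typical nightly record-access counts for a clinician account
baseline = [40, 42, 38, 45, 41, 39, 43, 44, 40, 42]
# New activity: one session suddenly pulls 400 records
print(flag_anomalies(baseline, [41, 400, 43]))  # → [1]: the spike is flagged
```

The appeal of a learned model over a fixed rule is exactly the point made above: it can surface activity that merely looks different from the norm, rather than matching a known attack signature.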
However, the bad news is that evidently hackers have access to the same tools as hospitals. A report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, states, “Malicious use of AI could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).”
While AI and machine learning hold great promise for security, the underlying reality is that both hospitals and hackers have access to the same technologies. An important step in any AI implementation, then, is taking every feasible measure to prevent security attacks.
Patient Engagement and Management
From chatbots to consumer profiling, the use cases for patient-facing AI are largely focused on improving engagement and reducing non-compliance with chronic disease management tasks.
Healthcare organizations can use machine learning to create personalized interactions that “meet patients where they are”; this enables providers to gather and transmit important data to improve the long-term management of patients’ conditions.
Predictive Analytics and Risk Scoring
Population health management is a top priority for nearly all healthcare organizations. AI is quickly becoming a critical tool for identifying risks, gauging the propensity for chronic disease, and accounting for the social determinants of health.
Machine learning can aid in the development of comprehensive risk scores by proactively identifying trends in lab results, diagnoses, or other clinical and social data.
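As a rough sketch of how such a risk score might be assembled, the example below combines a few clinical and behavioral indicators into a single number. The features, weights, and cutoffs are invented for illustration; in practice, a model such as logistic regression would learn the weights from historical outcomes:

```python
# Illustrative risk score built from hand-picked indicators. All thresholds
# and point values here are hypothetical; real scores are derived from
# models trained on historical clinical and social data.

def risk_score(patient):
    """Combine a few clinical and social indicators into a 0-100 score."""
    score = 0
    if patient.get("a1c", 0) >= 6.5:                 # elevated hemoglobin A1c
        score += 30
    if patient.get("bmi", 0) >= 30:                  # obesity
        score += 20
    if patient.get("smoker", False):                 # smoking status
        score += 25
    if patient.get("missed_appointments", 0) >= 2:   # a behavioral/social signal
        score += 25
    return min(score, 100)

print(risk_score({"a1c": 7.1, "bmi": 32, "smoker": True,
                  "missed_appointments": 3}))  # → 100 (high risk)
print(risk_score({"a1c": 5.4, "bmi": 24}))     # → 0 (low risk)
```

The advantage of a machine-learned version over this hand-tuned one is that it can surface the trends mentioned above, patterns in labs and diagnoses that no one thought to encode as an explicit rule.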
Natural language processing tools could help to pinpoint socioeconomic terms concealed in free text documentation.
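A toy version of that idea can be sketched with simple keyword matching. The term lists and function name below are assumptions made for illustration; real clinical NLP relies on trained language models rather than fixed vocabularies:

```python
# Simplified stand-in for NLP-based detection of social-determinant terms in
# free-text notes. The categories and phrases are illustrative assumptions.
import re

SDOH_TERMS = {
    "housing": ["homeless", "eviction", "unstable housing"],
    "food": ["food insecurity", "skips meals"],
    "transport": ["no transportation", "missed bus"],
}

def find_sdoh_mentions(note):
    """Return {category: [matched phrases]} found in a clinical note."""
    note_lower = note.lower()
    hits = {}
    for category, phrases in SDOH_TERMS.items():
        found = [p for p in phrases if re.search(re.escape(p), note_lower)]
        if found:
            hits[category] = found
    return hits

note = "Patient reports food insecurity and has no transportation to appointments."
print(find_sdoh_mentions(note))
# → {'food': ['food insecurity'], 'transport': ['no transportation']}
```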
Bottom Line: A Win-Win for Patients
Patients are arguably the biggest beneficiaries of clinical AI, because they’re positioned to improve their adherence, wellness, and quality of life.
For example, a woman’s implantable cardioverter defibrillator (ICD) today is equipped with Bluetooth; the information uploads to a server while she’s sleeping, and her practitioners can review the data remotely. If she is feeling poorly, she simply hits a button to record a reading, which is then sent electronically to her doctor’s office. The ICD will pace and defibrillate her heart as needed, but she literally has to go into an electrophysiologist’s office to have the device adjusted when necessary.

With AI’s future capabilities, it’s very possible the device would not have to be surgically implanted, and it could self-adjust rather than requiring a physician to supervise the adjustments. Another possible AI improvement: a clinician wouldn’t need to pick up the phone and tell the patient to stop exercising when her heart rate drops too low; instead, a wearable device could alert her to stop and bring her heart rate back to normal. And yet another potential benefit: today, when the patient travels away from the ICD’s home base station that transmits her data, the physician’s office cannot read the data remotely; AI-enabled connectivity could allow the data to be read regardless of how close she is to the base.
AI’s possibilities promise encouraging results for patients, particularly those with chronic conditions. Meanwhile, healthcare systems will need to be methodical in selecting where and how to implement AI; there is too much at stake for patient safety to handle it any other way.