For AI to flourish in healthcare, the industry must focus on the “algorithmically underserved,” said John D. Halamka, M.D., M.S., president of Mayo Clinic Platform, at the HLTH 2022 conference this month in Las Vegas. Giving visibility to the algorithmically underserved — individuals who do not generate enough data, or are not well enough represented in health data sets, for AI to make a determination — is just one requirement for overcoming the prospect of AI bias in healthcare. And identifying and fixing sources of AI bias must be a focus area for an industry striving for ethical and equitable AI development, Halamka said.
For example, what if there was a national registry that hosted all the metadata needed to power the responsible development of algorithms for use in healthcare? Building this kind of standardization into the relatively black box nature of AI development is among the priorities of The Coalition for Health AI (CHAI), which launched earlier this year. In addition to his leadership of Mayo Clinic Platform, Dr. Halamka is a co-founder of CHAI, alongside Brian Anderson, M.D., chief digital health physician at MITRE.
What is CHAI?
CHAI’s mission, according to the organization, is to provide guidelines for the ever-evolving landscape of health AI tools to ensure high-quality care, increase credibility amongst users, and meet healthcare needs. The coalition serves to identify where standards, best practices, and guidance need to be developed for AI-related research, technology, and policy.
CHAI was formed by Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, Stanford Medicine, UC Berkeley, UC San Francisco and others, and is being observed by the U.S. Food and Drug Administration, the National Institutes of Health, and the Office of the National Coordinator for Health IT.
One of CHAI’s primary goals is to help health IT buyers make informed decisions about the AI solutions they choose, based on academic research and using vetted guidelines. The coalition’s toolsets and guidelines are also designed to ensure underrepresented populations are not adversely affected by algorithmic bias. CHAI’s in-development “Guidelines for the Responsible Use of AI in Healthcare” will intentionally foster resilient AI assurance, safety and security, says the organization.
Below, CHAI co-founders Dr. Halamka and Dr. Anderson and CHAI member Suchi Saria of Bayesian Health discuss the importance and timeliness of CHAI’s mission, and share how the organization plans to prioritize patient safety, reliability, equity, transparency, and trust in the healthcare AI development process.
Q: What Are The Goals Of The Coalition For Health AI, And Why Is Its Existence So Important?
“The goal of the Coalition for Health AI (CHAI) is to develop voluntary industry ‘guidelines and guardrails’ to drive adoption of credible, fair and transparent health AI systems.
The power of machine learning to unlock better, more effective healthcare delivery at scale is unquestioned. We have long moved past the ‘should we use AI in healthcare’ discussion to ‘how can we design a framework for the responsible use of AI in healthcare, guided by the principle of health equity?’
To answer the latter question, we’ve assembled a community of stakeholders – including academic health systems and AI experts – to discuss crucial concerns like patient safety and algorithmic bias at workshops.
These robust discussions are being held in collaboration with federal observers such as the U.S. Food and Drug Administration (FDA), Office of the National Coordinator for Health Information Technology (ONC), National Institutes of Health (NIH), and White House Office of Science and Technology Policy.
What makes the work of CHAI important and extremely relevant – beyond its mission – is timing. We are at an inflection point where AI is poised to take off exponentially if we can come together to harmonize standards and reporting for health AI and educate end-users on how to evaluate these technologies to drive their adoption. We have limited time to develop and recommend shared standards and industry practices that ensure that all communities will benefit in the future.”
“AI as a field is evolving very rapidly. As a result, there is variable expertise amongst groups in how to go about implementing it correctly and evaluating whether what they’ve implemented is working. There is significant opportunity to accelerate AI adoption by sharing best practices and developing guardrails that the broader community (government, payor and provider groups) can benefit from.”
Q: What Are The Biggest Challenges With AI’s Application In Healthcare Today, And How Will The Coalition’s Work Help Overcome Some Of These?
“Understanding and trust are certainly two of the biggest issues today from a patient’s perspective.
For example, as a physician I have to ask myself, ‘does my patient understand the role that machine learning is playing in their healthcare?’ And, ‘should my patient trust that an algorithm has been trained on data that reflects their demographics?’
Ultimately, I can’t expect my patients to have confidence in AI if I myself don’t have confidence in how AI systems and models that help define policy and interventions are built and evaluated. And right now, it’s the lack of consistent standards, guidelines and transparency that is a potential threat to confidence and adoption.
Too often, frameworks are discussed or deployed only after widespread adoption of a technology. CHAI is trying to get ahead of the curve while health AI is still in its infancy, to help determine the rules of the road and ensure the most positive impact for the most patients possible.”
“CHAI brings together a diverse group of individuals with deep expertise in AI and health AI, its translation to different use cases, knowledge around regulation and reimbursement, and various policy levers to accelerate safe adoption.”
Q: What Is The Coalition Hoping To Achieve Over The Next 2-3 Years? What Would A “Successful” Outcome For The Coalition Look Like?
“CHAI is moving fast. After a summer and autumn of meetings – both virtual and in-person – we are rapidly finalizing our framework and recommendations to share publicly by year-end. While the short-term goal is to publish a framework, think of it as a 1.0 offering.
Given the speed at which AI is advancing, we fully expect that the initial framework and recommendations will need to adjust and calibrate as we study the care impact and develop actionable data.
However, one thing that will not change is CHAI’s prioritization of principles around patient safety, reliability, equity, transparency, and trust.”