Christoph Wald, MD, PhD, MBA, FACR
The new ACR Recognized Center for Healthcare-AI (ARCH-AI) program, a collaboration between the ACR's Informatics and Quality & Safety programs, provides a blueprint for the safe and effective implementation of AI. The Bulletin spoke with Christoph Wald, MD, PhD, MBA, FACR, vice chair of the ACR BOC and chair of the ACR Commission on Informatics, who explained how the College's latest recognition program can assist with AI implementation programs in practice.
What is ARCH-AI and how can it help ensure radiology practices use AI tools to their full potential?
The ARCH-AI program includes well-known and emerging best practices for radiology facilities to use AI safely and effectively. It is the first national quality assurance program for AI in medical imaging. The program explains which expert consensus-based building blocks of infrastructure, processes and governance are currently recommended for AI implementation. Radiology practices completing the program will receive an ACR recognition badge to display proudly in their waiting rooms and lobbies, showing their communities, patients, payers and referring providers that they are committed to integrating AI in a responsible manner.
What are some of the challenges radiologists face when navigating today’s AI landscape?
There are many challenges in navigating the AI landscape of commercially available, FDA-cleared products. First, practices need to identify and prioritize the problems they are trying to solve, whether that is prioritizing a worklist based on the likelihood that an urgent condition is present or finding a solution to detect pulmonary nodules. Second, and perhaps most importantly, the practice needs to evaluate the performance of AI tools on its local data and patient population to ensure that performance in the local environment is acceptable. Often, there are multiple products with similar functions, in which case other factors must be considered before choosing an AI solution. These considerations include cost and the ease of implementation and integration into radiologist workflow (including the user interface where applicable). These are just some of the factors to weigh when navigating AI product selection.
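To make the second point concrete, here is a minimal sketch of what a local evaluation might look like, assuming the practice has assembled a set of locally acquired studies with ground-truth labels from radiologist review. The field names and acceptance threshold are illustrative only and are not part of any particular product or ACR tool.

```python
# Minimal sketch of evaluating an AI tool on local, labeled cases.
# Assumes each case has a ground-truth label (from local radiologist review)
# and the candidate tool's binary output; names and the threshold are illustrative.
from dataclasses import dataclass

@dataclass
class Case:
    ground_truth_positive: bool  # established by local review
    ai_flagged_positive: bool    # output of the candidate AI tool

def local_performance(cases: list[Case]) -> dict[str, float]:
    tp = sum(c.ground_truth_positive and c.ai_flagged_positive for c in cases)
    fn = sum(c.ground_truth_positive and not c.ai_flagged_positive for c in cases)
    fp = sum(not c.ground_truth_positive and c.ai_flagged_positive for c in cases)
    tn = sum(not c.ground_truth_positive and not c.ai_flagged_positive for c in cases)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
    }

# Compare against an acceptance threshold agreed on by local governance.
metrics = local_performance([Case(True, True), Case(False, False), Case(True, False)])
acceptable = metrics["sensitivity"] >= 0.80  # threshold is a placeholder
```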
Is there a misconception among radiologists about the purpose and functionality of AI tools?
The regulatory framework for AI in the U.S. considers commercially available, standalone AI products to be “software as a medical device,” which the FDA defines as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.” Products are grouped into risk classes based on the potential impact of the output on clinical care, which determines the rigor of the premarket evaluation of the device. The “medical purpose” of the device (triage, detection, diagnosis or a combination) drives further classification. Currently, products in the “computer-aided triage and notification” (CADt) category of FDA clearance are the most common in the U.S.
These devices are trained to identify an imaging finding that can be (but is not always) associated with an important medical condition relevant to patient care. Users can prioritize worklists or care based on that output. Users of CADt devices need to remember that the implication of the AI result is asymmetric. In other words, a flagged study carries an increased probability of the condition, but a “no-flag” result does not mean a lower likelihood of that condition, even if the device has been tuned to maximize sensitivity. A “no-flag” result should be interpreted as standard risk (and therefore implies standard of care). This differs from CADe and CADx devices, which are designed and cleared for the detection and diagnosis of disease, respectively.
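One way to picture that asymmetry is in worklist logic: a positive flag should elevate a study's priority, while the absence of a flag should leave the study at its standard priority rather than demote it. The short sketch below is purely illustrative and is not drawn from any particular PACS, worklist or triage product.

```python
# Illustrative worklist logic reflecting the asymmetric meaning of a CADt output:
# a positive flag raises priority; "no flag" keeps the study at standard priority
# and is never used to demote or defer it. All names here are hypothetical.
from enum import IntEnum

class Priority(IntEnum):
    STANDARD = 1
    ELEVATED = 2

def triage_priority(cadt_flagged: bool, baseline: Priority = Priority.STANDARD) -> Priority:
    if cadt_flagged:
        return max(baseline, Priority.ELEVATED)  # flagged: read sooner
    return baseline  # not flagged: standard of care, not "lower risk"

# Flagged studies sort ahead; unflagged studies keep their normal place.
worklist = [("study_A", True), ("study_B", False), ("study_C", False)]
ordered = sorted(worklist, key=lambda s: triage_priority(s[1]), reverse=True)
```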
To guide appropriate utilization, radiologists need to take a proactive role in promoting a better understanding, within our practices, of the data sets an AI technology was trained on and what it is cleared to do. Importantly, only one of the currently available, FDA-cleared products in the U.S. is tested and/or cleared for autonomous function. It is critical that a qualified intended user, such as a radiologist, consider the output of the technology and incorporate it into their workflow and patient care.
What was the impetus for ARCH-AI?
ACR member radiology practices run the gamut from those leading in AI implementation to those that have not yet incorporated the technology. ARCH-AI was created based on input from the pioneers to benefit those who have yet to implement the technology. ARCH-AI is designed to remind local practices to develop AI governance: incorporating cybersecurity checks, structuring AI model selection, performing acceptance testing and user training, and monitoring AI model performance.
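As a rough illustration only, the governance elements named above could be tracked per deployed model in a simple record like the sketch below; the fields are a hypothetical summary and not the ARCH-AI attestation or any ACR-defined schema.

```python
# Hypothetical per-model governance record mirroring the elements named above;
# an illustrative structure only, not the ARCH-AI attestation or an ACR schema.
from dataclasses import dataclass
from typing import Optional
import datetime

@dataclass
class AIGovernanceRecord:
    model_name: str
    cybersecurity_review_passed: bool = False
    selection_rationale: str = ""                  # why this model/vendor was chosen
    acceptance_test_date: Optional[datetime.date] = None
    users_trained: bool = False
    monitoring_plan: str = ""                      # how ongoing performance is tracked

    def ready_for_clinical_use(self) -> bool:
        return (self.cybersecurity_review_passed
                and self.acceptance_test_date is not None
                and self.users_trained
                and bool(self.monitoring_plan))
```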
Along with these recommendations, the ACR is also creating infrastructure that practices can use to support responsible AI implementation. These tools include an AI registry and the ACR Connect software, which facilitates processing and relay of AI transactional data to the registry. Incorporating these programs will make it easier to implement real-world AI monitoring and to participate in ARCH-AI.
What other ACR offerings are available to complement ARCH-AI?
The ACR maintains AI Central (AICentral.org), a freely available online resource that helps practices discover available commercial solutions. This ACR-curated library of FDA-cleared imaging AI allows convenient online review of candidate products by anatomy, disease or vendor criteria. Vendors increasingly provide transparent information about their algorithm training, which can help with selecting a product likely to perform well in your local population. Please note that AI products deployed within imaging equipment, classified as “Software in a Medical Device,” are not currently cataloged in AI Central. Recently, AICentral.org added a new category of AI platform companies, which offer efficiencies in contracting, testing and deploying AI software.
The ACR also created the Assess-AI registry, which was tested with a pilot set of sites and officially launched at RSNA 2024. The registry is designed to assist with local acceptance testing and continuous monitoring of deployed AI. De-identified data from each AI transaction is transmitted into the central Assess-AI repository using the freely available ACR Connect software, capturing real-world data from the AI solutions, the image metadata and the radiologist reports over time. The ACR completes an automated analysis and provides reports to the practice. Thus, local AI performance can be assessed over time and compared against benchmark data from other practices for the same use case.
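For a sense of what a de-identified, per-transaction record of this kind might contain, the sketch below shows one possible shape; the field names are hypothetical and do not represent the actual Assess-AI or ACR Connect schema.

```python
# Hypothetical example of a de-identified AI transaction record of the kind a
# monitoring registry could capture; not the actual Assess-AI / ACR Connect schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AITransaction:
    site_id: str                 # practice/site identifier, no patient identifiers
    use_case: str                # e.g., "intracranial hemorrhage triage"
    model_version: str
    exam_modality: str           # image metadata relevant to the use case
    ai_output_positive: bool     # what the AI solution reported
    radiologist_positive: bool   # derived from the final radiologist report

record = AITransaction("site_001", "intracranial hemorrhage triage",
                       "1.4.2", "CT", True, True)
payload = json.dumps(asdict(record))  # serialized for relay to the registry
```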
Why is it important for practices using AI to track and monitor its performance?
The training and development of currently available AI is typically done on varying amounts of data obtained by the AI company. Testing and evaluation of the trained product are done on a held-out subset of this data that the product was not exposed to during training. Before clearance by the FDA, an AI company must demonstrate the efficacy and safety of its product during such premarket testing.
When an FDA-cleared product is used in a local practice environment on local imaging data, the product's performance chiefly depends on the generalizability of the underlying model and on how similar the local data are to the data the model was trained on. As such, FDA clearance indicates that premarket testing and performance were acceptable, but it does not guarantee that the product works the same way on a different population. Local practices have to verify this performance themselves, so it is critically important to ascertain local performance during initial acceptance testing and over time. Radiology practices may change imaging equipment and protocols, or acquire additional imaging sites over time; this can change the input data sent to an existing AI product and may affect its performance.
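Here is a toy illustration of that ongoing check, assuming the practice logs how often the tool flags studies each month and compares that rate to the one observed at acceptance testing; the numbers and the tolerance are invented for illustration and would be set by local governance.

```python
# Toy drift check: compare recent monthly flag rates against the baseline rate
# established at acceptance testing. All numbers and the tolerance are invented.
def drifted(baseline: float, recent: float, tolerance: float = 0.05) -> bool:
    return abs(recent - baseline) > tolerance

baseline_flag_rate = 0.12  # fraction of studies flagged during acceptance testing
monthly_flag_rates = {"2025-01": 0.13, "2025-02": 0.19, "2025-03": 0.21}

alerts = [month for month, rate in monthly_flag_rates.items()
          if drifted(baseline_flag_rate, rate)]
# A persistent alert (e.g., after a scanner or protocol change) would prompt
# re-running acceptance testing on current local data.
```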
Are there other AI-related plans in the pipeline?
The ACR is working to continuously improve AICentral.org, specifically by updating the products listed on the site and enhancing the site's search functionality. ACR members have also asked for community-based ratings and feedback functionality for AI products in AI Central, and the ACR is evaluating the feasibility of developing such an experience.
Except for a retinopathy detection algorithm in ophthalmology, FDA-cleared AI technology requires a human (a qualified end user) to stay in the loop and take responsibility for the ultimate patient care facilitated with the help of AI. As AI technology evolves, so will the ARCH-AI program. The program is designed to incorporate user group feedback into future iterations, and the ACR is actively shaping the program to adapt to changes in the marketplace. It is likely that generative AI-based models will arrive on the scene soon, and the ACR will partner with leading institutions and industry stakeholders to drive an understanding of the risks and benefits of this latest AI technology and how it might be incorporated into regulatory frameworks and our clinical enterprises in the future.
Learn more about the ACR Data Science Institute® (DSI) AI Central, which houses FDA-cleared imaging AI products, and about Assess-AI, the ACR's AI registry. These ACR programs assist practices in meeting ARCH-AI standards.