FDA Proposes Framework for Artificial Intelligence-Based Devices

FDA announced plans to establish a new regulatory framework for medical devices that use advanced artificial intelligence (AI) algorithms in a discussion paper released on April 2. The document is the first step in FDA’s process to solicit feedback on AI devices, and is expected to be followed by draft guidance.

AI technology is evolving quickly, and the potential benefits of adaptive AI systems in modern medicine are vast. FDA noted that the regulatory framework for advanced AI will need to be more flexible than the existing paradigm for software as a medical device. Currently, manufacturers must submit data to FDA for review before modifying software-based devices, a requirement that is ill-suited to the rapidly changing nature of AI.

To date, FDA has approved devices with locked AI algorithms. These devices do not adapt or learn on their own; manufacturers change them by retraining the algorithm on updated data and then manually verifying and validating the new version. FDA's new focus is to outline a total product lifecycle approach for continually evolving, or adaptive, algorithms.

FDA’s proposed framework draws on several sources: the International Medical Device Regulators Forum’s risk categorization principles, FDA’s benefit-risk framework, risk management principles from FDA’s software guidance and practices from current premarket programs.

The framework places AI-based devices on a spectrum based on whether they treat or diagnose, whether they drive or merely inform clinical management, and the severity of the healthcare situation in which they are used.

Additionally, FDA’s proposed total lifecycle approach to AI will:

  1. Establish clear expectations on quality systems and good machine learning practices
  2. Conduct premarket review to demonstrate reasonable assurance of safety and effectiveness, and establish clear expectations for manufacturers to continually manage patient risks throughout the lifecycle
  3. Expect manufacturers to monitor the AI device and incorporate a risk management approach
  4. Enable increased transparency to users and FDA using postmarket real-world performance reporting to maintain continued assurance of safety and effectiveness

Manufacturers will likely be required to submit information on the algorithm’s performance, plans for modifications and their ability to manage and control risks of the modifications. FDA may also review the software’s predetermined change control plan. This would provide detailed information about the types of potential modifications based on the algorithm’s re-training and update strategy, and the associated methodology used to implement those changes in a controlled manner that minimizes risks to patients.

Orthopedic companies and surgeons are betting that artificial intelligence and machine learning will transform the delivery of care in and out of the operating room. The varied approaches in orthopedics include those taken by Imagen Technologies and HoloSurgical.

Imagen Technologies received FDA de novo clearance to market its OsteoDetect computer-aided software, designed to detect and diagnose adult wrist fractures. The software uses AI algorithms to analyze 2D x-ray images for signs of distal radius fracture, then marks the location of the fracture on the image to aid the provider in detection and diagnosis. HoloSurgical’s ARAI system was built to overcome the visualization limitations and lack of intelligent guidance offered by current robotic and computer-assisted surgery systems. The system provides a 3D real-time matched image overlay displayed directly onto the operative field, and its AI algorithms allow the technology to adapt to surgeons. ARAI has been used in human surgeries, but is not FDA cleared.

FDA’s discussion paper includes 18 questions for feedback, gathering input on which modifications should be covered by the proposed approach, what information should be submitted in premarket review applications and what role real-world evidence plays in supporting transparency for AI. No timeline was given for the forthcoming draft guidance.


Send comments regarding this article to Carolyn LaWell.
