Today, developers of innovative medical devices are increasingly using artificial intelligence (AI) and machine learning (ML) technologies to derive important insights, with the promise of transforming the delivery of healthcare. Yet concerns regarding the transparency of AI/ML-enabled devices, or the degree to which information about such devices is communicated to stakeholders, threaten not only regulators' perceptions of the safety and effectiveness of such devices, but also the trust that patients and healthcare providers place in these technologies.
On October 14, 2021, the FDA convened a public workshop to gather stakeholder feedback regarding the role of transparency in enhancing the safe and effective use of AI/ML-enabled devices and potential mechanisms for information sharing. Below are three key considerations from the workshop that developers and marketers should keep in mind:
1. Fostering comprehension: Patients and healthcare providers both expressed concern about the availability of information to help them understand the proper use and limitations of AI/ML-enabled devices. Patients emphasized the need for enough information to have informed discussions with their caregivers about the use of the device and its limitations. Providers emphasized the need for explainable AI/ML technologies and requested sufficient information to enable informed discussions with their patients about how the technology works, what the results mean, the appropriate scope of use, and how to know when the device is not working correctly. Both patients and providers also emphasized the need to tailor the information provided so that it is understandable and usable for the particular stakeholder receiving it.
2. Addressing equity in healthcare: Patients and healthcare providers also expressed concern that injustice and systemic discrimination in the healthcare system could bias AI/ML technologies. Patients and patient advocacy groups urged that AI/ML technologies be tested in diverse populations and in a variety of real-world use contexts. Providers echoed this sentiment, calling for health equity to be built into data collection efforts and for sufficient evaluations to be conducted around race and ethnicity, sex-specific data, and disabilities and comorbidities; where such evaluations have not been conducted, the known limitations of the AI/ML algorithm should be clearly communicated to stakeholders.
3. Protecting patient data: Privacy and security safeguards for patient information were also significant topics of discussion at the workshop. Patients and patient advocacy groups underscored the importance of safeguarding personal and health data and expressed the need to understand how patient privacy is protected when data are collected for and used by an AI/ML-enabled device.
Companies developing and marketing AI/ML-enabled devices should ensure that transparency is addressed throughout the product lifecycle, from the earliest stages of device design, through the human factors and usability validation process, to the creation of marketing materials for patients and healthcare providers. Promoting transparency across these three key areas can support the safe and effective use of AI/ML-enabled devices, while also fostering trust in, and greater adoption of, AI/ML technologies.