The Promise of Generative AI in Mitigating Bias in Continuing Medical Education

Continuing medical education (CME) is a top priority for healthcare providers: it maintains their medical licensure, broadens and updates their clinical knowledge, hones their skills, feeds their curiosity, and helps ensure patients receive the best care and outcomes.

Education companies that develop content have a list of considerations to meet when creating learning materials. These include creating content that meets desired learning outcomes, ensuring it is engaging for the target audience and fact based, and, given pharma’s significant support of CME initiatives, taking steps to reduce implicit bias. In fact, commercial support accounted for 26% of CME funding in 2022.

Pharmaceutical and healthcare companies financially support the development of CME material through educational grants, which medical education companies apply for. Once grants are awarded, it becomes the responsibility of education producers to create fair and balanced learning activities that reflect perspectives from across the published evidence on the topic. Disclosures must also state who and what organizations were involved in developing the CME activity and that the content is grant supported, but there is still a chance of implicit bias being introduced by the sponsoring companies.

Generative AI to Mitigate Bias

Artificial intelligence (AI) is everywhere in healthcare headlines, and the industry is collectively trying to understand its powers, limitations, ethics, and boundaries while also considering its real-world applications. While not yet a reality, one way generative AI could benefit healthcare in the future is by preventing bias from sneaking its way into CME materials supported by pharmaceutical companies. Certain considerations must still be worked out before deploying generative AI for this specific use case, but it could prove a promising application within the next few years.

Traditionally, CME materials are developed almost entirely by humans. In addition to bringing their own points of view on certain topics, CME developers turn to standard literature, clinical trials, meta-analyses, and expert interviews to inform activities. They must also review literature that refutes, as well as literature that supports, whatever topic is being covered.

In theory, generative AI could assist humans in producing this content by taking studies, data, and other material from a variety of differentiated sources, rather than from a commercial supporter, as input artifacts to generate lessons, learning plans, materials, and graphics. These input artifacts must be selected by human curators who are subject matter experts and iteratively processed through a generative AI model-training-test-validate cycle.

For higher iteration velocity, the input artifacts could be a mixture of curated and uncurated materials. As a precaution against “AI hallucinations,” the training process will need to adopt regular model validation and continuous monitoring. Only by fine-tuning the model through data selection and reselection, retesting, and revalidation will we identify and fix the shortcomings of generative AI’s hallucinatory output in this application.
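The generate-validate-correct cycle described above can be sketched in miniature. Everything below is a hypothetical illustration, not real CME tooling: the function names, the claim strings, and the simple rule of treating any claim absent from the curated source set as a potential hallucination are all invented for the example.

```python
# Hypothetical sketch: ground generated claims against a curated source set,
# flagging anything unsupported as a potential "hallucination."

CURATED_SOURCES = {
    "drug X reduced symptom scores in a randomized trial",
    "drug X showed no significant benefit in a smaller cohort",
}

def generate_draft():
    """Stand-in for a generative model emitting candidate claims."""
    return [
        "drug X reduced symptom scores in a randomized trial",
        "drug X is a definitive cure",  # not in any curated source
    ]

def validate(claims, sources):
    """Return the claims that cannot be traced to a curated source."""
    return [claim for claim in claims if claim not in sources]

def review_cycle():
    """One pass of generate -> validate -> correct.

    A real workflow would route flagged claims back to human curators
    for data reselection and model retraining; here we simply drop them.
    """
    draft = generate_draft()
    flagged = validate(draft, CURATED_SOURCES)
    return [claim for claim in draft if claim not in flagged], flagged
```

In a production setting the validation step would be far more sophisticated, but the loop shape — generate, check against curated evidence, send failures back for reselection — is the same.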

Similarly, generative AI could take a completed learning activity, review it, and call out where one view is presented in favor of another as a type of compliance bias check. By ensuring view fairness in generative AI, we can not only mitigate knowledge disparity but also build trust in the learning activity.
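One simple way to picture such a compliance bias check is as a balance audit over the viewpoints in a finished activity. The sketch below is an illustrative assumption on my part, not an actual compliance standard: the viewpoint tags, the statements, and the 30% threshold are all invented.

```python
# Hypothetical sketch of a "compliance bias check": count how often each
# competing viewpoint appears in a finished activity and flag imbalance.
from collections import Counter

def balance_check(tagged_statements, tolerance=0.3):
    """tagged_statements: list of (viewpoint_label, text) pairs.

    Returns the viewpoints whose share of the activity falls below
    `tolerance`, i.e. views that may be under-represented.
    """
    counts = Counter(label for label, _ in tagged_statements)
    total = sum(counts.values())
    return sorted(
        label for label, n in counts.items() if n / total < tolerance
    )

# Invented example activity: three statements favor the treatment,
# only one questions it.
activity = [
    ("supports_treatment", "Trial A showed improvement."),
    ("supports_treatment", "Registry data echoed the benefit."),
    ("supports_treatment", "Guidelines now mention the drug."),
    ("questions_treatment", "A smaller study found no effect."),
]
```

A real system would need the generative model itself to do the hard part — assigning viewpoint labels to free text — before a simple tally like this becomes meaningful.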

Underserved Populations—Does Generative AI Have a Role?

In addition to helping mitigate potential bias from commercially supported activities, generative AI can also help reduce the chance of deepening negative outcomes for already marginalized patient populations. According to the CME Coalition, CME plays a key role in advancing solutions to address disparities experienced by socially disadvantaged populations as defined by their race, ethnicity, gender, education, income, disability, geographic location, or sexual orientation.

Generative AI models can suggest ways to enhance a learning activity to reflect certain populations, or identify topics and populations that need more coverage relative to those already heavily discussed. Centering universal design of CME with a diversity, equity, and inclusion lens to focus on a wide range of populations, especially those that are underrepresented, is a crucial starting point, and generative AI could act as a sentinel to ensure these underserved and marginalized populations or topics are covered in learning material. It could also act as a checkpoint to confirm the learning material is still on track to reflect the intended outcomes.
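The “sentinel” idea can be reduced to a coverage comparison: how often does draft material mention each population or topic it is supposed to address? The sketch below is a deliberately simplistic illustration under invented assumptions — the topic list, the draft text, and the minimum-mention rule are all hypothetical.

```python
# Hypothetical sketch of the coverage "sentinel": compare mentions of
# required populations/topics in draft material against a minimum target.

def coverage_gaps(draft_sections, required_topics, minimum=1):
    """Return required topics mentioned fewer than `minimum` times."""
    text = " ".join(draft_sections).lower()
    return sorted(
        topic for topic in required_topics
        if text.count(topic.lower()) < minimum
    )

# Invented draft and requirements for illustration.
draft = [
    "Management considerations in urban clinics.",
    "Dosing guidance for adult patients.",
]
required = ["rural patients", "adult patients", "low-income patients"]
```

Real coverage analysis would need semantic matching rather than literal string counts, but the gap report — which required populations the draft never addresses — is the useful output either way.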

What Needs to Happen Before Generative AI’s Role in CME Becomes Reality?

The generative AI market for healthcare is projected to swell to nearly $22 billion by 2032. The promise of generative AI helping to mitigate bias in CME activities is huge, but execution still requires legwork before it becomes a reality. For example, generative AI systems would need to be taught certain parameters so they can understand context within the medical and healthcare domain. Systems would also need to be fed labeled data to determine what is fact based versus opinion based. They must also be taught to recognize that an article might be biased because it mentions specific drug brands or names with strongly suggestive benefits implied. And for generative AI to identify underserved populations that need to be included in materials, systems would need to be taught to review CME through a diversity and inclusion lens grounded in high-quality research.
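The fact-based versus opinion-based distinction mentioned above rests on labeled training data. As a toy illustration of that idea — with a tiny invented labeled set and a naive marker-word rule standing in for a real trained model — a sketch might look like this:

```python
# Hypothetical sketch of "fact vs. opinion" labeling: learn marker words
# from a tiny labeled set, then score new sentences. Real systems would
# use large labeled corpora and trained models; these examples are invented.

LABELED = [
    ("The trial enrolled 400 patients.", "fact"),
    ("Efficacy was 12% over placebo at week 8.", "fact"),
    ("This drug is clearly the best option.", "opinion"),
    ("Clinicians should obviously prefer this brand.", "opinion"),
]

def learn_markers(labeled):
    """Collect words that appear only in opinion-labeled sentences."""
    fact_words, opinion_words = set(), set()
    for text, label in labeled:
        words = set(text.lower().rstrip(".").split())
        (fact_words if label == "fact" else opinion_words).update(words)
    return opinion_words - fact_words

def classify(sentence, markers):
    """Label a sentence 'opinion' if it contains any learned marker."""
    words = set(sentence.lower().rstrip(".").split())
    return "opinion" if words & markers else "fact"
```

The point of the sketch is the workflow, not the method: labeled examples go in, a decision rule comes out, and that rule is then applied to unlabeled CME drafts.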

However, training AI in these ways is doable, and it’s just a matter of time before CME becomes another space where generative AI transforms the industry and benefits patients at large.

  • Casey Jenkins

    Casey Jenkins is Vice President and Head of Product at epocrates. Casey has more than a decade of experience in product leadership, where he most recently served as VP of Product at Wiley and led a cross-functional global team supporting subscription revenue of $600 million. He also previously served as VP of Product at WebMD (with oversight of Medscape Education) and SVP of Platform Product Management at Cengage Learning.
