What Biden’s AI Executive Order Means for Healthcare and the Life Sciences


In the dynamic landscape of modern business, regulated industries such as healthcare and the life sciences stand at a critical juncture. Emerging technologies such as generative artificial intelligence (AI) present unprecedented opportunities to reimagine how companies work, innovate, and grow. President Biden’s recent Executive Order recognizes the need for robust action to address the risks associated with AI. It contains a number of industry-specific recommendations for healthcare, including a focus on healthcare data quality, the use of AI to improve the safety of healthcare workers and patients, equity considerations, and the deployment of AI in local settings.

One of the most striking requirements is the creation of a strategic plan for AI use in health and human services within 90 days, including regulatory implications and additional policy recommendations. From a life sciences perspective, the Executive Order offers useful guidance on using AI in R&D, particularly in relation to biological design tools and nucleic acid synthesis. It also touches on predicting potential misuse of the technology, a risk already evident in the misinformation, fake news, and bot-generated memes flooding social media. Despite recent developments, the lack of regulatory and legislative oversight in this space has presented an ongoing challenge, especially in highly regulated sectors like healthcare.

Countries Establishing AI Policy

In June 2023, the European Union announced the forthcoming AI Act, which, if enacted, will create the world’s first comprehensive AI law; a month later, the Cyberspace Administration of China issued its own guidelines. At present, however, no regulations or legislation cover AI in the U.S., making the guidelines specified in the Executive Order all the more important. For the first time, they provide a clear statement of the U.S. government’s position on AI security issues and set out a roadmap and direction.

This development should help avoid a scenario in which individual U.S. states publish 50 separate AI plans, which would create chaos for healthcare and life sciences companies operating in the AI space. The U.S. Department of Health and Human Services (HHS) will play an important role in tracking updates to the strategic plan required by the Executive Order and assessing its ongoing implications for AI. And while the Executive Order sets out a broad framework, sector-specific analysis is still needed to translate its principles into insights relevant to individual industries.

The Less People Know—The More They Trust

The trustworthiness of AI-generated information also has an important ethical interrelationship with human cognition and decision-making. Recent research conducted by Georgetown and Harvard Universities found that people typically trust AI more when they don’t understand how it works.1 The researchers analyzed decisions that employees made based on recommendations from two algorithms: one easy to understand and one indecipherable. Employees followed the guidance of the uninterpretable algorithm more often, leading the authors to conclude that a lack of knowledge paradoxically creates greater trust in an algorithm’s recommendations.

These findings place a clear duty on companies operating in the AI and communications space to tackle AI literacy, both to provide a reliable guide to human decision-making and to counter potential cognitive bias. AI can augment human creativity, but the symbiosis of human intellect and technological advancement is key to unlocking its potential. Adopting these technologies is a transformative journey that requires understanding and a willingness to embrace new paradigms so that we can redefine what’s possible in our industries.

Reference:

1. “People May Be More Trusting of AI When They Can’t See How It Works.” Harvard Business Review, September 2023. https://hbr.org/2023/09/people-may-be-more-trusting-of-ai-when-they-cant-see-how-it-works

  • Will Reese

Will Reese is Chief Innovation Officer at Evoke, an Inizio company. With over 26 years in life sciences, Will has worked on 30+ pharma, biotech, and device launches across therapeutic areas and the commercial lifecycle, focusing on innovation strategy that leverages emerging trends and omnichannel thinking from CPG, financial services, and B2B. He frequently speaks and leads workshops on applied innovation, branding, digital transformation, and customer experience best practices, turning great ideas into great experiences.

  • Matt Lewis

Matt Lewis is Global Chief Artificial and Augmented Intelligence Officer at Inizio Medical. With 25 years of life sciences experience, Matt specializes in partnering with key stakeholders to speed time to decision, leveraging artificial intelligence, advanced analytics, digital innovation, and bespoke consultancy. He has deep expertise in oncology/hematology, neuropsychiatry, and rare disorders, and has contributed to the launch of over 60 treatments globally.
