In the dynamic landscape of modern business, regulated industries such as healthcare and the life sciences stand at a critical juncture. Emerging technologies such as generative artificial intelligence (AI) present unprecedented opportunities to reimagine how companies work, innovate, and grow. President Biden's recent Executive Order reflects the need for robust action to address the risks associated with AI. It contains a number of industry-specific recommendations for healthcare, including a focus on healthcare data quality, the use of AI to improve the safety of healthcare workers and patients, equity considerations, and the use of AI in local settings.
One of the most striking requirements is the creation of a strategic plan for AI use in health and human services within 90 days, including regulatory implications and additional policy recommendations. From a life sciences perspective, the Executive Order offers useful guidance on using AI for R&D in relation to bio-design tools and nucleic acid sequencing. It also touches on predicting potential misuse of the technology, an issue we have already seen with misinformation, fake news, and bot-generated memes flooding social media. Despite these recent developments, the lack of regulatory and legislative oversight in this space remains an ongoing challenge, especially in highly regulated sectors like healthcare.
Countries Establishing AI Policy
In June 2023, the European Union announced the forthcoming AI Act, which, if introduced, will create the world's first comprehensive AI law; this was followed a month later by guidelines from the Cyberspace Administration of China. At present, however, no regulations or legislation cover AI in the U.S., making the guidelines specified in the Executive Order all the more important. For the first time, they provide a clear statement of the U.S. government's position on AI security issues, setting out a roadmap and direction.
This development should help avoid a scenario in which individual U.S. states publish 50 separate AI plans, which would create chaos for healthcare and life sciences companies operating in the AI space. The U.S. Department of Health and Human Services (HHS) will play an important role in tracking updates to the strategic plan required by the Executive Order and assessing their ongoing implications for AI. While the Executive Order sets out a broad framework on these issues, sector-specific analysis is still needed to translate its principles into insights relevant to individual industries.
The Less People Know, the More They Trust
The trustworthiness of AI-generated information also has an important ethical interrelationship with human cognition and decision-making. Recent research conducted by Georgetown and Harvard Universities found that people typically trust AI more when they don't understand how it works.1 The academics analyzed decisions made by employees based on recommendations from two algorithms: one easy to understand, the other indecipherable. They found that employees followed the guidance of the uninterpretable algorithm more often, leading the authors to conclude that people's lack of knowledge paradoxically creates greater trust in an algorithm's recommendations.
These findings place a clear duty on companies operating in the AI and communications space to tackle AI literacy, so that AI can serve as a reliable guide to human decision-making rather than a source of cognitive bias. AI can augment human creativity, but the symbiosis of human intellect and technological advancement is key to unlocking its potential. Adopting these technologies is a transformative journey that requires understanding and a willingness to embrace new paradigms so that we can redefine what's possible in our industries.
Reference:
1. "People May Be More Trusting of AI When They Can't See How It Works," Harvard Business Review, September 2023, https://hbr.org/2023/09/people-may-be-more-trusting-of-ai-when-they-cant-see-how-it-works