Artificial intelligence arrives in our EHRs

In “2001: A Space Odyssey,” the epic 1968 film by Stanley Kubrick and Arthur C. Clarke, humanity makes first contact with an alien intelligence, and the course of history is irreversibly altered. Hailed as a watershed moment in science fiction, “2001” was considered far ahead of its time and raised a number of philosophical questions about what would happen if we ever encountered another form of life. Interestingly, the most noteworthy character in the film isn’t human or alien, but instead a new form of life altogether: an artificial intelligence (AI) known as the Heuristically programmed ALgorithmic computer 9000. HAL (as he is known colloquially) operates the Discovery One spacecraft, ferrying several scientists bound for Jupiter on a mission of exploration. HAL claims to be “foolproof and incapable of error,” and his superiority complex leads him to become the film’s antagonist, as he believes human error is the cause of the difficulties the mission encounters. He eventually concludes that the best way to complete the mission is to eliminate human interference. When asked by scientist Dr. David Bowman to perform a simple function essential to the survival of the crew, HAL simply states, “I’m sorry, Dave, I’m afraid I can’t do that.” Bowman is forced to disconnect HAL’s higher intellectual capabilities, reverting the computer to its most basic functions to ensure human survival.

Kubrick and Clarke may have been overly ambitious in predicting the progress of human space flight, but their call for concern over the risks of artificial intelligence seems quite prescient. Recently, billionaire entrepreneur Elon Musk (CEO of Tesla Motors and SpaceX) raised his concerns about AI, warning that, left unchecked, AI could be mankind’s final invention – one that could eventually destroy us. Other giants of the tech industry, including Bill Gates and Mark Zuckerberg, disagree. They believe AI represents tremendous promise for humanity and could usher in innovations unlike any we have ever seen.

In this column, we tend to favor the more optimistic view but also acknowledge that the proliferation of AI into our everyday existence has been alarmingly rapid in the past few years. Virtual assistants like “Siri,” “Alexa,” and “Cortana” (to name just a few) have become ubiquitous and are always listening, ready to receive our commands and find answers to our every question – even ones we don’t ask! For example, we are routinely amazed when our smartphones offer up unsolicited traffic updates or weather forecasts, anticipating our plans and behavior patterns. If you’re like most of us, you are more likely to find this helpful than terrifying, and you actually welcome AI’s presence in your personal life without fear or concern. But are you ready for artificial intelligence to enter your practice and help you care for patients? Is the exam room too sacred a space to allow such an intrusion? The time has come for us to answer those questions and many more.

A few weeks ago, we attended a national electronic health records conference where a well-known EHR vendor unveiled the new features in the upcoming release of their software. One of the most noteworthy additions was an intelligent virtual assistant, designed to help providers care for patients. While this is not the first time AI has ventured into health care (see IBM’s “Watson”), it is the first time the idea has become mainstream and fully integrated into physician workflow. Much like the virtual assistants mentioned above, this one can use voice or mouse/keyboard interaction to find clinical information, simplify common tasks, and help with medical decision-making.

The idea of artificially intelligent EHRs, exciting as it may be, will sound terrifying to some who aren’t yet ready to trust any patient care to machines. Reassuringly, while the integrated virtual assistant mentioned above can make suggestions to guide physicians to the right data or offer decision support when available, it is primarily focused on interface enhancement to improve workflow. It is not yet capable of making true clinical decisions that remove the physician from care delivery, but computers that do the diagnostic work of physicians may be closer than you think.

Research done at Thomas Jefferson University in Philadelphia and published in the August 2017 edition of Radiology1 investigated the ability of deep-learning algorithms to interpret chest radiographs for the diagnosis of tuberculosis. The best-performing networks achieved an impressive area under the curve of 0.99. While at first radiograph interpretation seems quite different from the diagnostic decision-making done in primary care, the fundamental skill required for both is similar: pattern recognition. To build those patterns, artificial intelligence requires an enormous number of data points, but that’s hardly a problem thanks to the continual collection of patient data through electronic health records. The amount of raw information available to these algorithms is growing exponentially by the day, and with time their predictive ability will be unmatched. So where will that leave us, the physicians, entrusted for generations with the responsibility of diagnosis? Possibly more satisfied than we are today.
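For readers curious what this kind of pattern recognition looks like in practice, the sketch below shows, in broad strokes, how a convolutional neural network pretrained on everyday photographs can be repurposed to sort chest radiographs into “normal” and “tuberculosis” categories. This is a minimal illustration only, not the method used in the Radiology study; the folder layout, model choice, and training settings are assumptions made for the example.

```python
# Minimal sketch: fine-tuning a pretrained convolutional network to classify
# chest radiographs. Paths, class names, and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing; X-rays are grayscale, so replicate to 3 channels
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: chest_xrays/train/{normal,tb}/*.png
train_data = datasets.ImageFolder("chest_xrays/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)

# Start from a network pretrained on natural images, replace the final layer
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: normal vs. TB

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few passes over the data, purely illustrative
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The reason transfer learning is the usual starting point is practical: labeled medical images number in the thousands, not the millions, so borrowing low-level pattern detectors learned from ordinary photographs and retraining only the final decision layer lets the network reach useful accuracy with comparatively little data.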

There was a time – not long ago – when the body of available medical knowledge was incredibly limited. Diagnostic testing was primitive and often inaccurate, and the treatment provided by physicians was focused on supporting, communicating, and genuinely caring for patients and their families. In the past 50 years, medical knowledge has exploded, and diagnostic testing has become incredibly advanced. Sadly, at the same time physicians have begun to feel more like clerical workers: entering data, writing prescriptions, and filling out forms. As artificial intelligence assumes some of this busywork and takes much of the guesswork out of diagnosis, physicians may find greater job satisfaction as they provide the skills a computer never can: a human touch, a personal and reflective interpretation of a patient’s diagnosis, and a true emotional connection. Ask this of a computer, and the response will always be the same: “I’m sorry, doctor, I’m afraid I can’t do that.”

Reference

1. Lakhani, Paras & Sundaram, Baskaran, “Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks,” Radiology. 2017 Aug;284:574-82.

Dr. Notte is a family physician and clinical informaticist for Abington (Pa.) Memorial Hospital. He is also a partner in EHR Practice Consultants, a firm that aids physicians in adopting electronic health records. Dr. Skolnik is professor of family and community medicine at Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, and associate director of the family medicine residency program at Abington Jefferson Health.
