    AAMCNews

    How AI is helping doctors communicate with patients

    Hospitals are using chatbots to monitor patient health at home and to reply to patient messages. Do the benefits outweigh the risks?


    Over his decades as an oncologist, Lawrence Shulman, MD, has seen amazing advances in cancer treatments, including the development of oral medications that patients can take at home instead of coming to the hospital for days of chemotherapy. But Shulman sees the risks of those innovations as well: The powerful drugs can come with complicated dosing and scheduling and can cause significant and even life-threatening side effects if a patient does not take the right doses of pills on the right days.

    That’s where Penny comes in.

    Penny is a two-way text-messaging system, powered by artificial intelligence and run by the University of Pennsylvania’s (UPenn) Abramson Cancer Center, where Shulman is associate director. Patients receiving oral chemotherapy for gastrointestinal cancers (and who agree to participate in the texting program) are contacted daily by Penny to confirm their medication plan for the day and to ask about their physical and mental well-being, including side effects. If a patient’s texted responses raise concerns, Penny alerts clinicians to contact the patient, which can lead to a phone call, a video check-in, or an in-person appointment.
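
    In outline, each daily exchange feeds a simple triage decision. Here is a minimal sketch of that logic in Python; the reply fields, severity thresholds, and escalation labels are illustrative assumptions, not Penny’s actual rules.

        from dataclasses import dataclass

        @dataclass
        class CheckInReply:
            took_todays_dose: bool
            side_effect_severity: int  # patient-reported: 0 (none) to 4 (severe)
            mood_score: int            # patient-reported: 1 (very low) to 5 (very good)

        def triage(reply: CheckInReply) -> str:
            """Decide whether a texted reply should trigger clinician contact."""
            if reply.side_effect_severity >= 3:
                return "alert_clinician"   # may lead to a call, video check-in, or visit
            if not reply.took_todays_dose or reply.mood_score <= 2:
                return "flag_for_review"   # clinician follows up at the next routine check
            return "no_action"             # continue daily check-ins

        print(triage(CheckInReply(took_todays_dose=True, side_effect_severity=3, mood_score=4)))
        # prints: alert_clinician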

    “When we send patients home” after surgery or a round of treatments, Shulman says, “I might not see them for weeks.”

    The technology has the potential to improve patients’ health by guiding them through complex medication schedules, keeping clinicians routinely updated on each patient’s condition, and enabling clinicians to step in at early signs of trouble.

    That’s one way that academic medical centers are using artificial intelligence to improve communication with patients, in hopes of improving the quality and efficiency of medical care. Chatbots — computer programs that simulate conversations with humans — are being employed to monitor the health of pregnant women as they approach delivery dates and of orthopedic surgery patients after discharge, and to answer messages that come in through online patient portals about everything from appointments to prescriptions to symptoms.

    A few such programs have been running for several years, and their use is now spreading with the rapid evolution of chatbots built on large language models, artificial intelligence systems trained to understand and generate human language. Leaders in medicine hope these chatbots can help them respond to two trends in medical care that the COVID-19 pandemic accelerated. One is more at-home care, spurred by the pandemic’s social distancing restrictions and by technological advances that let people carry out more complicated medical tasks without coming to a doctor’s office. The other is the surge in patients messaging their doctors through online portals, which has created a demand for responses that many doctors cannot efficiently handle.

    “In the post-pandemic world, our doctors are burned out and overburdened by trying to keep up with tremendous amounts of administrative paperwork, and questions from patients that are coming to their inboxes through patient portals,” says Jeffrey Ferranti, MD, senior vice president and chief digital officer of Duke Health in North Carolina, which this month launched an AI Innovation Lab and a Center of Excellence in partnership with Microsoft. “We have to figure out ways to use these new technologies to solve some of that and to let doctors be doctors.”

    Monitoring health conditions

    The chatbot initiatives that interact with patients are primarily designed to serve two broad functions: monitor health conditions and respond to patient queries.

    Northwell Health, in New York, uses a text-based chat service that reaches out to various types of patients, including birthing persons who are at high postpartum risk, patients with chronic conditions such as diabetes and heart failure, and patients returning home after certain surgeries. The service, Northwell Health Chats, is customized to each patient’s condition, medical history, and treatment. The chatbot sends a message to start a conversation, posing a series of questions about the patient’s condition, with answer choices to click on or fill in, and it asks follow-up questions based on the answers.

    Michael Oppenheim, MD, Northwell’s senior vice president of clinical and digital solutions, gives a hypothetical example for a patient living with heart failure: The chatbot asks for the patient’s weight, and notes if it is stable or has been changing much. It asks, “Are you feeling short of breath?” If the answer is yes, it asks, “How far can you walk before getting short of breath?” It asks about mobility and swelling.

    If the answers show areas of concern, the chatbot can respond with a series of escalating steps, such as making an appointment for an office visit or alerting a clinician to call the patient for detailed follow-up and a decision about next steps.
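
    That branching logic amounts to a small decision tree. Here is a hedged sketch of how it might look in code, loosely following Oppenheim’s hypothetical; the thresholds and step names are assumptions for illustration, not Northwell’s actual protocol.

        def heart_failure_checkin(weight_change_lbs, short_of_breath,
                                  blocks_before_breathless=None, swelling=False):
            """Map a patient's chat answers to escalating next steps (illustrative)."""
            steps = []
            if abs(weight_change_lbs) > 3:  # rapid weight change can signal fluid retention
                steps.append("suggest an office visit")
            if short_of_breath:
                # follow-up question: "How far can you walk before getting short of breath?"
                if blocks_before_breathless is not None and blocks_before_breathless < 1:
                    steps.append("alert a clinician to call the patient")
                else:
                    steps.append("flag for clinician review")
            if swelling:
                steps.append("flag for clinician review")
            return steps or ["no action; continue routine check-ins"]

        print(heart_failure_checkin(weight_change_lbs=4, short_of_breath=True,
                                    blocks_before_breathless=0.5))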

    “If a patient is coming in every three months, it’s hard to know what’s going on” with their health between visits, says Anne Flynn, MD, medical director of Care Continuum Transformation at Northwell Health and an internist with patients enrolled in the chat system. Through that system, “you have more insight on a more regular basis into how they’re doing.”

    One goal of the chatbot services, Oppenheim says, is to reduce readmissions by guiding patients to stay on track with their care at home and intervening early if there are signs of worsening health problems.

    So far, the reaction among patients has been positive, say Oppenheim at Northwell and Shulman at UPenn, although thorough assessments of patient satisfaction are still to come. (The patients choose to participate in these systems, so they may already be comfortable with technological interactions.)

    “They describe it as their buddy checking in on them every day,” Shulman says.

    Responding to patient requests

    When messages come into the MyChart patient portal at UC San Diego Health, a select group of doctors has a new way of answering: Let a chatbot that is integrated with the portal scan the messages and draft responses for those that are not emergencies — such as queries about appointments, prescription refills, or test results.

    For example, some patients ask if a specific reading on recent bloodwork should concern them, says Christopher Longhurst, MD, chief medical officer and chief digital officer at UC San Diego Health. The chatbot drafts an answer that says the reading is in the normal range, or, if the reading is of concern, that someone will contact them about next steps (such as an appointment or making a plan to adjust their diet).

    Perhaps most important of all in this process: The responses are reviewed and revised by a clinician before they are sent, both to verify accuracy and to ensure that the message has a human tone rather than reading like it was generated by a machine.
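
    In outline, this is a draft-then-review pipeline in which nothing reaches the patient without a clinician’s sign-off. A minimal sketch follows, in which every function is a hypothetical stand-in rather than UC San Diego Health’s actual MyChart integration.

        def classify_urgency(message):
            """Hypothetical triage step; a real system would use a vetted classifier."""
            return "emergency" if "chest pain" in message.lower() else "routine"

        def llm_draft_reply(message):
            """Stand-in for the portal-integrated model that drafts a first-pass reply."""
            return "Draft reply to: " + message

        def clinician_review(draft):
            """Stand-in for the human step: verify accuracy and restore a human tone."""
            return draft

        def handle_portal_message(message):
            if classify_urgency(message) == "emergency":
                return None                    # routed straight to a clinician; no draft
            draft = llm_draft_reply(message)   # AI-generated first pass
            return clinician_review(draft)     # only the reviewed version is sent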

    “A clinician absolutely has to remain in the loop and be engaged with the message,” says David McSwain, MD, chief medical informatics officer at University of North Carolina Health (UNC Health), which has been rolling out a MyChart chatbot option this year. The chatbot response “is not a substitute for clinical decision-making.”

    Also to ensure accuracy, the chatbots do not draw their answers solely from what has appeared on the internet, which is how the chatbots most often used by the public (including ChatGPT) are trained. Rather, Longhurst and McSwain say, the chatbots are trained on specific medical and health databases. They can also securely consult certain parts of the patient’s electronic medical record so that their drafts account for the person’s history.
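
    One common way to limit a model to “certain parts” of a record is to whitelist the fields it may see before drafting. Here is a small sketch of that idea; the field names and whitelist are invented for illustration, since neither system’s actual mechanism is described here.

        PERMITTED_FIELDS = {"active_medications", "recent_lab_results", "allergies"}

        def build_context(record, question):
            """Assemble only whitelisted record fields into the model's context."""
            allowed = {k: v for k, v in record.items() if k in PERMITTED_FIELDS}
            lines = [k + ": " + str(v) for k, v in sorted(allowed.items())]
            return "Patient context:\n" + "\n".join(lines) + "\n\nQuestion: " + question

        print(build_context(
            {"active_medications": "capecitabine", "insurance_id": "omitted", "allergies": "none"},
            "Is my latest hemoglobin reading normal?",
        ))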

    With those processes in place, “these tools can allow a clinician to get through their ‘in’ baskets more efficiently and effectively, so that the patients receive a response more quickly,” McSwain says. “It can improve engagement” between doctors and patients.

    Responses drafted by chatbots raise some of the same quality questions that come up when students use chatbots to write essays: Do the chatbots write better than people? A study published this year, based at UC San Diego Health, took physician responses to 195 questions that people had posed on a social media platform (Reddit’s r/AskDocs), fed the questions to a chatbot to generate its own answers, then had health care professionals evaluate the quality of both sets of answers. The reviewers preferred the chatbot responses to the physician responses in 78.6% of their evaluations, based on such factors as empathy, tone, and thoroughness.

    Longhurst, one of the study authors, doesn’t see those findings as showing that chatbots are better than doctors at answering patient questions. His takeaway is that doctors under tight time constraints, such as when flooded with patient portal messages, write short, just-the-facts responses, while chatbots generate longer answers within seconds. Yet some of the chatbot answers strayed from the question or contained factual errors.

    To Longhurst, the study shows the value of using chatbots to quickly draft responses, then having doctors edit those responses and add their personal voice and expertise.

    Lessons: Keeping doctors and patients engaged

    Some of the main takeaways from these systems center on the importance of human engagement even when chatbots are doing much of the communicating. Aside from keeping doctors in the loop on messaging and clinical assessment, health systems need to supplement electronic interactions with personal attention.

    “Just creating a chatbot doesn’t create engagement,” says Oppenheim at Northwell.

    “It needs to be part of a broader engagement strategy,” he says. That strategy should include “taking the time to educate them [patients] why we’re doing this and what we’re going to do with this information. There’s a lot of paranoia” about security with electronic communications and databases. “Why am I getting these texts? Why are you [a chatbot] asking me this?”

    “If you start throwing conversation-starter chatbots at them [patients], they [the messages] are going to be ignored, they’re going to be blocked.”

    Other lessons shared by those overseeing the chatbot initiatives include:

    • Let people choose to opt into the systems, rather than including them automatically.
    • Be transparent about how patients’ information will be used and secured.
    • Be transparent about the role of chatbots in creating messages to patients. The MyChart responses sent by UC San Diego Health, for instance, include a statement that they were automatically generated, then reviewed by a human.
    • Include the clinician’s name on the message.
    • Find out how people prefer to get automated messages. At UPenn, Shulman says, the patients getting chatbot check-ins through their phones preferred texts over calls, because they can respond to a text at a time most convenient for them. They also preferred responding directly by text rather than having to log into an app.
    • Be wary of the time demands on patients. “Some of the chats were too long,” says Flynn at Northwell Health. “The chats contained too many questions or unnecessarily repeated questions on consecutive days. Patients complained and would drop out. We shortened the chats and changed how often they’re sent.”

    Ultimately, these efforts “will only be successful if three things happen,” Shulman says.

    “One, that it improves patient outcomes. Two, that it’s acceptable for the patients, that it makes their lives better. Three, that it makes the clinicians’ [professional] lives more efficient.”