Artificial Intelligence / Machine Learning Demonstration Projects 2025

Crowdsourcing ideas to bring advances in data science, machine learning, and artificial intelligence into real-world clinical practice.

AI Generated Discharge Instructions to Improve Patient Care Transitions

Proposal Status: 

The UCSF Health problem
The After Visit Summary (AVS) is the sole document patients receive after an inpatient hospitalization. It contains vital information including a summary of their hospitalization, medication changes, follow-up appointments, future labs/imaging needed, and important points of contact. For many patients we care for at UCSF with significant medical complexity and followed by multiple subspecialties, this document can become lengthy and burdensome, sometimes dozens of pages in length. 

Over the past few months, we have engaged with patients, their families, nurses, and physicians to learn about their experiences with the AVS. We have amassed a list of issues within the AVS needing improvement, with numerous stakeholders identifying artificial intelligence (AI) as a possible solution to these issues: 

  1. From a physician perspective, generating patient-facing discharge instructions (one component of the AVS) is a task of variable priority. The work is complicated by non-standardized approaches shaped by each physician's personal experience, a fragmented understanding of the individual patient's health literacy amid heavy physician turnover and discontinuity, and little time to complete the task thoughtfully amid competing clinical demands.

  2. From a nursing perspective, delivering the AVS to the patient and family requires devoting precious minutes to clarifying the details in this document. Especially in times of limited staffing, this can represent a significant time burden and can introduce frustration around areas of ambiguity in the document. Standardization, accuracy, debulking, and streamlining of information reduce questions at the point of discharge and improve nursing experience and satisfaction.

  3. From a patient and family perspective, various AVS components, including hospital course summaries, key medication changes, follow-up appointments, and return precautions, are presented in a variety of formatting styles, often creating bloat and obscuring details. In addition, components such as specialty-specific scheduling information and diagnosis-specific resources can add significant length to the AVS without regard to a patient's health literacy level or preferred communication style. In isolation, this abundance of information can be helpful, but patients and families have told us it can be exceedingly burdensome and lead them to disregard other important parts of the document. The literature shows similar findings: patient data from other institutions reveal themes of feeling overwhelmed by the amount and length of discharge paperwork and of poor clarity on follow-up plans. Ambiguity or discrepancies in the provider-created discharge instructions within the document only make this worse.

How might AI help?
Our vision is for AI to draft discharge instructions based on APeX notes and information already templated in the AVS and Discharge Summary. The large language model (LLM) would be trained to present information in a fashion that suits the patient's stated preferences regarding health communication, a practice not currently standardized at UCSF but one that represents a significant patient-centered advance in AVS design. Preferences that might be taken into account include the expected level of detail, inclusion of holistic health practices, use of visual information, and references to specific outpatient provider names rather than specialty names. The tool would use scheduled appointments and referrals placed within APeX, imaging and labs ordered, post-discharge orders, and existing progress and consult notes to display information in the best format for that particular patient.
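To make this vision concrete, the following is a minimal sketch of how structured APeX data and a patient's stated communication preferences could be assembled into a drafting prompt for the LLM. All field names, preference categories, and the `build_discharge_prompt` function are illustrative assumptions, not actual APeX fields or an existing implementation.

```python
# Hypothetical sketch: assemble an LLM drafting prompt from structured
# discharge data plus the patient's stated communication preferences.
# Field names and preference categories are illustrative assumptions.

def build_discharge_prompt(preferences, hospital_course, med_changes, appointments):
    """Return a drafting prompt that encodes the patient's preferences."""
    lines = [
        "Draft patient-facing discharge instructions.",
        f"Reading level: {preferences.get('reading_level', '8th grade')}.",
        f"Detail level: {preferences.get('detail', 'brief')}.",
    ]
    if preferences.get("use_provider_names"):
        lines.append("Refer to outpatient providers by name, not specialty.")
    lines.append(f"Hospital course summary source: {hospital_course}")
    lines.append("Medication changes: " + "; ".join(med_changes))
    for appt in appointments:
        lines.append(
            f"Appointment: {appt['clinic']} on {appt['when']} to discuss {appt['reason']}."
        )
    return "\n".join(lines)

prompt = build_discharge_prompt(
    preferences={"reading_level": "6th grade", "detail": "brief",
                 "use_provider_names": True},
    hospital_course="Admitted for new heart failure; diuresed; stable on discharge.",
    med_changes=["START carvedilol 3.125 mg twice daily", "STOP lisinopril"],
    appointments=[{"clinic": "UCSF Cardiology", "when": "May 1, 2025 9:30 am",
                   "reason": "newly diagnosed heart failure"}],
)
```

In practice, the preference fields would come from a standardized intake question set, and the structured inputs (appointments, orders, notes) would be pulled from APeX rather than passed in by hand.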

These instructions could include a brief hospitalization summary and medication changes, as well as the scheduling status of post-discharge follow-up appointments, next steps to take, numbers to call, and the likely topics at each appointment. For example, "The Cardiology appointment with UCSF Cardiology on May 1, 2025 at 9:30 am will be to discuss your recently diagnosed heart failure. You can expect to discuss these medications: carvedilol, spironolactone, losartan, and furosemide." or "The Neurology Infusion clinic will contact you for scheduling your immunosuppressive Cytoxan for vasculitis. They will complete a prior authorization with your insurance company. For questions, call 415-514-****." By providing information in a succinct, standardized manner, we believe this will achieve: 

  • Significantly shortened and more patient-friendly AVS documents for the patient/family  
  • Less time spent by physicians and nurses on duplicating and sorting information in discharge documentation 
  • Better understanding of the purpose of upcoming visits and relevance of certain medications 
  • Subsequently, better adherence to documented medical plans  
  • Greater patient activation and engagement in completing treatment plans 
  • Greater patient trust in the health system   
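Notably, appointment blurbs like the examples in the preceding paragraph could be rendered deterministically from scheduled-appointment data, reserving the LLM for the less structured content. A minimal sketch, with an assumed (not actual APeX) record schema:

```python
# Hypothetical sketch: render a patient-facing appointment blurb directly
# from a structured appointment record. Field names are illustrative
# assumptions, not the actual APeX schema.

def render_appointment(appt):
    """Build one patient-facing sentence pair from an appointment record."""
    blurb = (f"The {appt['specialty']} appointment with {appt['clinic']} on "
             f"{appt['date']} at {appt['time']} will be to discuss your "
             f"{appt['reason']}.")
    if appt.get("medications"):
        blurb += (" You can expect to discuss these medications: "
                  + ", ".join(appt["medications"]) + ".")
    return blurb

print(render_appointment({
    "specialty": "Cardiology", "clinic": "UCSF Cardiology",
    "date": "May 1, 2025", "time": "9:30 am",
    "reason": "recently diagnosed heart failure",
    "medications": ["carvedilol", "spironolactone", "losartan", "furosemide"],
}))
```

Because this path involves no free-text generation, it carries little hallucination risk; only the medication and topic lists, which draw on notes, would need LLM assistance.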
How would an end-user find and use it?
This would be a highly visible intervention. The AVS (Figure 1) is often the sole document patients receive after hospitalization, and it represents a written record of the most important points from the clinician team to patients and caregivers. The current generators (physicians) (Figure 2), messengers (nurses), and recipients (patients/families) would all immediately see a different document design, one that balances patient-centered communication with the need for a standardized workflow to ensure smooth clinical operations. 
 
Figure 1: Discharge Instructions, typically on page 2 of the AVS, where AI-generated information would go (patient view)
 
Figure 2: Discharge Instructions tab with sample AI-generated information displayed (provider view)
 
What are the risks of AI errors? What are the different types of risks relevant to the proposed solution and how might we measure whether they are occurring and mitigate them?
The various components of this AI tool will have varying degrees of risk and consequences: 

  1. Some aspects of the medication management plan (including contingency plans for symptoms such as chest pain, weight gain, headaches, or other possible harbingers of serious disease) could be misrepresented by the LLM or could miss important nuances. 

  2. Hospital course summaries can be significantly briefer than a Discharge Summary, but there is a chance of generating misunderstandings through oversimplification.  

  3. Follow-up appointments are presented as structured data but could nevertheless be erroneously reported or displayed by the LLM. Safeguards against this error include existing MyChart and phone reminder processes that occur before appointments. 

  4. Reason for visit and medications to be discussed: these data would be AI-generated from existing progress and consultant notes within APeX. There is more room for hallucination in this aspect of the tool.  

Current practice patterns emphasize direct clinician (physician, nurse) clarification of the AVS with patients as they prepare for discharge, which may protect against some errors. Importantly, human error is a notable flaw of the current AVS generation platform, with several areas (including discharge instructions) requiring manual re-entry of information already found elsewhere in the same document.  

How will we measure success?
Measurements using data already being collected in APeX:
Initial success would be measured by assessing for high-consequence errors (inaccurate medication dose, falsified hospital course information, erroneous appointment time or location) by comparing the AI-generated AVS with the current human-generated version in APeX.
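One way to operationalize this comparison is a field-by-field check of the AI-generated AVS against the clinician-generated version, tiered by consequence. The sketch below is purely illustrative: the field names, the risk tiers, and the `find_discrepancies` helper are assumptions, not an existing measurement pipeline.

```python
# Hypothetical sketch of the proposed validation: compare AI-generated AVS
# fields against the clinician-generated version and flag discrepancies by
# consequence tier. Field names and tiers are illustrative assumptions.

HIGH_CONSEQUENCE = {"medication_doses", "hospital_course_facts", "appointment_times"}

def find_discrepancies(ai_avs, human_avs):
    """Return (high, low) lists of fields that differ between the versions."""
    high, low = [], []
    for field in human_avs:
        if ai_avs.get(field) != human_avs[field]:
            (high if field in HIGH_CONSEQUENCE else low).append(field)
    return high, low

high, low = find_discrepancies(
    ai_avs={"medication_doses": "carvedilol 3.125 mg BID",
            "appointment_times": "May 1, 2025 9:30 am",
            "visit_reason": "heart failure follow-up"},
    human_avs={"medication_doses": "carvedilol 6.25 mg BID",
               "appointment_times": "May 1, 2025 9:30 am",
               "visit_reason": "cardiology follow-up"},
)
# high -> ["medication_doses"]; low -> ["visit_reason"]
```

Free-text fields (e.g., the hospital course summary) would of course need human or rubric-based review rather than exact string comparison; this sketch covers only the structured fields.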
 
Additional measurements ideally present to evaluate success of the AI tool:
Subsequent success would be measured by assessing for lower-consequence errors (e.g., an incorrect reason generated for a follow-up visit) and the burden of generating the AI-summarized instructions in the AVS. We have previously used some of these methods to inform our work on AVS improvement. 
 
  • Qualitative feedback from patients and families (e.g., from the Patient and Family Advisory Council) 
  • Quantitative measurements of follow-up appointments/labs/imaging successfully scheduled/attended versus missed 
  • Quantitative measurement of the number of phone calls/questions received regarding follow-up appointments by the Care Transitions Outreach Team (who contact patients post-discharge) 
  • HCAHPS survey data regarding patient satisfaction with the post-discharge transition 
Describe your qualifications and commitment:
We are academic hospitalists at UCSF who are motivated by our commitment to ensuring that the patients we discharge have the best possible understanding of their care plan and next steps. With allocated protected time and a desire to solve problems like this collaboratively, systematically, and thoughtfully, we are well equipped to dedicate the necessary effort to ensuring this project's success. 
 
References
Omonaiye, O., Ward-Stockham, K., Darzins, P., Kitt, C., Newnham, E., Taylor, N.F. and Considine, J., 2024. Hospital discharge processes: Insights from patients, caregivers, and staff in an Australian healthcare setting. PLOS ONE, 19(9), e0308042.

Schwarz, C.M., Hoffmann, M., Smolle, C., Borenich, A., Fürst, S., Tuca, A.C., Holl, A.K., Gugatschka, M., Grogger, V., Kamolz, L.P. and Sendlhofer, G., 2024. Patient-centered discharge summaries to support safety and individual health literacy: a double-blind randomized controlled trial in Austria. BMC Health Services Research, 24(1), 789.
 
Supporting Documents: 

Comments

Will you plan to do a validation study to compare standard human-generated to AI-generated AVS? I think you could assess both accuracy and perhaps patient preference?

An important and clinically meaningful application of AI that would improve physician and patient experience (by lowering documentation burden and increasing patient/family understanding of the hospital course and intended treatment/follow-up) and throughput (by lowering the energy it takes to discharge a patient), which would affect every discharge regardless of disposition. Patient satisfaction is also tied to these instructions, and the tool could affect adherence to the treatment plan by improving understanding and highlighting importance. If the AI tool proposes DC instructions that can be reviewed and edited by the clinician, you can also iterate on opportunities for improvement.

Good concept for an AI application, with potential to optimize physicians'/APPs' use of time in particular. I especially appreciate the vision of tailoring info to health literacy/communication preference (and potentially to other languages, for our pts with LEP), which is so often overlooked/underestimated!

Have a few thoughts/q's that might help clarify/improve the intent and impact:

1. DC Summaries are often completed only after DC Instructions have been written (for non-SNF discharges), so they wouldn't be available for the AI to use. Progress notes also have notorious copy/paste, making it difficult to tell active from old problems (especially since new/updated info from the day of DC wouldn't yet be available), and to tell which problems are relevant from a pt's perspective (ex: transient hypophosphatemia listed as a problem). One pitfall actually seems to be that AI-generated Instructions might suffer from longer, rather than briefer, summaries that a physician/APP would then need to revise/shorten.

2. How would a pt's communication preferences be assessed and standardized as options in practice, for a clinician to then incorporate? (I could imagine prompting the AI to write instructions "at a high-school level for someone newly diagnosed with XX disease," or something similar.) Would the vision of including visual info harness existing APeX educational handouts, AI-generated ones, or other sources?

3. Agree with Mark Pletcher's earlier comment on the likely need for some kind of comparison between human- vs. AI-generated AVS, as reviewed by clinician/RN/pt/family reviewers using a structured assessment tool (validated if one exists); otherwise impressions would be subjective and prone to sampling and other biases. One somewhat analogous model could be Dr. Ben Rosner's project on AI use for DC Summary generation (since DC Instructions are related extensions).

4. Are there any existing baseline survey data from physicians/RNs/pts/families on current perceptions of, and estimated time spent on, DC Instructions/AVS, to provide a benchmark against which to compare this initiative's impact?

5. Metrics of success: (a) suggest also including qualitative feedback from physicians/RNs; (b) I like the idea of quantitative measurements of impact on outpt appts/labs but would worry there are too many confounders (exs: transportation issues or factors beyond a pt's control, like an unexpected ED visit/readmission).

No need to reply here, just some food for thought, hope this was helpful!

Great project! I like the emphasis on the patient experience, as I think that sometimes gets missed in the natural tendency to build provider-focused tools. I am curious whether you envision the language/literacy level of the AVS as something that patients and/or their caregivers could modify when viewing it on CareWeb?