Artificial Intelligence / Machine Learning Demonstration Projects 2025

Crowdsourcing ideas to bring advances in data science, machine learning, and artificial intelligence into real-world clinical practice.

Optimizing New Patient Self-Scheduling Pathways with AI/ML

Proposal Status: 

Section 1: The UCSF Health Problem

At UCSF and other leading academic medical centers, the referral intake and triage process for new patients is strained, leading to long delays and high rates of incomplete referrals. A review of more than 100,000 referral scheduling attempts showed wide variability in wait times—from 8 to 73 days—with an average of 22 days [1].  Additionally, ~50% of referrals are never completed, largely due to operational bottlenecks and limited triage capacity [2,3].

These delays are not just administrative: they directly affect patient outcomes. In a consecutive cohort study of 648 patients with squamous cell carcinoma, the majority of patients developed significant signs of tumor progression within a wait time of 4 weeks [4]. For time-sensitive conditions like lung, kidney, and pancreatic cancer, each week of delayed treatment increases mortality risk by 1.2% to 3.2% [5]. In early-stage cases, delays in cancer care can raise 5-year and 10-year mortality rates by as much as 47.6% and 72.8%, respectively [5].

A key challenge lies in how referrals are routed and scheduled. At UCSF, the status quo workflow is labor-intensive. Referral documents arrive from outside providers as faxes or PDFs that must be manually reviewed. Currently, this triage process falls to practice coordinators, who work hard to manage a high volume of incoming cases while juggling many responsibilities. However, they typically do not have clinical training, and determining the right subspecialist and urgency level for complex cancer cases often requires input from nurse navigators or physicians. Even when marked urgent, referrals can take days to reach the right destination—such as in head and neck oncology, where the average triage time is over 4 days.

To improve access, UCSF has introduced a patient self-scheduling portal for some specialties. While this is a step in the right direction, the current system is limited in scope and operates with inconsistent oversight. It relies on a series of 3 generic questions and lacks integration with predictive models or clinical context. As a result, it does not capture the complexity of real-world triage, which can lead to patients being misrouted. Specialty clinic slots may also be filled suboptimally by patients who would have benefited from additional work-up (e.g., an APP visit) or further medical record reconciliation before the visit. These challenges lead to financial losses, reduced market competitiveness, and prevent subspecialists from focusing on high-value, “top-of-license” care.

While past efforts—like hiring additional staff or sending reminders—have helped incrementally, they don’t address the core issue: patients need better tools to navigate the referral process, and staff need support to triage more efficiently.

Our solution is designed to bridge this gap by embedding intelligent triage and decision support into the patient self-scheduling experience—helping patients land in the right clinic faster while reducing the load on care teams and improving access to timely treatment. Importantly, this project also aligns with 1 of the 4 UCSF Ambulatory Services Health IT Portfolio Initiatives for FY2025. 

Section 2: How might AI help? 

We aim to enable safe, accurate, and, above all, intelligent self-scheduling for new specialty patients (Figure 1). Our solution is an AI-powered triage algorithm, developed in collaboration with IIAM Corporation, that builds on the advanced capabilities of their existing software platform. Once deployed, our algorithm will assist with the patient self-scheduling workflow, thereby reducing dependence on manual triage and improving the timeliness of care.

Figure 1. UCSF Self-Scheduling Workflow and Proposed Intervention. The top row shows UCSF’s current online self-scheduling portal. Patients are prompted to answer 3 generic questions about whether they have cancer, but many patients are unsure of their diagnosis. In the bottom row, we propose embedding IIAM’s algorithm into the online patient self-scheduling portal and using the associated referral documents to surface provider names and appointment availability matched to the suspected etiology and urgency, maximizing top-of-license work and resource utilization.

Our algorithm accepts previous medical records (e.g., clinical notes, radiology/pathology reports) as input data, including external documents. To ensure a smooth workflow, our algorithm will be flexible enough to accept multiple file formats, including PDFs, images, and plain text files. The uploaded information is then reviewed by machine learning models to identify the patient’s current medical need and output the best-matching subspecialty physician(s). For instance, a patient with a suspicious thyroid nodule will be matched with the appointment times of head & neck surgeons specializing in thyroid surgery. This approach will increase the rate of correct patient-physician matching and one-contact resolution, reduce time to treatment, increase surgical conversion rates in clinic, and maximize “top-of-license” work among clinical providers.
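
To make the intake step concrete, the sketch below illustrates a multi-format document intake and subspecialty-matching flow. The function names (extract_text, match_subspecialty) and the keyword rules are hypothetical placeholders; the deployed IIAM models are trained classifiers, not hand-written rules.

```python
# Illustrative sketch only: extract_text() and the keyword rules below are
# hypothetical stand-ins for OCR/PDF parsing and the trained IIAM models.
from dataclasses import dataclass
from pathlib import Path

SUPPORTED_SUFFIXES = {".pdf", ".png", ".jpg", ".jpeg", ".txt"}

@dataclass
class TriageResult:
    subspecialty: str
    urgency: str

def extract_text(path: Path) -> str:
    """Placeholder: read plain text directly; PDFs and images would go through OCR."""
    if path.suffix.lower() not in SUPPORTED_SUFFIXES:
        raise ValueError(f"Unsupported file type: {path.suffix}")
    if path.suffix.lower() == ".txt":
        return path.read_text(errors="ignore")
    return ""  # OCR / PDF parsing omitted in this sketch

def match_subspecialty(note_text: str) -> TriageResult:
    """Toy rule-based stand-in for the ML model that maps records to a subspecialty."""
    text = note_text.lower()
    if "thyroid nodule" in text:
        return TriageResult("Head & Neck Surgery - Thyroid", "urgent")
    if "hearing loss" in text:
        return TriageResult("Otology", "routine")
    return TriageResult("General Otolaryngology", "routine")

if __name__ == "__main__":
    print(match_subspecialty("Ultrasound shows a 2.1 cm thyroid nodule, TI-RADS 4."))
```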

The IIAM software platform has achieved a referral accuracy rate nearly 30% higher than labor-intensive, personnel-driven status-quo workflows (90% vs 60%), even with limited patient information (90% accuracy with 30-70% of patient records missing). This clinical effectiveness was confirmed using patient data from the two largest national cancer databases (SEER, NCDB) and three tertiary healthcare centers (UCSF, Hopkins, MGB). At UCSF, our product has undergone both retrospective and prospective validation, and our algorithm was 100% accurate in identifying malignancy. Our team hopes to utilize the UCSF AI Pilots program to develop a solution tailored to the institution’s specific self-scheduling needs and to refine algorithm performance by training on UCSF clinical data.

Section 3: How would an end-user find and use it? 

The AI tool will be integrated into UCSF’s existing online scheduling system, where it will prompt new patients seeking a specialty visit to upload relevant medical documents—such as referral letters, imaging results, pathology reports, or lab work. 

Once documents are submitted, IIAM’s AI analyzes the content to understand the underlying condition, clinical urgency, and appropriate subspecialty. Within seconds, the patient receives a tailored list of providers with real-time appointment availability, ranked by clinical fit and urgency, based on the content of their documents (Figure 2). The patient then selects from the AI-filtered list of appointment options, enabling them to self-schedule with an appropriate provider without waiting for manual triage, provider verification, or callbacks. If the AI determines that a more urgent evaluation is needed, it may recommend an earlier visit slot or flag the case for real-time escalation to nurse triage.
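
As an illustration of the ranking step, the sketch below sorts available providers by clinical fit, weighting earlier availability more heavily for urgent cases. The field names and weights are assumptions for illustration, not the deployed scoring logic.

```python
# Hypothetical sketch of assembling an AI-ranked list of appointment options;
# the fit scores, slots, and weighting are illustrative assumptions.
from datetime import datetime, timedelta

providers = [
    {"name": "Dr. A", "subspecialty_fit": 0.95, "next_slot": datetime.now() + timedelta(days=3)},
    {"name": "Dr. B", "subspecialty_fit": 0.70, "next_slot": datetime.now() + timedelta(days=1)},
    {"name": "Dr. C", "subspecialty_fit": 0.92, "next_slot": datetime.now() + timedelta(days=10)},
]

def rank_options(providers, urgency: str):
    """Rank by clinical fit; for urgent cases, penalize later availability more heavily."""
    def score(p):
        days_out = (p["next_slot"] - datetime.now()).days
        availability_penalty = (0.05 if urgency == "urgent" else 0.01) * days_out
        return p["subspecialty_fit"] - availability_penalty
    return sorted(providers, key=score, reverse=True)

for option in rank_options(providers, urgency="urgent"):
    print(option["name"], option["next_slot"].date())
```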

The system is designed to require minimal effort from the patient while maximizing the value of any existing work-up they have completed. Importantly, the AI acts as a behind-the-scenes assistant—not a gatekeeper. Patients retain the ability to view other available providers or request help if needed. For internal users (e.g., nurse navigators or access center staff), a clinical summary of the AI’s triage decision can also be displayed to assist in complex case management.

To minimize errors, we intend to retain a human in the loop during triage and prior to scheduling for the first three months of the live pilot. If success metrics and accuracy rates remain high, the team will consider gradually scaling back human-in-the-loop involvement. The algorithm provides a confidence score with every referral, and any referral associated with a low confidence score will automatically be flagged for human review.
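
A minimal sketch of the confidence gating described above, assuming the model returns a calibrated confidence score; the 0.80 threshold is an illustrative value, not the production cutoff.

```python
# Illustrative only: the threshold would be calibrated during the pilot.
REVIEW_THRESHOLD = 0.80  # assumed value for illustration

def route_referral(prediction: dict) -> str:
    """Return 'auto_schedule' or 'human_review' based on model confidence."""
    if prediction["confidence"] < REVIEW_THRESHOLD:
        return "human_review"   # flagged for nurse navigator / coordinator review
    return "auto_schedule"      # patient may proceed to self-schedule

print(route_referral({"subspecialty": "Thyroid Surgery", "confidence": 0.62}))  # human_review
print(route_referral({"subspecialty": "Thyroid Surgery", "confidence": 0.93}))  # auto_schedule
```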

Section 4: Embed a picture of what the AI tool might look like. 

Figure 2. Proposed UCSF Self-Scheduling Workflow. Based on the patient’s pre-existing work-up and documents, the UCSF patient self-scheduling portal will provide the patient with an AI-filtered list of appointment options with subspecialists that treat the patient’s existing condition. 

Figure 3. UCSF Self-Scheduling User Interface. A picture of the patient user interface showing the AI-filtered list of appointment options with appropriate subspecialists.

Section 5: What are the risks of AI errors? 

False negatives—cases where urgent conditions are missed—can lead to harmful delays in care. In contrast, false positives may cause patients to be seen earlier than necessary, potentially burdening provider schedules. To mitigate this, the algorithm is intentionally designed to favor false positives over false negatives when evaluating the urgency of a patient’s chief complaint. This approach was developed in close collaboration with oncology providers, based on the shared principle that it is preferable to evaluate a benign lesion too early than to delay care for a potential cancer patient.
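
One common way to encode this preference is to lower the probability threshold at which a case is flagged as urgent, so that borderline cases are escalated rather than missed. The sketch below uses an illustrative 0.3 cutoff; the actual threshold would be tuned in collaboration with oncology providers.

```python
# Illustrative asymmetric decision threshold: favoring false positives over
# false negatives when assessing urgency. The 0.3 cutoff is an assumption.
def flag_as_urgent(p_urgent: float, threshold: float = 0.3) -> bool:
    """A threshold well below 0.5 escalates borderline cases instead of missing them."""
    return p_urgent >= threshold

# A case the model considers only moderately likely to be urgent is still escalated.
print(flag_as_urgent(0.35))  # True
print(flag_as_urgent(0.10))  # False
```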

Encouragingly, the algorithm has demonstrated a high level of accuracy across leading healthcare systems (Figure 4). It significantly outperformed the traditional personnel-based call center at Johns Hopkins, achieving 87% accuracy using a random forest model, compared to 60% under current workflows. At UCSF, the algorithm achieved approximately 90% accuracy when benchmarked against physician assessments and pathology-confirmed diagnoses (Figure 5). These real-world results suggest a reliable and scalable solution for streamlining referrals and improving timely access to specialty care.
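
For readers unfamiliar with the model family, the sketch below shows how a random forest classifier can be trained and benchmarked for hold-out accuracy. It uses synthetic data and is not the Johns Hopkins or UCSF model; it only illustrates the technique and metric referenced above.

```python
# Illustrative benchmark of a random forest classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for featurized referral documents (e.g., bag-of-words vectors).
X, y = make_classification(n_samples=1000, n_features=50, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Hold-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```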

Figure 4. IIAM performance at both UCSF and JHMI. IIAM performance on incoming referrals during 2024 at both Johns Hopkins and UCSF. At JHMI, the status-quo referral workflow involves a centralized call center and Epic-based random forest algorithms, with a baseline performance of 60%. The physician’s assessment/plan and any surgical pathology reports served as the ground truth.

Figure 5. Confusion matrix for IIAM ML algorithm and UCSF Head and Neck Pathologies. Pathologies were determined from physician assessment and any related pathology results during a patient visit. Of the pathologies, 96% of non-endocrine neoplasm pathologies matched algorithm pathology predictions; 85% of benign lesion pathologies matched algorithm pathology predictions; 96% (26/27) of thyroid pathologies matched algorithm pathology predictions; 100% of parathyroid pathologies matched algorithm pathology predictions; and 100% of salivary gland pathologies matched algorithm pathology predictions.
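
Per-class agreement of the kind reported in Figure 5 is simply the diagonal of the confusion matrix divided by each row total, as in the sketch below. Apart from the 26/27 thyroid count quoted above, the counts are made-up placeholders, not the actual UCSF validation data.

```python
# Illustrative per-class agreement from a confusion matrix; counts are placeholders
# except the 26/27 thyroid figure quoted in the caption above.
confusion = {
    # true label -> {predicted label: count}
    "thyroid":        {"thyroid": 26, "benign lesion": 1},
    "benign lesion":  {"benign lesion": 17, "thyroid": 3},
    "salivary gland": {"salivary gland": 12},
}

for true_label, preds in confusion.items():
    total = sum(preds.values())
    correct = preds.get(true_label, 0)
    print(f"{true_label}: {correct}/{total} = {correct / total:.0%} agreement")
```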

Section 6. How will we measure success?

Measurements using data that is already being collected in APeX: 

  • New patient referral volume
  • Number of patients scheduled per month via the self-scheduling portal
  • Time to triage
  • Time to schedule 
  • Time to treatment (a sketch of deriving these interval metrics from APeX timestamps appears after this list)
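
As referenced above, the sketch below shows how the interval metrics could be derived from APeX timestamps; the field names are assumptions about a referral extract, not actual APeX column names.

```python
# Illustrative only: field names are assumptions about an APeX referral extract.
from datetime import datetime

referral = {
    "referral_received":     datetime(2025, 3, 1, 9, 0),
    "triage_completed":      datetime(2025, 3, 4, 15, 30),
    "appointment_scheduled": datetime(2025, 3, 5, 10, 0),
    "treatment_started":     datetime(2025, 3, 28, 8, 0),
}

def days_between(start_key: str, end_key: str) -> float:
    return (referral[end_key] - referral[start_key]).total_seconds() / 86400

print(f"Time to triage:    {days_between('referral_received', 'triage_completed'):.1f} days")
print(f"Time to schedule:  {days_between('referral_received', 'appointment_scheduled'):.1f} days")
print(f"Time to treatment: {days_between('referral_received', 'treatment_started'):.1f} days")
```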

Other measurements you might ideally have to evaluate the success of the AI:

  • Percentage of appropriately scheduled referrals (etiology and urgency)
  • Satisfaction scores (patients, providers, practice coordinators)
  • Percentage of incoming referrals scheduled (UCSF H&N baseline: 62%)
  • Surgical conversion rate, i.e., optimal clinical utilization (e.g., cancer patients who are not surgical candidates see medical oncology first; non-biopsied patients see ENT first; patients with benign lesions or incomplete work-up see an APP first)

Section 7: Describe your qualifications and commitment: 

Our team combines deep clinical expertise with a proven track record of applying AI/ML solutions to real-world healthcare challenges. With a shared commitment to improving patient access and outcomes—particularly in cancer care—we are uniquely positioned to lead this initiative.

Katherine Wai, MD is a head and neck cancer surgeon-scientist in the UCSF Department of Otolaryngology–Head and Neck Surgery. She has published peer-reviewed research on the use of AI/ML to improve triage and referral processes for cancer patients, reflecting her dedication to innovation in care delivery. Dr. Wai is the principal investigator for UCSF’s pilot studies involving IIAM technology and will ensure the project meets and exceeds national quality improvement benchmarks.

Patrick Ha, MD is the Medical Director of UCSF Mission Bay Adult Services and holds the Irwin Mark Jacobs and Joan Klein Jacobs Distinguished Professorship in Head and Neck Surgery. An international expert in head and neck cancer research and outcomes, Dr. Ha brings deep expertise in NCCN and AJCC cancer guidelines. He will oversee the integration of AI-driven solutions into UCSF’s self-scheduling workflows and ensure alignment with UCSF Cancer Center's patient access and strategic goals.

Nicole Jiam, MD is the Director of the UCSF Otolaryngology Innovation Center and Chief Executive Officer of IIAM Health. A clinical informaticist with experience collaborating across leading academic health systems—including Mass Eye and Ear and Johns Hopkins—Dr. Jiam has authored multiple peer-reviewed publications on AI/ML in healthcare and holds patents from both institutions. She brings a rare blend of clinical, academic, and entrepreneurial experience, having served on advisory boards for health tech companies nationwide and as a former fellow with digital health-focused VC firms. Dr. Jiam will actively guide product development and participate in regular progress reviews with UCSF’s AER team.

Together, Drs. Wai, Ha, and Jiam lead a cross-disciplinary effort grounded in clinical excellence and operational insight. With active involvement from UCSF leadership, their work reflects a sustained commitment to transforming cancer care access through scalable, AI-powered solutions.

Supporting Documents: 

Comments

It's bad if a patient with cancer has to wait a long time to be seen.  It also seems like a bad outcome if they see the wrong provider (even if rapidly).  How often did that happen in testing?  Would there be a human in the loop at any point during the triage and scheduling and before the first appt?  Do you think it's necessary?  Has there been any prospective validation of accuracy, or were all validation tests using retrospective data? 

You're absolutely right—both delayed care and misdirected referrals are critical concerns. In our retrospective and silent-prospective testing, the AI correctly identified the appropriate subspecialty provider with ~90% accuracy. While these results are promising, we agree that a human-in-the-loop is essential for safety, especially in early deployment. In our current implementation, practice coordinators review AI outputs before final scheduling.