Proposal Overview:
Assessment of graduate medical education (GME) trainees is a high priority for training programs. Training program directors must be able to certify that graduates can practice independently in their respective specialty or subspecialty. Most Accreditation Council for Graduate Medical Education (ACGME) programs, including the UCSF Department of Medicine (DOM) ACGME programs, have an assessment system designed to report milestone assessments to the ACGME and to the organization responsible for certifying individuals (such as the American Board of Internal Medicine). Programs assign milestone assessments to faculty or other supervising personnel, who complete and return them to the program for review by Clinical Competency Committees (CCCs). The limited relevance of the milestones to the complexity of physicianship, along with uneven trainee and faculty engagement with the assessment process, has made it challenging for DOM training programs to collect, analyze and report the clinical competence of trainees. We seek to create a novel, learner-driven GME assessment system that uses Entrustable Professional Activities (EPAs) within a newly developed information system to better assess trainee performance while empowering trainees to actively manage their own assessment process.
Specific Aims:
1- Design and implement an information system for all Department of Medicine educational programs that collects and displays trainee assessment data, with customized views that allow trainees, advisors and clinical competency committees to effectively assess trainee performance.
2- Change the culture of trainee assessment from program-driven to trainee-driven by enabling learners to proactively solicit faculty assessment in areas of need, monitor their progress toward assessment goals, and control the release of designated assessable elements to the clinical competency committee under the guidance of their faculty advisor. (An illustrative data-model sketch follows below.)
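To make these aims concrete, the sketch below illustrates one possible data model for the proposed information system: each EPA assessment carries a release flag controlled by the trainee and advisor, and the CCC view filters on that flag. This is a minimal illustration only; all class, field and function names are assumptions, not a committed design.

```python
# Illustrative sketch only: class, field and function names are assumptions,
# not a committed design for the DOM assessment information system.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class EpaAssessment:
    epa_name: str            # e.g. "Acute Care EPA"
    evaluator: str           # faculty member or other supervising professional
    entrustment_level: int   # assumed 1 (observe only) .. 5 (unsupervised)
    comments: str
    completed_on: date
    released_to_ccc: bool = False  # trainee and advisor control this flag

@dataclass
class TraineeRecord:
    trainee: str
    assessments: List[EpaAssessment] = field(default_factory=list)

    def trainee_view(self) -> List[EpaAssessment]:
        """Trainees and their advisors see every assessment."""
        return self.assessments

    def ccc_view(self) -> List[EpaAssessment]:
        """The CCC sees only assessments the trainee has released."""
        return [a for a in self.assessments if a.released_to_ccc]
```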
Background and Significance:
Various frameworks have evolved over the years to guide how the competence of trainees is assessed and reported. The transition to a philosophy of assessing trainee outcomes began in 1999, when the ACGME introduced the six core competencies (patient care, medical knowledge, systems-based practice, practice-based learning, communication and professionalism). The introduction of the Next Accreditation System (NAS) extended these concepts by adding milestones as subsections of the core competencies. The internal medicine community began with 142 educational milestones, which were subsequently reduced to 22 sub-competencies embedded in the original six ACGME competency domains. (2)
In recognition of the limitations of the milestone-based assessment process, an alternative approach uses EPAs as a framework for assessing competence. EPAs reflect the degree of mastery of the professional tasks that together constitute the work of a profession. (3) Successfully performing an EPA requires abilities across multiple competencies. Using EPAs for assessment recognizes that professional tasks are complex and provides a holistic view of the successful acquisition of professional skills.
Current UCSF DOM Assessment Strategy:
In response to this rapidly changing landscape, our current assessment strategy uses both milestone- and EPA-based systems to assess our trainees. Our primary assessment modality links specific milestones to trainee rotations; faculty summatively assess these milestones using the E*value platform. Our programs may deploy different tools (global assessments, procedure observations, 360-degree evaluations, chart-stimulated recall, qualitative comments) to assess their program-specific milestones, but most programs depend on observational methods to record the initial milestone assessment. Clinical competency committees (CCCs) then meet to determine the summative milestone score that is reported to the ACGME and the ABIM.
Currently, a few EPAs are in use in certain programs (Internal Medicine, Geriatrics, and Hematology/Oncology) and are recorded in a Qualtrics survey platform, but those data are hard to collate in a meaningful way for use by the CCC. CCCs therefore remain largely dependent on the milestone scores obtained from direct faculty observations, a process with many limitations.
Analysis of DOM Assessment Strategy:
Program CCCs are reporting problems with the milestone assessment processes that have been in effect since AY2013-14 for the internal medicine residency program and AY2014-15 for the fellowship programs. First, compliance with completing the assessments is poor; across the DOM training programs, initial compliance with assigned evaluations is only 50%. Considerable administrative time is spent raising completion rates to the required 70% level, and program leadership often has to step in to improve faculty compliance. The reasons faculty cite for their difficulty completing assessments are "assessment fatigue" caused by multiple end-of-month evaluations coming due simultaneously, the need to assess as many as 22 milestones, and overall frustration with the E*value platform. Faculty also state that they have learned to ignore or delete automatic email assessment reminders from E*value, further undermining compliance with our evaluation process, whereas direct requests from trainees for feedback are harder to ignore because of the personal connection of the request.
Second, the quality of the assessments is also in question. CCCs often base their judgments on an insufficient number of numeric scores within each milestone category. This has led our CCC to become concerned about whether we are able to fully assess trainees' performance. Similarly, our CCC is unable to effectively identify areas of concern for struggling trainees, preventing appropriate remediation efforts.
To promote EPA-based assessment, the internal medicine residency began exploring an EPA-based assessment system in 2013. In 2013-14 we used a Delphi method to select and implement a group of eight EPAs to make our assessments more activity-specific (Table 1). We also identified twenty-two additional EPAs with at least 80% participant agreement that can guide the future expansion of this set. (6)
Table 1. Current EPAs housed on Qualtrics Platform
Acute Care EPA
Clinic Post-Discharge EPA
Code Leadership EPA
Discharge Summary EPA
Four Habits Clinic Survey
Multidisciplinary Rounds Inpatient Feedback
Rounds Observation EPA
Serious Illness Communication
Although these and future EPAs offer a robust menu of assessment parameters, we have been unable to implement them in a meaningful way because of several limitations of our assessment technology and our inability to create a trainee-centered interactive assessment platform. Table 2 outlines our implementation efforts thus far.
Table 2. Timeline of EPA Implementation efforts
Platform Used | System Advantages | Problems with Implementation
Mahara Portfolio System | Trainees log their own assessment data. Captures a wide range of data. Trainees release data for feedback. | Unwieldy to add data. Faculty unable to track progress.
MyFolio System | Potential for a single data platform. Ease of expansion to new EPAs. | Poor communication between the E*value and MyFolio platforms.
Qualtrics | HIPAA-compliant. Easy to use, edit and summarize data. Free for UCSF use. | Difficult data management. Unable to summarize for the CCC. Lacks interconnectivity with our current assessment platform. Cannot be used quickly on a handheld device.
Vision for Future Assessment Process:
While gathering the concerns described above from our faculty and CCC, we have also been engaging our trainees to shape our future assessment process. Over the last three years we have conducted eight focus groups with a total of approximately 100 residents and fellows. Several themes have emerged from these focus groups that match concerns voiced by resident trainees across the country about inadequate supervision or feedback. (4-5) These themes are summarized in Table 3.
Table 3. Trainee themes from assessment focus groups.
Theme #1 | Trainee assessment that depends on summative, end-of-rotation feedback (i.e., milestone-based assessments) lacks the specificity trainees need to improve their professional practice. Trainees would prefer more timely, qualitative, in-person, practice-based feedback to help them grow professionally.
Theme #2 | Current assessment tools produce scores and comments that are difficult for residents to access or interpret, making it hard to follow their progress in real time.
Theme #3 | With the advent of the Clinical Competency Committees (CCCs) mandated by the ACGME, trainees feel they lack transparency into, and control over, their own assessments, leading to mistrust of the assessment system.
Theme #4 | The current culture in our residency and fellowship programs does not facilitate clear, consistent and continuous feedback from faculty to trainees. Trainees feel this acutely and often feel criticized when receiving anything less than superlative feedback from faculty.
As an initial step toward addressing these themes, we began a two-month pilot project in which trainees actively solicit EPA activity-based feedback from faculty or other professional staff. These evaluations are entirely resident-driven and are collected in Qualtrics. This proof-of-concept pilot lets us examine EPA completion rates, residents' willingness to participate in the process, and their satisfaction with the feedback they receive. The pilot is ongoing but has been well received by trainees and evaluators alike.
Although this pilot is an exciting advance, it fulfills only our goal of facilitating timely trainee feedback. It does not address our goals of creating an accessible trainee information platform in which trainees drive their own assessments and highlight their best work for the CCC, and it takes only a small incremental step toward a more open and meaningful assessment and feedback culture.
Proposal Goals:
We plan to create an assessment system that addresses our CCC's concern about obtaining adequate assessment data for trainee promotion, trainees' concerns about the transparency, timeliness and quality of their assessments, and faculty frustration with "evaluation burn-out". We will accomplish this by expanding our EPAs and creating a user-friendly trainee landing page that consolidates assessment data.
Goal 1- Create a trainee "landing page" that enables residents to review their summative and activity-driven (EPA) assessment data with their advisors, facilitating a shared understanding of that data.
Figure 1- Mock trainee landing page.
Goal 2- Give trainees more control over the assessment data presented to the CCC for entrustment. Figure 2 shows the assessment modalities immediately available to the CCC. Assessment data shown in red are available to the trainee and their advisor; through joint discussion they can release the trainee's best work for "entrustment" by the CCC. For each EPA, trainees must release three "entrustable" evaluations completed by three separate faculty members for review by the CCC. (A sketch of this release rule follows Figure 2.)
Figure 2- Types of data available to trainees, advisors and CCC
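Continuing the illustrative data model sketched under the Specific Aims, the function below encodes the Goal 2 release rule: an EPA is ready for CCC entrustment review only when the trainee has released at least three "entrustable" evaluations completed by three separate faculty members. The threshold shown is an assumption; the proposal does not specify a rating scale or cutoff.

```python
# A minimal sketch of the Goal 2 release rule, reusing the EpaAssessment and
# TraineeRecord classes sketched earlier. The >= 4 cutoff on an assumed
# 5-point entrustment scale is illustrative only.
ENTRUSTABLE_THRESHOLD = 4

def ready_for_ccc_review(record: TraineeRecord, epa_name: str) -> bool:
    """True when three separate faculty have released entrustable evaluations."""
    released = [
        a for a in record.assessments
        if a.epa_name == epa_name
        and a.released_to_ccc
        and a.entrustment_level >= ENTRUSTABLE_THRESHOLD
    ]
    # Count distinct evaluators, not total evaluations, per the release rule.
    return len({a.evaluator for a in released}) >= 3
```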
Goal 3- Improve the timeliness, quality and specificity of assessment information for trainees while reducing the evaluation burden on faculty by creating streamlined EPA assessment forms and a handheld app that facilitates immediate completion. Form revisions will be driven by ongoing feedback from trainees and faculty in the current pilot project. (An illustrative form sketch follows.)
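To illustrate how a streamlined, handheld-friendly EPA form might stay short enough for immediate completion, the sketch below defines a minimal form payload and a completeness check. The field names, and the choice of exactly four fields, are assumptions to be refined by pilot feedback.

```python
# Illustrative sketch of a streamlined EPA form intended for completion on a
# handheld device in under a minute; all field names are assumptions.
MINIMAL_EPA_FORM = {
    "epa_name": None,              # chosen from the program's EPA list
    "entrustment_level": None,     # single tap on an assumed 1-5 scale
    "one_thing_done_well": None,   # short free text
    "one_thing_to_improve": None,  # short free text
}

def submission_is_complete(form: dict) -> bool:
    """A submission is complete when every required field is filled in."""
    return all(form.get(key) is not None for key in MINIMAL_EPA_FORM)
```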
Goal 4- Use the data collected in the new information system to make CCC deliberations more transparent to trainees. With improved assessment data, the CCC will generate automatic entrustment emails for EPAs and procedures when trainees are signed off (sketched below). Similarly, we will be able to target earlier and more focused development efforts toward residents who are not progressing appropriately on their milestones.
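The sketch below shows one way the system might draft the automatic entrustment notices described in this goal once the CCC records a sign-off; the message wording, function names and decision structure are all assumptions.

```python
# Illustrative sketch only: drafting automatic entrustment notices after CCC
# sign-off. Wording, names and the decision structure are assumptions.
from typing import List, Tuple

def draft_entrustment_email(trainee: str, epa_name: str) -> str:
    """Compose a congratulatory entrustment notice for one EPA."""
    return (
        f"Dear {trainee},\n\n"
        f"The Clinical Competency Committee has reviewed your released "
        f"evaluations and has entrusted you to perform '{epa_name}'. "
        f"Congratulations on this step in your training.\n"
    )

def notices_for_decisions(decisions: List[Tuple[str, str, bool]]) -> List[str]:
    """decisions holds (trainee, epa_name, signed_off) triples from a CCC meeting."""
    return [
        draft_entrustment_email(trainee, epa)
        for trainee, epa, signed_off in decisions
        if signed_off
    ]
```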
Team Members:
Jeff Kohlwes MD, MPH. School of Medicine. Department of Internal Medicine. Clinical Professor of Medicine. Director, PRIME Residency Program, Associate Program Director for Assessment and Feedback Internal Medicine Residency Program.
Patricia Cornett MD. School of Medicine. Department of Internal Medicine. Clinical Professor of Medicine. Associate Chair for Education, Department of Internal Medicine.
Vanessa Thompson MD. School of Medicine. Department of Internal Medicine. Assistant Clinical Professor of Medicine. General Medicine Clinic Assistant Medical Director, San Francisco General Hospital; Internal Medicine Residency Director of Academic Development.
Sumant Ranji MD. School of Medicine. Department of Internal Medicine. Associate Program Director for Quality and Safety. Internal Medicine Residency Program.
Irina Kryzhanovskaya MD. School of Medicine. Department of Internal Medicine. Senior Resident in Internal Medicine, Chief Resident-Elect.
Ray Chen MD. School of Medicine. Department of Internal Medicine. Junior Resident in Internal Medicine.