Proposal Overview:
Assessment of graduate medical education (GME) trainees is a high priority for training programs. Training program directors must be able to certify that graduates are able to practice independently in their respective specialty or subspecialty. Most Accreditation Council for Graduate Medical Education (ACGME) programs, including the UCSF Department of Medicine (DOM) ACGME programs, have an assessment system designed to report milestone assessments to the ACGME and to the organization responsible for certifying individuals (such as the American Board of Internal Medicine). Milestone assessments are assigned by programs for faculty or other supervising personnel to complete and return to the program for review by Clinical Competency Committees (CCCs). Two challenges have hampered the collection, analysis, and reporting of trainees' clinical competence in DOM training programs: the limited relevance of the milestones to the complexity of physicianship, and inconsistent trainee and faculty engagement with the assessment process. We seek to create a novel, learner-driven GME assessment system, built around Entrustable Professional Activities (EPAs) in a newly developed information system, that better assesses trainee performance while empowering trainees to actively manage their own assessment process.
Specific Aims:
1- Design and implement an information system for all Department of Medicine educational programs that will collect and display trainee assessment data with customized views to allow trainees, advisors and competency committees to effectively assess trainee performance.
2- Change the culture of trainee assessment from program-driven to trainee-driven by enabling learners to proactively solicit faculty assessment in areas of need, monitor their progress toward assessment goals, and control the release of designated assessable elements to the clinical competency committee under the guidance of their faculty advisor.
Background and Significance:
Various frameworks have evolved over the years to guide how to assess and report the competence of trainees. The transition to a philosophy of assessing trainee outcomes began in 1999, when the ACGME introduced the six core competencies (patient care, medical knowledge, systems-based practice, practice-based learning, communication, and professionalism). The introduction of the Next Accreditation System (NAS) evolved these concepts to include milestones as subsections of the core competencies. The internal medicine community began with 142 educational milestones, which were subsequently reduced to 22 sub-competencies embedded in the original six ACGME competency domains. (2)
In recognition of the limitations of the milestone-based assessment process, an alternative approach uses EPAs as a framework for assessing competence. EPAs reflect the degree of mastery of the professional tasks that together constitute the work of a profession. (3) Successfully performing an EPA requires abilities across multiple competencies. Using EPAs for assessment recognizes that professional tasks are complex and provides a holistic view of the acquisition of professional skills.
Current UCSF DOM Assessment Strategy:
In response to this rapidly changing landscape, our current assessment strategy uses both milestone- and EPA-based systems to assess our trainees. Our primary assessment modality links specific milestones to trainee rotations; these milestones are summatively assessed by faculty using the E*value platform. Our programs may deploy different tools (global assessments, procedure observations, 360-degree evaluations, chart-stimulated recall, qualitative comments) to assess their program-specific milestones, but most programs depend on observational methods to record the initial milestone assessment. Clinical Competency Committees (CCCs) then meet to determine the summative milestone score that is reported to the ACGME and the ABIM.
Currently, a few EPAs are in use in certain programs (Internal Medicine, Geriatrics, and Hematology/Oncology) and are recorded in a Qualtrics survey platform, but those data are hard to collate in a meaningful way for use by the CCC. Therefore, CCCs remain largely dependent on the milestone scores obtained from direct faculty observations, a process that has many limitations.
Analysis of DOM Assessment Strategy:
Program CCCs are reporting issues with their milestone assessment processes, which have been in effect since AY2013-14 for the internal medicine residency program and AY2014-15 for the fellowship programs. First, compliance with completing the assessments is poor: across the DOM training programs, the initial completion rate for assigned evaluations is only 50%. Considerable administrative time is spent pushing completed evaluations to the required 70% level, and program leadership often has to step in to improve faculty compliance. Reasons faculty cite for their difficulty completing assessments are "assessment fatigue" caused by multiple end-of-month evaluations coming due simultaneously, the need to assess as many as 22 milestones, and overall frustration with the E*value platform. Faculty also state that they have learned to ignore or delete automatic email assessment reminders from E*value, further undermining compliance, whereas direct requests from trainees for feedback are harder to ignore because of the personal connection of the request.
Second, the quality of the assessments is also in question. CCCs often must base their judgments on too few numeric scores within each milestone category. This has led our CCC to question whether we are able to fully assess trainees' performance. Similarly, our CCC is unable to effectively identify areas of concern for struggling trainees, preventing appropriate remediation efforts.
In an effort to promote EPA-based assessment, the internal medicine residency began exploring an EPA-based assessment system in 2013. In 2013-14 we used a Delphi method to implement a group of 8 EPAs to make our assessments more activity-specific (Table 1). We also identified twenty-two additional EPAs, each with at least 80% participant agreement, that can guide future expansion of this set. (6)
Table 1. Current EPAs housed on Qualtrics Platform
Acute Care EPA
Clinic Post-Discharge EPA
Code Leadership EPA
Discharge Summary EPA
Four Habits Clinic Survey
Multidisciplinary Rounds Inpatient Feedback
Rounds Observation EPA
Serious Illness Communication
Although these and future EPAs comprise a robust menu of assessment parameters, we have been unable to implement them in a meaningful manner due to several limitations of our assessment technology and our inability to create a trainee-centered interactive assessment platform. Table 2 outlines our implementation efforts thus far.
Table 2. Timeline of EPA Implementation efforts
Platform Used | System Advantages | Problems with Implementation |
Mahara Portfolio System | Trainees log their own assessment data. Captures a wide range of data. Trainees release data for feedback. | Unwieldy to add data. Faculty unable to track progress. |
MyFolio System | Potential for a single data platform. Easy expansion to new EPAs. | Poor communication between the E*value and MyFolio platforms. |
Qualtrics | HIPAA compliant. Easy to use, edit, and summarize data. Free for UCSF use. | Data management. Unable to summarize for the CCC. No interconnectivity with the current assessment platform. Unable to use quickly on a handheld device. |
Vision for Future Assessment Process:
While learning about these concerns from our faculty and CCC, we have been engaging our trainees to help shape our future assessment process. Eight focus groups, with a total of approximately 100 residents and fellows, have been conducted over the last three years. Several themes have emerged from these focus groups that match concerns voiced by resident trainees across the country when supervision or feedback is inadequate. (4-5) These themes are summarized in Table 3.
Table 3. Trainee themes from assessment focus groups.
Theme #1 | Trainee assessment that depends on summative, end-of-rotation feedback (i.e., milestone-based assessment) lacks the specificity to enable trainees to improve their professional practice. Trainees would prefer more timely, qualitative, in-person, and practice-based feedback to help them grow professionally. |
Theme #2 | Current assessment tools create scores and comments that are difficult for residents to understand or to access to help them follow their progress in real-time. |
Theme #3 | With the advent of the Clinical Competency Committees (CCCs) mandated by the ACGME, trainees feel there is a lack of transparency and control over their own assessments, leading to mistrust of the assessment system. |
Theme #4 | The current culture in our residency and fellowship programs does not facilitate constant, clear and continuous feedback from faculty to trainees. Trainees feel this acutely and often feel criticized when receiving anything less than superlative feedback from faculty. |
As an initial step toward addressing these themes, we began a two-month pilot project in which trainees actively solicit EPA activity-based feedback from faculty or other professional staff. These evaluations are completely resident-driven and are collected in Qualtrics. This proof-of-concept pilot enables us to examine EPA completion rates, residents' willingness to participate in the process, and their satisfaction with the feedback they receive. The pilot is ongoing but has been well received by trainees and evaluators alike.
Although this pilot is an exciting advance, it fulfills only our goal of facilitating timely trainee feedback. It does not address our goals of creating an accessible trainee information platform where trainees can drive their own assessments and highlight their best work for the CCC, and it takes only a small incremental step toward a more open and meaningful assessment and feedback culture.
Proposal Goals:
We plan to create an assessment system that addresses our CCC's concerns about obtaining adequate assessment data for trainee promotion, trainees' concerns about the transparency, timeliness, and quality of their assessments, and faculty frustration with "evaluation burn-out". We will accomplish this by expanding EPAs and creating a user-friendly trainee landing page that consolidates assessment data.
Goal 1- To create a trainee “landing page” that enables residents to review their summative and activity-driven (EPA) assessment data with their advisors to facilitate understanding of their assessment data.
Figure 1- Mock trainee landing page.
Goal 2- Give trainees more control over the assessment data that is presented to the CCC for entrustment. Figure 2 shows the assessment modalities immediately available to the CCC. Assessment data in red is available to the trainee and their advisor; through joint discussion they can release their best work for "entrustment" by the CCC. For each EPA, trainees must release three "entrustable" evaluations completed by three separate faculty members for review by the CCC.
Figure 2- Types of data available to trainees, advisors and CCC
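The release rule in Goal 2 (three entrustable evaluations from three distinct faculty members per EPA) can be expressed as a simple validation check in the proposed information system. The following is a minimal Python sketch under stated assumptions; all names (`EpaEvaluation`, `ready_for_ccc`, the field names) are hypothetical illustrations, not part of any existing UCSF platform.

```python
from dataclasses import dataclass


@dataclass
class EpaEvaluation:
    """One completed EPA evaluation for a trainee (hypothetical record shape)."""
    epa_name: str
    faculty_id: str
    entrustable: bool          # evaluator judged the performance entrustable
    released_by_trainee: bool  # trainee and advisor released it to the CCC


def ready_for_ccc(evaluations, epa_name, min_faculty=3):
    """Check the release rule: at least `min_faculty` released, entrustable
    evaluations from distinct faculty members for the given EPA."""
    distinct_faculty = {
        e.faculty_id
        for e in evaluations
        if e.epa_name == epa_name and e.entrustable and e.released_by_trainee
    }
    return len(distinct_faculty) >= min_faculty
```

Note that a set of faculty IDs is used so that multiple released evaluations from the same faculty member count only once toward the threshold.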
Goal 3- Improve the timeliness, quality, and specificity of assessment information for trainees while reducing the evaluation burden on faculty by creating streamlined EPA assessment forms and a handheld app that facilitates immediate completion. Revisions to the assessment forms will be driven by ongoing feedback from trainees and faculty in the current pilot project.
Goal 4- Use data collected in the new information system to make CCC deliberations more transparent to trainees. With improved assessment data, the CCC will generate automatic entrustment emails for EPAs and procedures when trainees are signed off. Similarly, we will be able to target earlier, more focused development efforts toward residents who are not progressing appropriately on their milestones.
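The automatic entrustment notifications described in Goal 4 amount to drafting one message per newly signed-off (trainee, EPA) pair while avoiding duplicate emails. A minimal Python sketch follows; the function name, message wording, and record shapes are hypothetical and would be replaced by the actual sign-off records and mail service of the implemented system.

```python
def entrustment_notifications(signoffs, already_notified):
    """Given CCC sign-off records as (trainee, epa) pairs, draft one
    notification per newly entrusted pair not yet notified.
    Returns the drafted messages and the updated notified set."""
    messages = []
    notified = set(already_notified)
    for trainee, epa in signoffs:
        if (trainee, epa) not in notified:
            messages.append(
                f"To: {trainee} -- The CCC has entrusted you for '{epa}'."
            )
            notified.add((trainee, epa))
    return messages, notified
```

Tracking the already-notified set keeps the process idempotent, so re-running it after a CCC meeting cannot send a trainee the same entrustment email twice.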
Team Members:
Jeff Kohlwes MD, MPH. School of Medicine. Department of Internal Medicine. Clinical Professor of Medicine. Director, PRIME Residency Program, Associate Program Director for Assessment and Feedback Internal Medicine Residency Program.
Patricia Cornett MD. School of Medicine. Department of Internal Medicine. Clinical Professor of Medicine. Associate Chair for Education, Department of Internal Medicine.
Vanessa Thompson MD. School of Medicine. Department of Internal Medicine. Assistant Clinical Professor of Medicine. General Medicine Clinic Assistant Medical Director San Francisco General Hospital, Internal Medicine Residency Director of Academic Development
Sumant Ranji MD. School of Medicine. Department of Internal Medicine. Associate Program Director for Quality and Safety. Internal Medicine Residency Program.
Irina Kryzhanovskaya MD. School of Medicine. Department of Internal Medicine. Senior Resident in Internal Medicine, Chief Resident-Elect.
Ray Chen MD. School of Medicine. Department of Internal Medicine. Junior Resident in Internal Medicine
Comments
Excellent idea and great use of contingencies.
As someone in the echo lab who spends inordinate amounts of time trying to identify the right contact for urgent results, I think this is a very important idea.
I completely support and echo the sentiments that Emily has articulated here. As an intern myself currently, I do think that there are many problems with our current communication system that relies on outdated technology and results in significantly compromised patient care efficacy. I wanted to highlight one part of Emily’s proposal that I think is the key part of this issue. Emily does mention CareWeb, which is an IT solution that was started and continues to be improved here by the UCSF IT department. While there are still ongoing interface/functionality improvements that can and will most likely be made in the near future, I think the main problem that’s being identified here is a problem of technology adoption. I like the idea of a Slack inspired interface and I do think that there are ways that those user-friendly aspects could be incorporated into CareWeb’s functions (I’m sure the UCSF IT department is welcoming this feedback for their future version designs), but I think that what needs to be changed first here is behavior. As long as there are care team members who continue to page from or to landlines and do not use a mobile app, we will continue to have to call and wait on inefficient landline communication. Unfortunately, I don’t think that any new app will be able to solve this problem fully. One might argue that the barrier to technology adoption with the current CareWeb app is large enough that it actively deters individuals from using it. This may be true for some users. However, I think the solution may lie in better user feedback and improvements to our existing home-grown CareWeb app instead of trying to build a new app from scratch with an IT grant. It would be no small feat to create and maintain a secure messaging app with the desired functionalities and there have been many past and existing companies that have already attempted to provide products to meet these needs. 
(See one of YC’s latest startups Stitch http://techcrunch.com/2015/09/21/stitch-is-slack-for-healthcare-messaging/)
While there’s much more to consider in crafting a lasting solution, from my perspective the basic minimum conditions that would be required to allow for a more efficient/modern mobile messaging based communication system are the following:
1. All care team members must have constant access to a two-way messaging device (whether that is an Ascom phone, the careweb app on their personal phone, or the careweb app on some digital device issued by UCSF)
2. Full wireless and data service needs to be available in all parts of the hospital
3. Stop using landlines. Landlines are fixed; people are not, and it is the waiting by the phone that causes missed connections, missed meals, and other inefficiencies. There can be incremental change, and I would encourage those who do have the careweb app to message each other as much as possible on the app and avoid using landline communications. (Once we have a functional mobile messaging solution with conditions 1 and 2 above, I think it would actually make sense to get rid of landlines completely.)
These conditions are necessary but by no means sufficient. Even addressing these conditions, though, I’m sure will be a large project that will require significant due diligence and hospital bureaucratic processes. Once we can get the infrastructure and culture right, the use of the technology will follow.
Thank you for bringing up this topic and starting this conversation. It’s an important one that we need to have as an institution!
I love this idea as well. To add to the careweb discussion, I think the major barrier to adoption of the smartphone version is the poor quality of the app. I use it to send pages from my phone sometimes, but it is so slow and hard to use that I've found it is usually faster to go find a computer and send a page from there. Whether through an improvement of the existing careweb app or development of a new app, this would be a very welcome improvement in communication in the hospital.
I inadvertently wore my pager to the airport recently and became the butt of relentless, mean-spirited mocking from the TSA agents. So this proposal would also reduce bullying.
As a VA-based clinician, how about an interface that would cut through firewalls and work equally well at all UCSF-associated campuses?
it would be great to figure out a way to make such an app work for ambulatory care as well!
For those of us who still have beeper PTSD from frequent pages, this is a great idea. It will take some education for those of us who are old and technophobic, but this reads like something that would make a big impact!