Rationale: There is currently an unacceptably long delay in obtaining study approval from the UCSF Committee on Human Research (CHR). The CHR must review and approve all studies involving human subjects performed by UCSF faculty, staff, or students. Studies with more than minimal risk to participants, such as trials of diagnostic tests or treatments, require in-depth review by a faculty-led committee, a process referred to as Full Committee Review. Currently, the average time from submission to approval of these studies is 84 days. This long wait time can be a major obstacle to implementing a new study by compromising grant funding, industry contracts, research staff support, and the general progress of science by UCSF investigators.
The major source of delay in approval of Full Committee Review applications is the process of “returns”, wherein the CHR requests that the investigator make a change or modification to the proposal. Among initial CHR submissions, 75% are returned for modification by a CHR analyst because they are unacceptable for review by the Full Committee. On average, the investigator response to this initial return takes 16 days. When applications are returned to the investigator after a Full Committee Review, the average time for the investigator to respond is 21 days. Many applications have multiple returns that collectively increase the time required for CHR approval.
The aim of this project is to significantly decrease CHR approval time for Full Committee Review studies by reducing the number of applications returned to the investigator for revision. UCSF approval time for Full Committee Review is significantly slower than the national average; the national target for the duration of the approval process is <42 days. An intervention to minimize or eliminate returns could meet this national goal for CHR excellence.
Plan: We will work in collaboration with John Heldens, Director of the CHR, and CHR staff to develop effective strategies that decrease the number of Full Committee Review application returns. In 2011, the CHR conducted a review of 700 applications and identified several broad categories of common investigator errors. Using these data, we will focus on improving 2 areas that account for frequent returns:
1. Administrative Errors: 50% of initial applications are either incomplete or missing required attachments. These returned applications result in significant delays because an incomplete application cannot be submitted for Full Committee Review.
Improvement Plan: We will review 100 randomly selected applications returned for missing or incomplete components to identify the most common errors (e.g., a missing consent form or incomplete investigator descriptions). We will then work closely with CHR staff and the iMedRIS administrator to develop interventions that minimize these administrative errors, such as programmed iMedRIS alerts that prompt the investigator to complete required sections and attach required documents. We will then use logistic regression models to compare the proportion of returned applications among 100 applications that use the new interventions with the proportion among 100 applications submitted in the current iMedRIS format.
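A rough sketch of how this comparison might be run is below; the file name, column names, and single-predictor model are illustrative assumptions, not part of the proposal.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Each row is one application; "returned" = 1 if the CHR returned it,
# "intervention" = 1 for the new iMedRIS alerts, 0 for the current format.
apps = pd.read_csv("chr_applications.csv")  # hypothetical extract of the 200 applications

# Logistic regression of return status on intervention group.
model = smf.logit("returned ~ intervention", data=apps).fit()
print(model.summary())

# Odds ratio for return under the new format relative to the current format.
print("Odds ratio:", np.exp(model.params["intervention"]))
```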
2. Content Errors: The second most common cause of returns is significant deficits in the content of the application such as an inadequate description of study procedures or the data safety and monitoring plan, or an incomplete explanation of the study population or recruitment procedures.
Improvement Plan: In collaboration with CHR staff, we will create a “Content Score”, a summary score that reflects the overall quality of the application in terms of key content areas that, when insufficiently addressed, put the application at high risk of return. To validate the Content Score, we will use linear regression models to determine whether a poor Content Score is associated with a longer time to CHR approval among 100 randomly selected applications. We will track these applications for the number of returns, the length of time for investigators to resubmit after each return, and the total number of days before CHR approval.
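A hypothetical sketch of this validation analysis is shown below; the file name, column names, and the unadjusted model are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Columns assumed: content_score, n_returns, days_to_approval (submission to CHR approval).
apps = pd.read_csv("content_score_sample.csv")  # hypothetical 100-application sample

# Linear regression of total days to approval on the Content Score.
model = smf.ols("days_to_approval ~ content_score", data=apps).fit()
print(model.summary())

# Descriptive check: mean number of returns by Content Score tertile.
apps["score_tertile"] = pd.qcut(apps["content_score"], 3, labels=["low", "mid", "high"])
print(apps.groupby("score_tertile")["n_returns"].mean())
```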
If the Content Score proves to be predictive of time to CHR approval, we will use it to develop targeted interventions to minimize returns. Possible strategies include notifying investigators with poor Content Scores that there is a 90% likelihood of a return and recommending further review of specific sections prior to Full Committee Review, or incentivizing investigators to provide high quality applications by prioritizing those with favorable Content Scores for review.
The Content Score will also highlight commonly misunderstood sections or questions of the CHR application. This will facilitate the development of targeted changes in the iMedRIS application to improve investigator responses and subsequently reduce return rates. Possible iMedRIS changes include re-phrasing or re-formatting questions, adding quick links to examples of high quality responses, or including links to other sources of information and help.
Criteria and Metrics for Success: We will evaluate the success of our project based on significant changes in the time for CHR approval of Full Committee Review applications. In the first 3 months after completing the interventions developed in this project, we will calculate the proportion of applications with administrative errors, the mean number of returns per application, and the total time for CHR approval of all Full Committee Review applications. We will then compare these outcomes to all Full Committee Review applications submitted in the 3 months prior to initiating the new interventions. Metrics of success will be:
- A decrease in the proportion of applications with administrative errors to <20%
- A 20% decrease in the proportion of applications returned for content errors
- A decrease in the total time for CHR approval to <43 days
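A rough illustration of how the pre/post comparisons behind these metrics might be computed with standard two-sample tests follows; file names and column names are placeholders, not part of the proposal.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

pre = pd.read_csv("pre_intervention_apps.csv")    # Full Committee Review applications, 3 months before
post = pd.read_csv("post_intervention_apps.csv")  # Full Committee Review applications, 3 months after

# Proportion of applications with administrative errors, before vs. after.
counts = [pre["admin_error"].sum(), post["admin_error"].sum()]
nobs = [len(pre), len(post)]
stat, p = proportions_ztest(counts, nobs)
print("Admin-error proportion:", counts[0] / nobs[0], "->", counts[1] / nobs[1], "p =", p)

# Total time to CHR approval, before vs. after.
t, p = stats.ttest_ind(pre["days_to_approval"], post["days_to_approval"])
print("Mean days to approval:", pre["days_to_approval"].mean(), "->",
      post["days_to_approval"].mean(), "p =", p)
```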
Budget: We request $50,000 to complete this project. The project involves significant data collection and analysis, as well as iMedRIS programming and testing. Funds will be used to support a programmer/analyst to store, clean, manage, and analyze data. Additional funds will support 5% effort for the clinical investigators, 2 CHR analysts, the iMedRIS administrator, and the CHR Director. These key personnel will work together to develop and test effective interventions to decrease CHR approval time.
Collaborators: Vanessa Jacoby, MD, MAS will be the principal investigator for this project. Dr. Jacoby is an Assistant Professor in the Department of Obstetrics, Gynecology, and Reproductive Sciences with a clinical research program focused on surgical treatments of common gynecologic conditions, such as uterine fibroids. She has advanced training in clinical research methods and has conducted multiple studies requiring Full Committee Review. Amy Gelfand, MD is a Clinical Instructor in the Department of Pediatric Neurology with clinical and research expertise in the care of children with chronic headaches. Dr. Gelfand is currently leading a project to simplify the CHR application process for low risk chart review studies. She will apply her expertise and experience from this project to assist Dr. Jacoby in completing the current proposal. John Heldens is the Director of the CHR with a focus on decreasing the number of returns among CHR Full Committee applications. Dr. Jacoby and Dr. Gelfand have collaborated with Mr. Heldens on previous projects, and they will work closely with him on the proposed project as well.
Comments
CHR approval can be a daunting hurdle requiring significant activation energy when initiating new studies. Efforts to improve and streamline the process will help decrease lost productivity due to the types of friction described in this proposal. This proposal would have broad and welcomed impact.
Consent forms that are too complex or too technical and need to be edited for plainer language are also a source of delays in CHR approval.
We agree that incorrect content in the consent form may result in a return of the application for edits and thus a delay in the approval time. Item #2 under our "Plan" should address this concern by incorporating the content and quality of the consent form into our proposed "Content Score".
I think this is a really great idea - I like the methodology you are proposing to first analyze a randomly selected set of applications to identify the most common administrative errors. Just disseminating the list would be of benefit to investigators. And the "Content Score" concept is very interesting. In addition to determining whether or not the Content Score is correlated with time to approval, it would also be interesting to see whether a CHR staff intervention based on low Content Score was associated with a decreased time to approval compared to those without an intervention.
"50% of initial applications
"50% of initial applications are either incomplete or missing required attachments." Wow. That's crazy. It was my understanding when IMedRIS was first rolled out that initial changes and upgrades would be to the CHR side, but then there would be a focus on end users. I like the idea of reviewing a random sample of applications, but I'm wondering if some sort of survey of regular users asking their biggest frustrations might also yield interesting information. I'm so glad you're including an IMedRIS programmer as part of the team, because I'm guessing simplifying the process on the user end (for example making consent forms easier to attach) could go a long way in improving approval time.
Thank you for this nice feedback on our project. We agree that CHR user input could improve our understanding of the high rate of returns. In developing this project, we did discuss including a survey of CHR users to identify commonly misunderstood parts of the application. However, we ultimately felt that this would increase the scope of the project somewhat beyond the budget and time limitations of this pilot proposal. In lieu of a user survey, we have described how we believe the Content Score will identify commonly misunderstood sections or questions of the CHR application, allowing us to develop targeted improvements in these areas.
I think another aspect that should be looked at is the workload on current CHR analysts. I have seen the same analysts assigned several studies at the same time, which delays the review process of submitted applications and consequently study approvals.
That is another very good point. As part of a separate project to analyze our business processes, we will be reviewing how work is distributed among HRPP analysts. However, the success of this pilot proposal would mean HRPP staff would spend far less time on each application.
This is a great project idea. It might be helpful to perform a quick assessment of CHR's 'capacity' and current resource utilization. The problem could reside in having higher demand (number of studies) than can be handled by the current CHR structure. Alternatively, this assessment could also reveal bottlenecks in the process, as mentioned in the previous comments (i.e., a high number of studies per analyst, iMedRIS limitations). Overall, the success of this project can have a great impact on the protocol approval process, and ultimately research as a whole across UCSF and affiliated institutions.
Thank you for your feedback and support. We decided to focus on reducing administrative and content errors because those have proven to be very difficult problems for us (HRPP) to solve on our own, and improvement in these areas would have a very positive and tangible impact on reducing effort for researchers, HRPP staff and IRB members. HRPP is currently undergoing a separate analysis of our business process, and we continue to look for additional ways to reduce the number of submissions.
This application addresses a very important issue, and in general the approach appears very sound. However, there is a lack of detail concerning the specific methods used to determine outcomes, such as "comparing the initial return rate among 100 applications that use the new interventions compared with 100 applications that are submitted..."
I suggest that the applicant include a statistician in the application in order to
1) develop a statistical approach to determining outcomes, and
2) explore the use of a data mining or machine learning approach to identifying features of applications that take longer to review.
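As a purely illustrative sketch of the data-mining idea in point 2, the snippet below fits a simple model relating application features to review time and ranks the features; the feature names, data extract, and choice of a random forest are hypothetical and not part of the proposal.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-application extract with candidate features and outcome.
apps = pd.read_csv("application_features.csv")
features = ["missing_attachments", "consent_form_pages", "content_score",
            "n_study_sites", "vulnerable_population"]

# Fit a model predicting days to approval from application features.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(apps[features], apps["days_to_approval"])

# Rank features by how strongly they predict longer review times.
importances = pd.Series(model.feature_importances_, index=features)
print(importances.sort_values(ascending=False))
```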
We appreciate this feedback about the lack of detail in our statistical plan. To address this critique, we have edited the proposal to include further detail on the multivariable models we will use for analysis, as well as the approach to measuring the "metrics of success" that this reviewer outlines as #1-3.
The PI of this study has advanced training in biostatistics and a master's degree in clinical research. She has completed many studies using statistical approaches similar to those planned in this proposal, without the support of a biostatistician. Therefore, we believe we will be able to complete the analysis without additional staff support from a biostatistician.
This is a great idea, and it would be beneficial for improving turn-around times for Expedited applications as well.
You might consider using an A/B testing tool such as Google Website Optimizer or Optimizely to assess any changes to the iMedRIS interface. These tools make it very simple to run randomized, controlled web experiments. For example, to test an intervention to address the completion/attachment problem, you could set up an experiment where one third of users (or applications) use the current 'sign-off' page, one third see relevant warning messages before signing off, and one third use a generic, required checklist. The tools would randomize people or applications to the different versions and output a list of which users/applications received which version. Then you could use this list to analyze the various outcomes of interest across different versions. If you have enough volume, you can create multivariate experiments as well.
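A minimal sketch of how such a three-arm assignment and comparison might look, independent of any particular A/B testing tool; the arm names, file name, and columns are hypothetical.

```python
import hashlib
import pandas as pd
from scipy.stats import chi2_contingency

ARMS = ["current_signoff", "warning_messages", "required_checklist"]

def assign_arm(application_id: str) -> str:
    """Deterministically bucket an application into one of the three variants."""
    digest = int(hashlib.sha256(application_id.encode()).hexdigest(), 16)
    return ARMS[digest % len(ARMS)]

# After the experiment, compare return rates across arms with a chi-square test.
results = pd.read_csv("experiment_results.csv")  # columns assumed: arm, returned (0/1)
table = pd.crosstab(results["arm"], results["returned"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print("Chi-square p-value:", p)
```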
Can you give more detail about the Content Score? Would this be some sort of automated keyword algorithm screening before the application reaches the analysts? If the Content Score were not automated, I'm not sure how it would save time. Analysts already return applications that are not sufficiently detailed. If an application passes analyst screening, the requests for more detail come from the full committee. That said, having links to high quality responses would be enormously helpful.