CTSI Annual Pilot Awards to Improve the Conduct of Research

An Open Proposal Opportunity

Printable Proposal Content with Comments

Overcoming hurdles to patient-centered outcomes research


The Patient-Centered Outcomes Research Institute is charged with facilitating informed choice by funding initiatives that create condition-specific registries designed to provide the information patients need to understand risks and expected benefits in terms of meaningful outcomes. Traditionally, registries require hiring personnel, not directly involved in the care of patients, to distribute surveys, review charts, fill in forms, and enter data into an offsite registry. We employed a method to generate registry data for joint replacement and spine surgery that uses existing data. All hospitals, clinics and emergency departments use ICD-9 diagnosis and procedure codes and CPT codes to describe every patient encounter. Unlike the electronic health record, these data are stored in a uniform format. The integrated claims repository employs an ontology, a set of terms commonly used by patients and physicians to describe patient demographics, etiology, comorbidities, procedures and outcomes such as complications, readmission and reoperation rates. The ontology script, adaptable to any field of medicine, includes parameter names and the codes associated with those parameters. A program reads the script and generates the program that creates the integrated claims repository. The University of California Office of the President recently approved funding to create a spine registry by replicating the integrated claims repository at the other UC medical school campuses. The UCSF Committee on Human Research (CHR) approved the project, and a Notice of Intent to Rely (NOITR) is streamlining approval at the other campuses. The ontology has also been adapted to query national and statewide inpatient and ambulatory datasets.
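
To make the ontology-script approach concrete, the sketch below shows (in Python, purely for illustration) how a script of plain-language parameter names and associated billing codes might be read to generate cohort-flagging queries for a claims repository. The parameter names, ICD-9 codes, table layout, and function names are hypothetical placeholders, not the project's actual ontology or schema.

```python
# Illustrative sketch only: the parameter names, ICD-9 codes, and table
# layout below are hypothetical, not the project's actual ontology or schema.
import sqlite3

# An "ontology script": plain-language parameter names mapped to billing codes.
ONTOLOGY = {
    "lumbar_fusion":        {"type": "procedure",    "codes": ["81.06", "81.07", "81.08"]},
    "deep_wound_infection": {"type": "complication", "codes": ["998.59"]},
    "reoperation":          {"type": "outcome",      "codes": ["78.69", "80.09"]},
}

def build_cohort_query(parameter: str) -> str:
    """Generate the SQL that flags encounters matching one ontology parameter."""
    spec = ONTOLOGY[parameter]
    code_list = ", ".join(f"'{c}'" for c in spec["codes"])
    return (f"SELECT patient_id, encounter_id, 1 AS {parameter} "
            f"FROM billing_codes WHERE code IN ({code_list})")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE billing_codes (patient_id, encounter_id, code)")
    conn.execute("INSERT INTO billing_codes VALUES (1, 101, '81.07')")
    for name in ONTOLOGY:
        query = build_cohort_query(name)
        print(query)
        print(conn.execute(query).fetchall())
```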

The first aim of this proposal is to establish a CTSI clinical registry consultancy to help researchers create ontologies to define integrated claims repositories and condition-specific registries. This consultancy will enable collaborators in other clinical fields to create preliminary data for PCORI grants. Steps for generating these data: 1) Develop and validate ontology definitions through chart review and queries of the University HealthSystem Consortium database. 2) Define inclusion and exclusion criteria and parameters for the registry. 3) Gain CHR approval with NOITR. 4) Implement the integrated claims repository. 5) Create reports. 6) Replicate this work at other campuses and compile collaborative registry data. 7) Generate state- and nationwide statistics for the condition. Metrics of success are the number of publications and grant submissions arising from claims repositories established in other fields of medicine. This model will become self-sustaining through consultant fees and grant collaborations.

Regulatory issues being addressed include: Is the registry activity research or a quality improvement activity? What activities require patient consent and HIPAA authorization? What options are available for Business Associate Agreements? What are the concerns of the Privacy Office? What computer security controls are needed? How can data be added to or extracted from the electronic health record? What are the restrictions when using cost data? How do you protect sensitive patient information when using iPads or iPhones for data collection?

The second aim of this proposal is to work closely with the CHR and Regulatory and Knowledge Support Director to 1) gain approval to set up integrated claims repositories at the 5 UC campuses, and 2) develop libraries of CHR and consent form templates, business associate agreements, and other regulatory documents. By the end of the pilot, a library of template regulatory documents will be created that will enable student and post-graduate trainees to produce notable results during a well-planned research rotation.

Budget: $70,000. The budget will provide 20% effort to refine regulatory documents and establish the registry model beyond spine surgery, and support in the CHR and RKS offices to develop and maintain the document template library.

Collaborators: Elizabeth Boyd, John Heldens

Comments

I believe this registry could potentially be used as a resource database for my evidence-based sample-size calculation tool. Your excellent list of outcomes relevant to spine research demonstrates the breadth of data needed within a given field and the importance of collaboration with subject-matter experts such as yourself to identify these. My plan is to create access to a non-specialized database and supplement it as needed.

Thanks Joan, I can show you how to create a flexible script to select cases and define parameters from the National Inpatient Sample. Unfortunately, the only outcomes in this database are complications rather than the longitudinal outcomes available in Medicare, OSHPD or private payer claims. But you can stratify hospitals and physicians by case volume, type of facility and US region.

I like this idea. The Consultation Services (CS) Data Management Unit and Design Unit are already engaged in helping researchers query existing administrative and EMR-based databases to generate patient samples for research purposes; a "clinical registry consultant" could extend this service in a very useful way. Overcoming the regulatory hurdles and facilitating cross-campus registries could really make a difference in helping clinical researchers answer clinically relevant questions. CS would be happy to help develop a consultation unit or set of "niche" consultants to support this.

Thanks Mark. The Stata advice that you provided was instrumental in streamlining the code that we now use to automate the querying of the HCUP and decision support datasets.

I think the idea of establishing a clinical registry consultation service within Consultation Services is a good one that fits well with their overall team. RKS would be interested in partnering on the establishment of a library of consent form templates and other documents. I do think that this is probably more time-consuming than a 20% effort -- I would urge the PI to consider other aspects of completing the project -- for instance, a research assistant, the time of CHR staff to work on these templates, any data resources, etc.

This is a good idea, and the support would help HRPP standardize and streamline practices at UCSF. We should be able to leverage the work performed on this project to improve system-wide efforts. HRPP would be happy to collaborate on this.

Commenting is closed.

Action Research Program


Rationale

The Program in Implementation Science has created a series of courses within the Training in Clinical Research Program that are designed to meet the didactic training needs of fellows and junior faculty, but the program lacks an experiential component. To be effective, implementation research must accommodate the unique culture and context that define specific health care settings and communities. An “Action Research” Program, in which researchers partner with health care providers to improve their own practices and, in turn, their working environment, can help meet this need. Action research’s strength lies in its focus on generating solutions to practical problems and its ability to empower health care providers by engaging them in research and the subsequent development or implementation activities. We propose an action research program to achieve at least 3 broad goals:

 

1. To design innovative strategies to improve health care delivery or public/community health in real-time in San Francisco by capitalizing on the experience and skills of an interdisciplinary team of UCSF implementation scientists... the goal of which is to improve health and care delivery in targeted settings using strategies that are designed to be patient-centered and to reduce total health care costs.

 

2. To provide a hands-on training and implementation experience for students, residents and fellows by involving them in a real-world project through all stages of development... the goal of which is to attract/encourage students to pursue careers committed to improving health care delivery, system redesign and community health programs.

 

3. To design an action research program that can be self-sustaining through reimbursements from stakeholder organizations and delivery systems that directly benefit financially from the impact of the action research program.

 

Plan

We request pilot funds for 1 year to develop and implement this program at UCSF.  It will consist broadly of 8 steps that culminate with the initial launch of a delivery system intervention. 

Step 1. Select a Partner with a “Hot Spot”. Hot spots are problem areas in clinical operations, quality or health outcomes that are identified by stakeholders (such as payors, administrators, providers or patients) as priority areas for intervention. In our first cycle, we will focus on UCSF Medical Center ambulatory practices. To identify potential partners, we will invite practice chiefs and administrative directors of these practices to submit brief, 1-page descriptions of Hot Spots in their practice that they would like the action research program team to help them intervene on. Key criteria will include feasibility of measuring and intervening on the Hot Spot; engagement of clinic providers/staff; learner access to relevant data, staff, patients and providers; and the degree to which addressing the Hot Spot will help improve quality, reduce health care costs and enhance the patient experience.

Step 2.  Assemble Action Team.  Advertise volunteer/training opportunities to students and residents in medicine, pharmacy, nursing and dentistry.  Commitment of 2-4 hours per week for 4-6 months is required.  Limit to 6-8 students.  Identify key content/strategy experts from UCSF faculty and partners.

Step 3. Characterize Hot Spot with existing data sources.  Further characterize with administrative and/or medical record data to examine frequency, distribution, variability and predictors of the key process or outcome that represents the Hot Spot.

Step 4.  Conduct Literature Review of Hot Spot.  This will be performed by a combination of faculty and students.  Medical students will apply some of the principles taught in their EEBM (Epidemiology and Evidence Based Medicine) classes.

Step 5. Convene Launch Meeting and Design Workshop. This will be a 1 or 2 day retreat in which the partner clinic (and staff) are brought together with the faculty and students on the Action Team. The goal of the meeting will be to create a timetable with specific activities and benchmarks for completing the project in a 4-6 month time frame.

Step 6. Conduct & Analyze Formative Research.  Interviews and observations will be performed to gain a greater understanding of the patient, provider, staff and system-level factors that contribute to the Hot Spot being investigated.  With close guidance from faculty, students will be charged with collecting and analyzing this data.

Step 7. Create Alpha-Version of Intervention Approach.  Combine data inputs from the literature review, the quantitative analysis of administrative/EMR data, and the formative research findings to design an intervention approach. 

Step 8. Launch First Iteration (Beta-Version) of Intervention; Collect Process Data

 

Criteria and metrics for success

  1. Submission of Hot Spot proposals from multiple clinics—shows interest/need for service.
  2. Requests to participate from multiple students/residents—shows interest/appeal to trainees.
  3. Implementation of intervention that has a significant impact (>10-20% change from baseline) on Hot Spot measure.
  4. Explicit plan for continued monitoring and refinement of intervention by participating clinic.

 

Total Budget: $70,904

Salary support for principal faculty and part-time research assistant.

 

Collaborators: Ralph Gonzales (Medicine); Margaret Handley (Epidemiology & Biostatistics); Sara Ackerman (Medicine); Joshua Adler (Chief Medical Officer; UCSF Medical Center)

Comments

Excellent idea. I suggest emphasizing even further the multidisciplinary (or at least multi-training level) team aspect of the program. Also, you might consider bringing in an expert on team-building/group dynamics to the retreat to provide the trainees with more insight into this area, and enable them to be even better prepared for the implementation of the project.

Thx. We agree, and plan for this to be a critical activity during the retreat--highlighting multiple dimensions of "teams" as you note... different learners, different disciplines.

Re: Select a Partner with a “Hot Spot”. Two weeks ago I gave presentations in two different workshops in the Netherlands. Health delivery hot spots were a topic in one. An ongoing project that achieved some success instantiated alternative agent-based models of the situation and its unique features, including agents representing providers and patients. Validated simulations revealed emergent phenomena consistent with “hot-spot” issues. Have you considered such an approach?

This sounds very interesting. Do you have any materials or references you can share?

I'll send a reply by e-mail, Ralph

I would also be interested in these articles. As we have built an agent-based mobile framework, I am curious to see how other health professionals are using agent based simulations. I would think for a proposal like this one, the inclusion of industrial and systems engineering professionals (similar to USC's approach) would round out the team.

1) Is there any way to extend your timeline? 4-6 months leaves little room for the inevitable delays and changes in plans, or for collecting enough data to interpret on a run chart or by other means, and will also limit the data collection needed to address the ROI.

2) It would be ideal to really encourage small tests of change early and fast: changes that can be tested in one afternoon with 5 patients on one day, for example, given that the program is only 6 months or so long. In our 12 month program we found that even with a significant amount of prep (writing aim statements, clarifying measures, insisting on reporting dates for quarterly data submission, and incentives of up to $1000 for timely data submission at the end of a year), the teams had substantial delays in getting started for a multitude of reasons. With that, they often really only had 8 months or sometimes less (due to unanticipated environmental changes such as EHR rollout delays, problems with the quality of demographic or other data, etc.). So, guiding and encouraging them to start testing ASAP would substantially improve the chances of getting some substantive PDSAs rolling (and therefore enabling data collection) within the first month or two. This will take a great deal of practice coaching, which you have planned for. For us, it took far more time than we anticipated, but your experience may be different.

3) The challenge of what is QI (rapid cycle improvement) versus an intervention. I have a QI perspective and experience and am therefore less well-versed in the IDS language, so my lack of knowledge may be showing here. … We found that our practices did not do a single intervention to improve care; rather, they did a multitude of things such as in-reach, out-reach, motivational interviewing, staff training, etc. Many of these occurred simultaneously to make sure that processes worked to improve the measures. I think even our division's CRC improvement effort required a multitude of steps—do you see the “package of steps” as the intervention?

4) The practices may also find it helpful to include balancing measures—measures that may inadvertently get worse as improvements are implemented, for example patient satisfaction or wait times, staff satisfaction, etc.

5) We found it nearly impossible to do literature reviews for all the variations that individuals brought up—the most useful approach for us was to have a toolbox of QI interventions (such as reminders) and, in our case, equity tools (REAL data collection strategies, a 1-question screen for health literacy, etc.) to recommend, and then to focus on adapting these to the setting at hand. Also, even with our 20 sites doing largely preventive and chronic disease care, there were so many variations on a theme that we essentially had to do lit searches in real time and rarely had the leisure of being comprehensive. That said, I agree this would be ideal and perhaps your staffing plans will enable this in a way that we could not.

6) For the in-person convening, you may find it most helpful to give them some background as you’ve described, but to primarily focus on very pragmatic issues such as confirming measures, planning their first PDSA cycle, and working on team roles (who will do what, how they will work together, how to effectively communicate, etc.). It may even be ideal to do some of the 1:1 practice coaching here to get things jump-started.

Thanks so much Sunita! I've responded to each comment in numerical order below.

1. Our 4-6 month timeline is to implement the first small test of change. We hope to have established a monitoring/measurement system that will allow the clinical setting to continue to monitor and adapt. But I think you are right that we should continue to play a more active role, and perhaps engage students to participate for a longer period of time.

2. We totally agree, and this will be a key part of our selection process: making sure we pick projects that are amenable to a quick start-up.

3. I see QI and IDS as overlapping a lot. One of the key distinctions, I believe, is that the IDS perspective places a greater emphasis on using formative evaluation and observational research in the intervention design, although adapting the intervention along the way is completely appropriate. Another benefit of an IDS approach is in conducting a more comprehensive, theory-driven process evaluation as a strategy for continuous quality improvement (rather than typical QI PDSA cycles that rely primarily on one or two simple metrics). If we do this right, however, we hope to have a standardized approach that could be replicated by others.

4. Yes, this is a key part of our evaluation.

5. Funny you mention "toolbox" because we will also begin with a "translational toolbox" that we've developed through our courses. We hope to complete the literature review during the summer, before the formal launch.

6. Thanks... very pragmatic. Maybe we can get some help from the Center for Health Professions? :)

Terrific idea and one that could lead both to improvement in the care delivery system and to an increase in the cohort of health professionals trained in implementation science. I would recommend more explicit education of the Hot Spot owners in microsystems work as one of the deliverables of the project, so that the Hot Spot has the tools to continue to improve its environment on behalf of the patients even after the initial intervention is done. When a system understands the power of data-driven improvement, it begins to see an entire set of opportunities to enhance its delivery system. This type of strategy could be facilitated by leaving teams of students in the microsystem for a longer time period.

Thank you Catherine. We will work these suggestions into our program plan. In addition, perhaps we can talk off-line about ways to include a stronger collaboration with the medical school, perhaps through Pathways or the Academy? I think leaving teams of students for longer periods could help address Sunita's concern that a longer period will likely be necessary to get a full run chart going for feedback.

Commenting is closed.

Extending Direct-to-Participant Recruitment on the Internet with Effective Online Self-Screening Eligibility Surveys


Rationale: Developing efficient and effective methods to recruit and screen participants for clinical research remains a major challenge for clinical discovery.

Traditional recruitment methods often involve building up large multi-site research networks and implementing cumbersome and resource-intensive screening processes that incur high fixed startup costs and delay timetables. These challenges hinder the scale and scope of otherwise worthy research questions and delay the production and application of clinical evidence to guide pressing clinical practice and policy questions.

Newer approaches have been developed to harness the broad reach, focused geographic targeting, and powerful economies of scale and efficiencies that are possible using direct-to-participant Internet recruitment. For example, we have recently implemented two studies, the WebTIA Project (completed) and a recruitment site for a stem cell trial for stroke (http://stemcellstudy.ucsf.edu, ongoing), and the CTSI’s Participant Recruitment Service has been working on delivering similar methods as a core service to accelerate the conduct of clinical research.

However, current Internet-based direct-to-participant methods have often been focused on the first step of identifying and targeting potential participants and matching them to potential studies, but less on how to effectively manage the volume of inquiries that these Internet-based efforts can generate.

For our most recent study recruitment website, we used an extensive online prescreening and eligibility process with participant-facing questionnaires to guide potential participants to self-report key eligibility-related questions. An automated algorithm then classified responses as concordant, discordant, or neutral for study eligibility in a way that accounted for differences in health literacy and for uncertainty in certain types of self-reported health information. This system allowed us to extend the capacity of research coordinators by prioritizing and focusing enrollment efforts on those individuals who may be most likely to meet final eligibility criteria, and in ways that could be designed to prepopulate a study’s enrollment database after verification.
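
As an illustration of the kind of classification described above, the sketch below scores self-reported answers as concordant, discordant, or neutral, treating missing or uncertain answers as neutral so that they prompt coordinator follow-up rather than automatic exclusion. The question names, rules, and scoring policy are hypothetical; the study's actual algorithm is not shown here.

```python
# Illustrative sketch: question names, rules, and the three-way scoring policy
# are hypothetical, not the study's actual algorithm.

# Each rule maps a question to a predicate on the self-reported answer.
# Answers may be missing or uncertain ("not sure"); those should not exclude.
INCLUSION_RULES = {
    "age":          lambda a: a is not None and 18 <= a <= 75,
    "prior_stroke": lambda a: a is True,
}
EXCLUSION_RULES = {
    "on_anticoagulant": lambda a: a is True,
}

def classify(responses: dict) -> str:
    """Return 'concordant', 'discordant', or 'neutral' for study eligibility."""
    def answered(q):
        return responses.get(q) not in (None, "not sure")

    # Any confident answer that violates an inclusion criterion or triggers an
    # exclusion criterion makes the respondent discordant.
    for q, ok in INCLUSION_RULES.items():
        if answered(q) and not ok(responses[q]):
            return "discordant"
    for q, bad in EXCLUSION_RULES.items():
        if answered(q) and bad(responses[q]):
            return "discordant"

    # Every criterion confidently satisfied -> concordant; otherwise neutral
    # (missing or uncertain answers are left for coordinator follow-up).
    if (all(answered(q) and ok(responses[q]) for q, ok in INCLUSION_RULES.items())
            and all(answered(q) and not bad(responses[q]) for q, bad in EXCLUSION_RULES.items())):
        return "concordant"
    return "neutral"

print(classify({"age": 62, "prior_stroke": True, "on_anticoagulant": False}))  # concordant
print(classify({"age": 62, "prior_stroke": "not sure"}))                       # neutral
print(classify({"age": 15}))                                                   # discordant
```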

We believe that extending the Internet-based recruitment core currently under development through the CTSI Participant Recruitment Service with a robust and scalable pre-screening process would enhance the usefulness and potential of this recruitment method and would be the next key step toward the goal of integrating pre-screening and enrollment activities with post-enrollment online clinical research tools.

Plan:

We propose to extend the Participant Recruitment Service’s Internet-based recruitment core by adding a self-reported pre-screening platform for a pilot study that provides:

1)    The ability to classify pre-screening responses as concordant, discordant, or neutral for study eligibility in a way that accounts for differences in health literacy and for uncertainty in self-reported health information.

2)    Dynamic online reports and readouts for study coordinators to help prioritize potential for eligibility validation and enrollment.

3)    Integration with a post-enrollment research system to allow validated data elements to be moved into a study database.

Criteria and Metrics for Success:

Pilot Website Launch

# website hits

# primary prescreening forms initiated/completed

# who pass primary prescreening

# secondary prescreening forms initiated/completed

# enrolled from this source

% of overall recruitment goal

% prescreened/secondary screened/enrolled meeting diversity criteria

cost per enrollee
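
For reference, here is a minimal sketch of how these funnel metrics (completion rates, percent of recruitment goal, cost per enrollee) could be tallied; all counts and the recruitment goal below are made up for illustration.

```python
# Illustrative sketch with made-up counts: tallying the pilot's recruitment
# funnel (completion rates, percent of recruitment goal, cost per enrollee).
counts = {
    "website_hits": 12000,
    "primary_forms_initiated": 900,
    "primary_forms_completed": 620,
    "passed_primary": 210,
    "secondary_forms_completed": 150,
    "enrolled": 18,
}
recruitment_goal = 60     # hypothetical overall study target from this source
budget_spent = 33810.0    # proposal budget, used here only to illustrate the metric

print("primary form completion rate:",
      counts["primary_forms_completed"] / counts["primary_forms_initiated"])
print("percent of recruitment goal:", 100 * counts["enrolled"] / recruitment_goal)
print("cost per enrollee: $%.0f" % (budget_spent / counts["enrolled"]))
```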

Total Budget: $33,810

$33,810 (12 months) - 5% FTE for PI; 7.5% FTE of the PRS Technical Recruitment Specialist; 5% FTE of the Marketing and Outreach Coordinator; Website Design and Programming

Collaborators:

Anthony Kim, Neurology

Nariman Nasser, Director CTSI Participant Recruitment Service

Comments

Focusing on developing an expertise around Internet recruitment is interesting. Key for this proposal will be to focus on how it complements and extends the current suite of services and tools being developed by the UCSF Participant Recruitment Service (PRS). Nariman Nasser is the director of that program.

Yes, I just met with her this morning and we are working on ways to collaborate moving forward.

Another interesting example is: https://foxtrialfinder.michaeljfox.org/

Have you completed research to determine whether specific populations (e.g., minorities or elderly participants) are completing the online eligibility form at the same frequency as other populations? Also, here's another site that was developed to match breast cancer patients with trials: www.breastcancertrials.org. You might consider having a link on the eligibility page that would allow participants to search for similar studies if they do not think they are eligible for a given study.

Yes, concerns about disparities and barriers to access are certainly relevant here-- e.g. there is data suggesting that elderly Latina women are least likely to use the Internet for health information and there are other data that suggest that the so-called digital divide may be narrowing somewhat. Anecdotally, in our prior study of stroke and TIA, the average age of participants was 59 years old (range 23-88). In any case, thank you for providing the link and the suggestion. The clinical trials infrastructure in oncology seems comparatively well developed and the site provides an excellent example of what is possible in this area.

excellent idea. would be very helpful!

An interesting and worthwhile idea. The power of this would be to show that it is generalizable to other studies, i.e., that other studies can define "an extensive online prescreening and eligibility process with participant-facing questionnaires to guide potential participants to self-report key eligibility-related questions" and that the automated algorithm can classify the results as you describe above. To show this, I would suggest focusing the metrics on demonstrating interest and applicability to at least one other research study.

This seems like an important issue and an innovative approach. I've done some work on trials recruitment for therapeutic trials in cancer, and in that realm the provider-patient relationship is key, as providers tend to recruit their own patients. So the system is inefficient and clunky, but I'm not sure how else it could work. For that reason, this proposal would be even more compelling if you discussed what kinds of clinical research face this problem of internet "over-interest" and demonstrated that websites are generating significant traction in recruitment. An earlier comment mentioned breastcancertrials.org. Given the site's longevity, I assume it is generating traction, but I'm curious to know more about which studies would benefit from this kind of approach.

Yes, the patterns of recruitment may be subject-matter specific to a certain extent. For instance, we have had a large response to a Phase I/II clinical trial of a modified stem cell therapy for chronic stroke (stemcellstudy.ucsf.edu), a condition with a large pool of potential participants, few alternative therapies, and great (but as yet unrealized) interest in cell-based therapies among the lay public. Many interested participants do not meet the narrow eligibility criteria and others are interested in cell-based clinical trials for other clinical indications. Here, over three months, we have had 27,000 hits, over 500 people have completed an extensive online eligibility questionnaire, and about 50 have screened as preliminarily eligible.

Worthwhile idea and nice addition to our ongoing efforts to ramp up research participant recruitment services.

Commenting is closed.

RandomizationCentral.org: an open-source, web-based randomization portal to support RCTs for the global research community


Rationale: An effective randomization system is crucial to the design, conduct, and analysis of any randomized controlled trial (RCT). Despite the apparent ease of flipping a coin (either literally or with a computer algorithm), designing a reliable randomization system that meets the design, statistical and logistical requirements for real-world use in an RCT is a major challenge. Multicenter critical care studies, for example, typically require extremely rapid access to a centralized system (24 hours/day) for randomization of time-sensitive interventions, while maintaining blinding and balanced recruitment between treatment arms and entry criteria strata. Large-scale NIH- and industry-funded trials can invest in designing, building and hosting web-based randomization systems that accomplish these goals, but the costs of doing this are prohibitive for smaller trials with more constrained budgets. The design and implementation of a state-of-the-art randomization system that could be made available to the global research community at very low cost would facilitate the conduct of properly designed and blinded RCTs and remove an important barrier to the conduct of interventional research.
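
To indicate the core allocation logic such a system would need, here is a minimal sketch of stratified permuted-block randomization in which the next assignment is concealed until the moment a participant is randomized. The arms, block size, strata, and class names are illustrative choices only, not a proposed design.

```python
# Illustrative sketch: stratified permuted-block randomization with the next
# assignment concealed until a participant is actually randomized. The arms,
# block size, and strata below are illustrative choices, not a proposed design.
import random
from collections import defaultdict

ARMS = ["treatment", "control"]
BLOCK_SIZE = 4  # must be a multiple of len(ARMS)

class StratifiedBlockRandomizer:
    def __init__(self, seed: int = 42) -> None:
        self._rng = random.Random(seed)
        self._queues = defaultdict(list)  # stratum -> remaining assignments in the open block

    def randomize(self, stratum: str) -> str:
        """Reveal the next assignment for this stratum; nothing is exposed to the
        caller before this call, which is what provides allocation concealment."""
        queue = self._queues[stratum]
        if not queue:  # open a fresh permuted block for this stratum
            block = ARMS * (BLOCK_SIZE // len(ARMS))
            self._rng.shuffle(block)
            queue.extend(block)
        return queue.pop(0)

r = StratifiedBlockRandomizer()
for site, severity in [("UCSF", "mild"), ("UCSF", "severe"), ("UCSF", "mild")]:
    print(site, severity, r.randomize(f"{site}|{severity}"))
```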

One commenter gave us two web sites that were already set up to do what we had proposed. Thanks for the referrals. We were not aware of them before and had not found them in our search.

These are indeed usable platforms for randomizing patients in clinical trials. I have tested them out and they provide just the system that we were proposing to create. The prices are not unreasonable either: one system (http://www.randomizer.at) is better for smaller studies, with a start-up cost of $600 for the first 50 enrollees and then $5 per additional enrollee; the other system (http://www.randomize.net/) has a flat fee of $2500 that is independent of the size of the enrollment (good for large studies).

It would be a good idea if the CTSI could post these web addresses on the CTSI web site resource page to alert investigators that there are inexpensive randomizer sites available for their studies.

We will withdraw our proposal.

Comments

I think this is a GREAT idea! Clearly provides for a need encountered in many clinical trials, and should be self-sustaining with the right support model (charging a modest fee to users). However, are you sure this is not already available? I happened to come across the following two sites which, at least superficially, may be providing a similar service: http://www.randomize.net/?gclid=CNHJjP3_sK4CFeYERQod1COsRw http://www.randomizer.at/

I think this would be a very useful resource. As Director of the UCSF Dental Data Coordinating Center with trials in Boston and Southern California, the DDCC and I have had to develop remote randomization for person- and cluster-randomized trials -- actually we have concealment of the randomization schedule until consent is obtained and eligibility criteria met. We have worked with Forte Research System (formerly PercipEnz) to modify the OnCore CTMS software to accommodate our randomization schedules. A colleague from graduate school (Scott Hamilton) started a Bay Area company DynaRand (http://www.dynarand.com/) in 1998 to provide web-based dynamic randomization (it was bought by UnitedBioSource Corp). (randomizer.org is fine for generating a one-time randomization schedule, but it does not conceal the list until the proper time to randomize a participant. randomize.net charges $CDN2500 per trial.) Existing companies/sites seem to target pharma and be more expensive than (some) UCSF investigators can afford. The DDCC staff could help with delineating specifications, programming, and/or testing. Features such as permuted blocks, stratification, and concealment would be essential to include.

fyi, Forte Research Systems just (at the OnSemble Conference) announced that OnCore CTMS software ver12.0 will have real randomization capability and can use permuted blocks and stratification.

I see this as a great plug-in to REDCap (http://project-redcap.org/). Have you looked into the feasibility of making your portal compatible with REDCap? I think that you'd want to talk with Scott Robertson (Scott.Robertson@ucsf.edu). He could bring the idea up with the consortium and help to figure out if integration would be possible. It would increase your user base exponentially given the number of projects that are run using the REDCap platform.

Randomization support is clearly needed. While there are commercial offerings out there, your support for randomization design and simulation is added value. Ideally, this type of support would be integrated with clinical trial management systems, but short of that, I agree with exploring being a plug-in to REDCap. This would also be a great addition to our proposed Digital Clinical Research Center (http://accelerate.ucsf.edu/forums/improveresearch/idea/7054).

Adding access to something like RandomizationCentral.org is very important to the research infrastructure at UCSF. Multi-center study designs with small sample sizes need randomization to be centralized. The ability to produce block and stratified randomization designs would be an important feature of this randomization system and may not be available in current randomization websites/programs. This is a great idea!

Commenting is closed.

Internet Based Solution for Enhancing Patient Recruitment


Rationale:

Research that requires real-time recruitment in the emergency department/acute care setting is labor intensive and costly. Relying on clinicians leads to low recruitment: clinicians in busy acute care settings often find it challenging to screen and enroll patients for research studies while maintaining clinical priorities. This is further complicated when several different studies are ongoing (often led by research faculty from outside departments), and variable work hours make it difficult to keep clinicians current on the various eligibility criteria. While most emergency departments use the traditional approach of having a research assistant who screens for eligibility based on physician/resident notification, this approach has inherent problems: the cost and/or availability of research assistant coverage 24/7, variability among physicians in their notification of research assistants about eligible subjects, and the resulting patient ineligibility due to lack of timely screening. We believe that some of these drawbacks of the traditional approach could be overcome by use of an automated electronic screening tool.

Plan: 

We propose to examine the utility of an automated electronic clinical research screening technology and to assess its comparative effectiveness against the traditional system of research associates. This technology is currently installed in the Emergency Department of the University of California, San Francisco (UCSF) Medical Center. We will utilize this technology in the setting of an ongoing NINDS clinical trial.

 

We propose to accomplish this in a pilot study by

(1) Demonstrating the available features of the automated tool using an ongoing clinical trial, the NINDS funded POINT trial.

(2) Comparing the performance characteristics of the automated tool to a traditional patient recruitment method for screening and enrollment.

 

Our specific aims are to:

Specific Aim 1: To identify and document study subject characteristics that will be used in developing screening criteria for the automated tool (Phase 1)

Specific Aim 2: To demonstrate the performance characteristics and comparative effectiveness of an automated real-time screening tool using an ongoing clinical trial in the emergency department at UCSF (Phase 2)

Hypothesis: Automated screening will be more sensitive and less specific than the traditional method of using research assistants for the identification of subjects.

Specific Aim 3: To demonstrate that the operation of real-time clinical research eligibility screening meets HIPAA privacy requirements and does not compromise PHI.

Hypothesis: The technologies and architecture that are employed in developing the Patient Locate service will be sufficient to ensure that HIPAA privacy requirements can be met.

Study Design: In the first three months of the study, we will study the baseline characteristics and presenting symptoms of eligible study subjects as well as patients enrolled in POINT. We will use this information to develop screening criteria, which will be tested in the second phase of the study. During the second phase, the KDH system will be activated to identify study-eligible patients, and the effectiveness of the tool will be compared with the traditional method of recruitment.

 

Metrics: Sensitivity, specificity, positive and negative predictive value, and likelihood ratios will be calculated to determine the screening performance of the automated screening tool system for eligibility. The performance characteristics will be compared to the research assistant-supported system currently in use at UCSF. 95% confidence intervals will also be calculated to assess the precision of the point estimates.
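
A minimal sketch of these performance calculations follows, using normal-approximation 95% confidence intervals and made-up 2x2 counts; the actual analysis may use exact or other interval methods.

```python
# Illustrative sketch: 2x2 screening performance with normal-approximation
# 95% confidence intervals. The counts below are made up for demonstration;
# the actual analysis may use exact or other interval methods.
import math

def ci_proportion(k: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

def screening_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp/fp/fn/tn: automated tool result vs. adjudicated (true) eligibility."""
    sens = ci_proportion(tp, tp + fn)
    spec = ci_proportion(tn, tn + fp)
    ppv = ci_proportion(tp, tp + fp)
    npv = ci_proportion(tn, tn + fn)
    return {
        "sensitivity (est, lo, hi)": sens,
        "specificity (est, lo, hi)": spec,
        "PPV (est, lo, hi)": ppv,
        "NPV (est, lo, hi)": npv,
        "LR+": sens[0] / (1 - spec[0]),
        "LR-": (1 - sens[0]) / spec[0],
    }

# Hypothetical pilot counts comparing the automated tool to final eligibility.
print(screening_performance(tp=38, fp=22, fn=4, tn=136))
```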

Anticipated result and impact on trial enrollment procedures:

Upon completion of this study, the results will help our research team and the CTSI better understand the relative merits of the automated real-time patient recruiting system.

Budget and Justification: Salary support for the investigators and a research assistant for a 12 month period ($60,000)

Collaborators: John Stein (Co-PI), Department of Emergency Medicine, Claude Hemphill, Department of Neurology and Dan Carnese, KDH systems

 

Comments

I think this is a great idea. I've thought about this type of 'solution' as well, and one thing that you might want to look at is the 'fatigue' factor in clinical trials (and whether your solution will decrease recruitment fatigue). There's not much out there in the literature on site fatigue. When planning a multicenter trial this needs to be incorporated in the anticipated recruitment rate and decreasing fatigue would be an attribute that would increase the value of your product.

If/when you begin exploring such methods, I'd be happy to provide agent-based modeling and simulation feedback. Unfortunately, even though the benefits of such efforts can be great, a lot of work (and for some, a considerable learning curve) is required up front.

Thanks Anthony. I would like to learn more about the modeling and simulation feedback techniques and will contact you.

Thanks for the suggestion. The pilot will be single site study but the next phase will involve more than one center and we will study the "fatigue factor".

I like your idea. If you set up a fairly general process and show it's effective, it's likely that it could be rolled out to settings beyond the ED.

Is the patient recruitment tool already available at the UCSF ED? Could you provide more specifics regarding how it works? If a UCSF investigator wanted to collaborate, how would he/she input the specifics for their trial into the tool? What aspects of the tool do you propose to examine? Are you currently enrolling trials? A description of these trials might better explain how the tool works. How do you think the tool might be improved? What are the current challenges to HIPAA faced by the tool? Do you propose a method where patients can be shown recruitment flyers for current ongoing research at UCSF so they may contact researchers who are conducting trials where the patient may be qualified?

I will email my responses and will copy my collaborator John Stein who has been using this tool for his study.

outstanding idea!

Automated eligibility screening has been a "holy grail" for clinical research informatics for some time. There are a number of both academic and commercial systems for this. I assume this is the commercial KDH system that you propose to evaluate? Depending on the system particulars, and ancillary factors like workflow integration, the performance results may have limited generalizability. This evaluation of a commercial system could be a proposal to the mHealth/Digital Health RAP awards. http://accelerate.ucsf.edu/funding/rap

I think this is a great idea, especially for time-sensitive enrollment in the ED. There is considerable detail on the parameters that will be compared between the automated system and the RA-based system, but more detail on the actual pilot study would be helpful here to assess which eligibility criteria would be amenable to this system and which would not. Is the proposal to use this for a NETT trial? Which one?

Thanks Anthony. We will be using this for the POINT trial. The proposal has been revised to include information about the design and conduct of the pilot study.

This is of high interest to the Medical Center, SOM and researchers across UCSF. We plan on making investments in the coming year to ensure that a real-time recruitment process and technologies are in place to improve just-in-time recruitment. Complemented by the new Recruitment Management System being developed, we envision this technical eco-system being as advanced as UCSF's standing. A study that would help shape these upcoming investments would be a welcome input and I think very worthwhile.

Fantastic idea, timely, and one with a tangible ROI.

This is a very timely proposal that will help us in making decisions on how to proceed to make real-time patient recruitment a reality for UCSF investigators.

Commenting is closed.

Developing and testing mechanism-based translational hypotheses


Rationale

Problem: Because of multiscale complexity, conceptual, mechanism-based, in vitro-to-in vivo mapping models are hard to falsify and can be flawed in ways that may not be obvious until challenged experimentally, which can be costly. When the results of such experiments are equivocal or not supportive, the information needed to revise mechanistic hypotheses may be lacking. Translation of in vitro phenomena to in vivo counterparts requires a mapping model. Currently, mechanism-agnostic correlation models are common. When translation is based on hypotheses about underlying mechanisms, the mapping model is almost always conceptual, often described in prose, supported by diagrams and, occasionally, mathematical models. Solution: Explicitly instantiate mechanistic mapping models in silico and explore their feasibility. We envision concrete, computational, observable-in-action, experimentally challengeable mapping models that will evolve to become executable knowledge embodiments showing what does and does not translate under specific conditions.

The concept is illustrated in a recent paper (PMID: 20406856), “Tracing multiscale mechanisms of drug disposition in normal and diseased livers.” We used, improved, and revalidated a multiscale in silico liver (ISL). An ISL is an example of a new class of computational models. We posited that changes in micromechanistic details from normal to diseased ISLs may have disease-causing, hepatic counterparts. Together, the ISL and in silico methods represent an important step toward unraveling the complex influences of disease on drug disposition. We demonstrated translating—morphing—a validated “normal” ISL into a “diseased” ISL. Although ISLs are abstract software constructs, that transformation stands as a concrete, mechanistic hypothesis about what does and does not translate from one to the other. The methods, which are novel, are designed to be extensible to whole organisms and, eventually, patients. Being able to transform one validated ISL into another is important: it is evidence that the approach can be used to explore and challenge ideas about translation, making translational research more concrete.

We envision “translational models” showing how, for example, in silico micromechanistic details are morphed between analogues of in vitro rat and human hepatocyte cultures, and in time between in vitro computational analogues and human analogues.  The morphing process will show what must be added and what is lost in translation.  A long-range goal for such morphings will be to provide an easily understood, mechanistic interpretation of how cause-effect relationships resulting from an experimental intervention in a wet-lab model are believed to manifest (or not) in a human analogue.  The expectation is that those relationships will have real world counterparts. 

Plan.  An essential precondition for achieving the above vision is to have two different, biologically related models (e.g., in vitro & in vivo) that have independently achieved validation targets.  We will focus on the above-cited ISLs and improved versions of in silico hepatocyte (ISH) cultures (PMID: 21768275): we will focus on translation of phenomena measured in hepatocyte cultures to corresponding, location dependent phenomena within hepatic lobules in rat and human livers.  We have designed the models so that hepatocytes can be exchanged: ISHs that have achieved validation targets under different culture conditions (ongoing during year one) can be plugged into an ISL (after its hepatocytes are removed).  The iterative refinement needed to reestablish whole-liver validation targets (including intralobular zonation) will provide a concrete theory of mechanistic attributes gained and lost in translation. 
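
The ISL/ISH code base itself is not reproduced here; the sketch below is only a toy illustration (in Python, with invented names and numbers) of the plug-compatible design described in the plan, in which the same hepatocyte objects are used first in a simulated culture context and then in a simulated lobule context, so that what is gained or lost in translation becomes explicit.

```python
# Illustrative toy only: this is not the ISL/ISH code base, just a minimal
# example of plug-compatible hepatocyte components shared between a simulated
# culture context and a simulated lobule context.
import random

class Hepatocyte:
    """Quasi-autonomous component; the identical object works in either context."""
    def __init__(self, clearance_prob: float) -> None:
        self.clearance_prob = clearance_prob  # chance of metabolizing drug per encounter

    def encounter(self, rng: random.Random) -> bool:
        return rng.random() < self.clearance_prob

class CultureContext:
    """In vitro analogue: each drug object meets one randomly chosen, well-mixed cell."""
    def __init__(self, cells):
        self.cells = cells

    def fraction_cleared(self, n_drug: int = 10000, seed: int = 1) -> float:
        rng = random.Random(seed)
        return sum(rng.choice(self.cells).encounter(rng) for _ in range(n_drug)) / n_drug

class LobuleContext:
    """In vivo analogue: each drug object traverses the cells in order (portal to
    central vein), so spatial organization changes what translates from the culture."""
    def __init__(self, cells):
        self.cells = cells  # ordered along the sinusoid

    def fraction_cleared(self, n_drug: int = 10000, seed: int = 1) -> float:
        rng = random.Random(seed)
        return sum(any(c.encounter(rng) for c in self.cells) for _ in range(n_drug)) / n_drug

cells = [Hepatocyte(clearance_prob=0.05) for _ in range(20)]  # same components, two contexts
print("culture:", CultureContext(cells).fraction_cleared())
print("lobule: ", LobuleContext(cells).fraction_cleared())
```

In this toy version the only thing that changes between contexts is the spatial arrangement of the same components; the real analogues would involve far richer micromechanisms and validation targets.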

Criteria and metrics for success.  1) Instantiation of quasi-autonomous ISH objects and documentation (in a conference paper, e.g., at the Winter Simulation Conference) that they can be transferred from one context (a simulated culture) to another (a simulated lobule) while retaining all the mechanistic features validated in vitro.  2) A draft manuscript demonstrating that an ISL with these new hepatocytes can also achieve whole-liver phenomena, such as zonation of enzyme induction; the draft will be produced by year’s end for submission within the following four months.

Approximate cost and brief justification.  Achieving 1 & 2 above will require several hundred cycles of iterative refinement (1 postdoc @ $48K working full time with Dr. Hunt) of in silico components (described in PMID: 20406856).  Simulation, publication, and meeting costs bring the total to $58K. 

Collaborators. Jackie Maher, UCSF Liver Center

Comments

The utility of this project/technology for assessment of liver targets in this ISH model: can you comment on the generalizability/transferability of the work on this technology to other programs?

The approach (our methods and model components) is designed to generalize to any other wet-lab mammalian system used in research, and even to the same patient interacting with different forms of the same treatment (a small, in-progress project with the FDA); that is reflected in the first presentation of these ideas in 2008 (see PMCID: PMC3041517). It proved necessary and essential to first show that the ideas can work on specific, concrete problems, thus the liver focus. In addition to liver, we have used exactly the same methods to gain mechanistic insight into leukocyte behaviors, epithelial cell morphogenesis, and in vitro models of cancer. To ensure that the methods are generalizable and transferable, it is necessary to be able to assemble complicated mechanisms (systems) out of simpler, intuitive, easily understood, validated components that can be plugged together in different ways and in different contexts, and to make it easy to alter them for use in different contexts. Given that, it becomes feasible to explore what must be added to and taken away from an in vitro epithelial cell culture model to enable it to validate against several in vivo attributes of breast duct epithelial cells. We are not there yet (thus the application), but we are striving to move our models closer and closer to that vision.

Adding to the reply above: in [Drug Devel Res 72(2): 153–161, 2011] (URL below) we describe a vision in which analogues of different experimental systems, used by different programs, can exist within the same framework so that new knowledge about one system can be used to (automatically) update other analogues that use similar components. Clearly that is a distant vision, but there is a reasonably clear path for getting there. http://onlinelibrary.wiley.com/doi/10.1002/ddr.20412/full

- Can you explain in more detail the following terms: ‘mapping models’ and ‘mechanism-agnostic correlation models’?
- It was not clear to me what exactly the project will include and how it will be done. I understand that the ISH and ISL models are already established. Will you be changing culture conditions and observing how the ISH changes?
- I think the entire vision is great, but I was lacking practical details on how the entire process flows.

I apologize for using specialized terms without adding definitions: I was striving to say too much within the 1-page limit.

Mapping models (MM): the simplest MMs are scaling models. A MM is used to relate phenomena observed and measured in one experimental system (say a zebrafish) to those measured in a different system (e.g., a mouse). It is rare in experimental biology to have 1:1 quantitative correspondence for different biological phenomena. In order for phenomena (say from a “proof-of-mechanism” experiment in cell cultures) to translate to patients, there must be a mapping model, a translational explanation (of course, there is always a risk that there is no MM: the cells in vitro simply do not use the same mechanisms as hypothesized counterparts in vivo). Often there is no MM; rather, there is a hypothesis (that there is a mechanistic correspondence) that is short on details. A MM would begin to add details. When I revise the proposal, I’ll either provide a definition or avoid using MM.

Mechanism-agnostic correlation models: correlations that make no claim about mechanistic linkage between the correlated phenomena. I can avoid that phrase too.

Re: “It was not clear to me what the exact project will include ... Will you be changing culture conditions and observe how ISH would change?” Given the 1-page limit, I avoided details. When I revise the proposal, I’ll include statements, even though brief. Yes, the ISH and ISL models are already established, but they are works-in-progress. A goal will be to validate the ISH using data from at least two different in vitro experiments, replace the current ISL hepatocytes, and then predict whole-liver phenomena for which rat data are available. If our predictions are “off,” we will alter the ISH to achieve validation. How those hepatocyte mechanisms need to be altered (within the ISL) provides a testable hypothesis for how the in vitro hepatocyte mechanisms needed to be altered in order for the results to translate to in vivo.

A simpler example would be the clearance of a drug measured in hepatocyte cultures. How do we translate that measure of clearance, first to rats, and then to humans? For an already-studied drug we can discover a morphing of ISH clearance mechanisms that enables them, when placed within the ISL, to predict hepatic clearance in rats.

Commenting is closed.

A Clinical Research Toolkit for Surgeons in Low Resource Environments


Rationale

Less than 8% of orthopaedic research originates in low and middle income countries (LMICs), despite the fact that 95% of deaths from road traffic accidents occur in these countries.  Global partners of the Institute for Global Orthopaedics and Traumatology (IGOT) have asked for assistance in building the capacity to perform clinical research.  This is important because:

  • Research fuels advocacy, drives policy decisions and guides the allocation of resources in medicine.
  • The orthopaedic surgeons who address the burden of musculoskeletal disease in LMICs struggle with a lack of research resources, literature irrelevant to their practice and professional isolation. 
  • Researchers in resource-poor settings are essential partners to UCSF's goal to 'promote health worldwide'. 

This project will address these issues on three fronts:

  • Design a digital toolkit of research resources leveraging existing resources such as CTSI’s online research tools, the Department of Orthopaedic Surgery’s Research Bootcamp, and EpiInfo 7.
  • Select a platform that will function using the most commonly available technological resources, and integrate mentoring and support between experts and learners.
  • Evaluate program effectiveness and sustainability based on partner feedback and marginal costs, respectively. This evaluation will help to guide content, platform and operational improvements, before the project is made widely available in other settings (other LMIC partners as well as resource-poor and isolated locations in the U.S.).

Our goal in creating this toolkit is to build the ability of trainees and surgeons anywhere to plan, implement and publish relevant research.  We are piloting this program in the most resource-constrained environments in order to allow future scaling of the toolkit in sites that have access to more resources.  Our long-term goal is to establish a generalizable suite of tools akin to the resources available for students through the Khan Academy or MIT OpenCourseWare.

 

IGOT has completed interviews that explore the barriers to research in LMICs, and has used the subsequent analysis to tailor content that addresses these barriers. IGOT has also completed research related to the optimal format for content delivery in LMICs.  These initial steps lay the foundation to start work immediately on content and platform development.

 

Project Plan

  1. Select a format for content delivery based on available technologies that provides access to the widest range of users.
  2. Prove content validity of the toolkit’s curriculum through consultation with expert clinical researchers in orthopaedic surgery and with academic orthopaedic surgeons in LMICs (modified Delphi analysis).
  3. Create simple web-based tools such as webcasts or podcasts for each of the topics, a research question formulation tool and templates for creating proposals and budgets that will live on the IGOT website (www.globalorthopaedics.org).
  4. Create a research proposal forum that allows posting, feedback and partnering on projects.
  5. Pilot the toolkit in Lahore, Pakistan and Kathmandu, Nepal.
  6. Iterate the program based on feedback from the charter sites for future program expansion.
  7. Determine the overall program cost and marginal site cost to determine the expected return-on-investment of the program in the future.

Success Metrics

  • Improvement in the methodological quality of thesis proposals after program participation, rated with the Journal of Bone and Joint Surgery (American) levels of evidence (http://www.jbjs.org/public/instructionsauthors.aspx#LevelsEvidence) and based on an analysis of current and 2013 resident thesis projects
  • Website Analytics
    • New and Repeat Page Views
    • Length of View
  • Use of Toolkit – mandatory at sites after first class with resident and faculty champion established at each site.
  • User Satisfaction – average participant satisfaction of 75% based on a post-program survey
  • Research proposals entered into global orthopaedic trial registry by each pilot site
  • Secure one pilot site in the Bay Area for implementation in the second year of the program

 

Cost and Justification

Estimated cost: $45,000, composed of labor to analyze data held at the Institute for Global Orthopaedics and Traumatology (20%), content generation and web development for the toolkit (76%), and project management (4%).  One of this project’s strengths is that it leverages existing content and will utilize a cost-effective digital delivery method.  This will allow the program to become sustainable and have a lasting impact after the initial investment in the program.

 

Collaborators

Syed Mohammed Awais (Dean, King Edward Medical School and Mayo Hospital, Lahore, Pakistan), Ashok Banskota (Founder/Surgeon Rehabilitation Centre for Disabled Children, Kathmandu, Nepal), Amber Caldwell (Director of Development, IGOT, UCSF), John Collins (Educational Studies, University of British Columbia), Richard Coughlin (Founder/Director, IGOT, UCSF), Richard Gosselin (Co-Founder/Co-Director, IGOT, UCSF), Harry Jergesen (Co-Founder/Co-Director, IGOT, UCSF), Saam Morshed (Clinical Trialist/Orthopaedic Surgeon, UCSF), Theodore Miclau III (President Orthopaedic Research Society/Founder and Director Orthopaedic Trauma Institute), Aenor Sawyer (Paediatric Orthopaedic Surgeon), Daniel Sonshine (IGOT research Fellow), Paul Tannenbaum (Web Developer, IGOT, UCSF), Angelique Slade Shantz (Project Manager, Orthopaedic Trauma Institute, UCSF)

Comments

Jesse, I think there is great potential for this project to assist with needed clinical research in rural and low-income clinics in the US, as well as in developing countries. Piloting it in the challenging locations you have selected will ensure the development of an investigative mechanism that will work well in a wide variety of settings.

Thanks for the suggestion Aenor. We have the goal of creating something that is relevant to those interested in using and producing research anywhere in the world. The toolkit won't compromise on rigor, but will try to identify ways that low-resource settings in both the developing world and in North America can still ask and answer relevant clinical questions. We added the goal of identifying a Bay Area partner within the first year to the metrics of success to reflect our hope to make our project more generalizable.

Overall, this is an important project. The forum specifically designed to facilitate collaboration and to be a space where researchers can share experiences, ask questions and expect to find some technical answers is interesting and certainly in line with what visiting surgeons have requested. I imagine that this toolkit and portal could serve as a remote mentorship device in a mini-grant program for novice researchers amongst IGOT's global partners. I am curious how the compilation of resources will be maintained and monitored. Who will respond to questions directed at IGOT or OTI?

Yes, we are aiming to create a place where people connect and learn from one another. In this first iteration there will be a capacity-building focus. Our partners have expressed a need for learning modules for their centers (developing world academic orthopaedic surgery programs). The toolkit is intended to make the IGOT website a 'go-to' place for surgeons wanting to learn more about research methodology. After the training program is established and we have a cohort of graduates we plan on transitioning our focus towards enabling collaboration between researchers. The success of the toolkit will provide us with the credibility to fund that expansion of our scope. I think that second phase would be the perfect environment for the 'micro-grant' program. We know how little it would cost to hire research coordinators for our partners based on a survey we just completed. We already have plans to develop an 'angel network' of donors to support the program. As far as responding to questions from participants, we have built mentoring into our proposed toolkit. We have 7 collaborators with significant research experience ready to act as mentors. We are going to try to make that relationship real-time through Skype video calls, with email and blogs as back-up. As the program expands we will add more mentors outside UCSF, potentially even graduates of the program who want to 'give back'.

It's exciting to see a proposal that links orthopedic research and global health. The lack of trauma transport and services at medical centers is a substantial problem, and this proposal attempts to address it from an educational perspective. It is already strong but could be better linked with the growing number of distance learning projects in CTSI. The GHP has focused initially on research administrative staff, but other projects involving research training (Jeff Martin) could be used in this project. Also, the CTSI GHP is closely related to UCSF GHS, which should also be receptive as it develops more educational programs. Does general surgery have projects that could be linked? They would, I expect, be helpful for training in the soft tissue injuries that occur along with bone injuries in traffic trauma.

Thanks for your comments Paul. I have contacted Jeff Martin and we have started to figure out how best to leverage the existing content and tools available through CTSI/TICR. There was a thought that our mentors could lead virtual small group sessions within the curriculum of TICR to specifically address barriers faced by low-resource sites. I have also discussed this idea with Jaime Sepulveda of Global Health Sciences, and he is looking at our proposal. I also contacted one of our collaborators in general surgery, Rochelle Dicker, who is interested in Global Health. She has assured me that there would be interest and support in the creation of the toolkit.

Commenting is closed.

Improving Capacity for Translational Research in Tuberculosis

Type: 
Proposal Status: 

Rationale: The purpose of this application is to strengthen the tuberculosis (TB) translational research capacity at UCSF/SFGH by developing a BSL-3 laboratory in which work with human TB can be safely carried out in tissue culture and in small animal models.  Such work will enable us to extend observations that we have made in molecular epidemiological studies of TB in San Francisco, permit new lab-based investigation in the context of existing, funded clinical studies in San Francisco and abroad, and facilitate additional studies in animal models of infection. Having such a facility will enable us to examine immunological responses to clinical strains of M. tuberculosis (M.tb) isolated from patients in San Francisco and in other areas across the US and globally, as well as the pathogenicity and virulence of these strains, and to study the impact of co-infections with M.tb and other pathogens (e.g., HIV). We have a superb multidisciplinary group of investigators working in the US as well as in Kenya, Tanzania, Zimbabwe, Uganda, and Vietnam in studies of the transmission and pathogenesis of TB. In a number of these studies, observations have been made that suggest differences among strains and interactions between specific strains and hosts of specific race/ethnicity. Examination of these findings in lab-based studies and/or in an animal model will, if they are confirmed, enable back translation to human studies and inform approaches to vaccine and drug development. The availability of a new BSL-3 will facilitate the work of investigators working across many disciplines and enable questions to be addressed that can only be examined in animal models. Given the international scope of existing research activities, this effort will also mesh with efforts at UCSF and SFGH to enhance global health. In recent efforts to recruit an investigator focused on TB immunology, the major limiting factor has been the lack of BSL-3 space. Thus, having such a facility is a critical element in filling a key deficiency in our TB research program.

Plan: Although a BSL-3 laboratory exists in Building 100 at SFGH, this laboratory is heavily utilized, inadequately designed, and not amenable to the maintenance of M. tb infected animals. Other alternatives have been carefully explored but deemed not feasible for reasons of experimental protocol, safety, and/or cost. We propose to upgrade an existing BSL-2 laboratory in Building 3, a seismically sound building, to BSL-3 status. Based on input from the construction firm that built the existing space, the total cost of the build-out of the BSL3 is estimated to be $1.2M. There is widespread enthusiasm and broad support to create such a facility, as indicated by commitments that total $950,000 from a consortium of funders including the Department of Medicine, the Medical Service at SFGH, the SFGH Dean's Office, and the Ireland Fund, but there remains a $250,000 deficit. We are asking for $100,000 to help leverage these commitments. Given our success in raising the needed funds, we are confident that the additional $150,000 will be secured, even if it means obtaining a loan that would be paid back by users over time.

Criteria and Metrics for Success: The single metric will be the hiring of one independent TB immunology investigator by 2013. The Division of Experimental Medicine will commit the resources for at least one start-up package to hire faculty members focused on the immunology of human TB. Each package would include at least $1M in start-up funds and the provision of BSL1 and BSL2 space ensuring that, if the BSL-3 is built, the criterion of success will be met.

Approximate Cost and Justification: We are requesting $100,000, an amount that will be combined with additional resources that have already been committed (see above). These aggregated funds will be used to purchase or construct: 1) animal holding space with self-contained cage ventilated racks for mice or guinea pigs; 2) several biosafety cabinets for the safe handling of M.tb in tissue culture; 3) autoclave alcove to house the autoclave and the CO2 gas manifold; 4) clean vestibule (entrance and exit); 5) dirty vestibule (entrance to both of the laboratories and the animal holding room); and 6) ultralow freezer for storage of bacterial stocks and infected tissues. The space will have the mechanical system required for a completely independent ventilation system.

Faculty who would use this facility: Hopewell, Havlir, Kato-Maeda, Catamanchi, Davis, Metcalfe, Nahid, Everett, Cox, McCune, Nixon

Comments

It's surprising to hear that the TB research at SFGH is happening in marginally adequate facilities. San Francisco has the highest incidence of TB of any city in the United States, and most of these cases are seen and treated at SFGH. Given the power of animal models and the increasing attention being paid to human immunology by the NIH, it seems that having a state-of-the-art facility at such a location would be a good investment in UCSF and San Francisco's futures.

The tuberculosis researchers at San Francisco General Hospital have worldwide renown. SFGH has a tremendous history of providing care for patients with tuberculosis and advancing knowledge in treatment, prevention, diagnosis, and epidemiology. As the Hospital Epidemiologist and an infectious diseases physician at SFGH, I am keenly interested in seeing tuberculosis research at SFGH proceed in an appropriate setting as described in the proposal.

SFGH and its infectious disease researchers and clinicians create a key and unique interdisciplinary team capable of making SFGH a leader in TB research. The addition of a new BSL-3 facility will not only expand the available space to work with this serious public health threat, but will enable basic research using small animals. The inclusion of TB basic research in small animals is necessary to recruit TB basic immunologists and will without a doubt expand the role of SFGH in basic and clinical TB research.

This is an incredibly important resource for the SFGH/UCSF research community.

This would support a still growing community of TB investigation and could be helpful in recruiting new faculty members as well. Are there any facilities that approach BSL3 level containment at any of the other campus sites? I suspect not and if that's true, could investigators not located at SFGH collaborate in work in this new space? And help support it?

The establishment of a BSL3 facility for TB research would be a fantastic and logical addition to the existing human immunology infrastructure at SFGH and would help to bridge the gap between translational scientists and the excellent clinical TB researchers at SFGH. Several SFGH researchers have established valuable international cohort studies that are ripe for immunologic collaborations. Such a facility would extend UCSF's existing strengths in human translational immunology and plug a major hole in our research infrastructure, by enabling tissue culture and small animal model studies of TB. This facility would be used by many existing UCSF faculty and its creation would also be a critical step in facilitating recruitment of a faculty member with expertise in human TB immunology.

A much needed BSL3 that will strengthen and add a new dimension to TB research. CTSI participation in this endeavor will leverage the money already committed for clinical and translational investigation.

I am a junior faculty member involved in human studies of tuberculosis. In particular, many of my studies are focused on development and validation of immune-based diagnostics. A new BSL-3 facility to support animal studies would help: 1) recruit a world-class TB immunologist to UCSF/SFGH, and 2) test hypotheses generated from human studies in animal models and vice versa. Both of these factors are critical to the future success of TB-related research at SFGH and would directly benefit the ongoing studies of a number of faculty members.

I was disappointed to learn that despite the presence of TB experts and significant TB research being conducted at SFGH, there is no BSL-3 laboratory for TB research. This award will be a valuable first step towards addressing this glaring deficiency.

The history of TB is intertwined with that of San Francisco General Hospital and there is a strong core of researchers studying TB who are based at SFGH. The establishment of a BSL-3 facility at SFGH would expand the research capacity at SFGH and would help attract additional scientists and grant funding.

UCSF is internationally recognized as one of the top academic centers with expertise in TB management, education and research. Moreover, the pool of TB researchers at UCSF and SFGH is perhaps only paralleled by the group at Hopkins. Not having a BSL3 at SFGH is a major hindrance to the university's ability to expand TB and HIV/TB related activities. I would anticipate that the cost of a BSL3 will more than adequately be covered by the many extramural funds that the group would be able to successfully compete for.

It is surprising that SFGH, which is positioned at a convergence of clinical and academic research, currently lacks the facilities to perform such experimentation. The limitations of the building 100 BSL-3 create an unnecessary bottleneck for tuberculosis research that forces an excessive dependence upon outside sources to complete experiments. This investment will both alleviate a weakness and reinforce UCSF’s and SFGH’s prominence as premier institutions for tuberculosis management by providing the necessary tools to remain competitive and attract world class researchers and clinicians.

The new BSL3 facility will overcome a critical obstacle to building the Mtb program at SFGH and UCSF. As Chief of the Pulmonary Division at SFGH, I can attest to years of inability to recruit a TB immunologist, mainly due to the inability to study Mtb-infected mice. Now, a high level recruitment is in process, only dependent on solving this problem. Much of the money has been secured and this $100k would be a key final investment.

As a junior faculty member focused on clinical-translational work in tuberculosis (TB), I offer a strong endorsement of the stated needs for a well-equipped BSL-3 facility to allow UCSF to expand its capabilities in TB research. My current nascent studies of MTB host-pathogen interactions in vivo would greatly benefit from enhanced local expertise and interest in studying these interactions in model systems, research that is infeasible without the proposed facility. TB affects 3 billion people worldwide, yet the understanding of the complex immunology related to this infection remains rudimentary. Having the facilities at UCSF to carry out experiments to understand this process is critical to advancing our knowledge of this area, so that we can design new and better diagnostics, drugs, and vaccines.

I agree with my colleagues' comments above. I wish to add that I have plans to collaborate with another UCSF colleague who studies TB, and our work would greatly benefit from having more accessible facilities. I am very hopeful that these facilities become available, especially given all of the multidisciplinary research that could benefit from them.

Tuberculosis remains a major global health problem; one of the components of the global Stop TB Strategy is research including new diagnostics, drugs and vaccines. This CTSI proposal would help strengthen UCSF researchers' capacity to contribute new knowledge to address this ancient disease.

This is a very important proposal for enhancing research in lung disease and global health at UCSF. There is a growing community of scientists studying tuberculosis at San Francisco General with the potential to recruit another senior international leader in this field. There is also great potential to tap into the growing strength on the campus in translational infectious disease research. Currently, scientists at SFGH are severely limited by the absence of an accessible BSL3 facility. There is broad support for this proposal from pulmonary scientists at UCSF. Funding from the CTSI now would be critical for raising the last increment of funding needed to get this project completed.

I am a junior faculty member in the Division of Pulmonary and Critical Care Medicine and undertake clinical and translational research focused on transmission and pathogenesis of M.tuberculosis. A safe, improved BSL-3 facility to undertake sophisticated M.tuberculosis research will be critical in strengthening our current capacity to carry out cutting edge research. In addition, such a facility would greatly reinforce our ability to recruit important faculty, such as an immunologist working at the human-M.tuberculosis interface. This proposal has my complete support.

This is an important proposal and if funded would address a major infrastructure and resource need at SFGH and UCSF. The global phylogenetic diversity of MTB was recognized in large part as a result of population-based molecular epidemiology and genotyping studies conducted at UCSF/SFGH. The implications of MTB strain diversity for global health such as for transmission, pathogenesis, treatment response, diagnostics, vaccine efficacy etc are largely unknown. With an adequate BSL3 at SFGH and with local collaborators, we would be able to augment our population-based studies by evaluating the relevant host-pathogen interactions involved. Until then, the only way we can move this important TB research agenda forward is to seek collaborations with investigators at other universities where adequate facilities exist (OHSU, Hopkins, Berkeley, Stanford), asking them to consider conducting research that we should be doing at SFGH.

Availability of the BSL3 has been critical in allowing me to pursue TB diagnostics research here at SFGH. The clinical laboratory does not have the space nor ability to support the kind of work that I am doing. It has been incredibly helpful to have the BSL3 available and continued availability will be a key part of sustaining active and cutting edge TB research at SFGH.

UCSF and SFGH have established themselves as leaders in the field of TB research. Investment in infrastructure like this lab is absolutely critical for UCSF to maintain its position in this field.

Commenting is closed.

“Expedited” Expedited CHR submission and approval for Chart Review Research (Category 5)

Type: 
Proposal Status: 

Title: “Expedited” Expedited CHR submission and approval for Chart Review Research (Category 5)

 

Rationale: Clinical researchers often perform retrospective, chart-review studies as a relatively inexpensive and quick way of determining which clinical questions are worth pursuing before engaging in more expensive, time-consuming prospective studies.

 

Chart-review studies also play a critical role in generating new scientific hypotheses, and their role in the hierarchy of research methodologies is indispensable.  For junior researchers, who often do not have much in the way of funding or protected time to do research, chart-review studies provide an essential way to gain early research experience.

 

Chart review studies qualify for expedited CHR review under federal regulations (45 CFR 46.110, Category 5).  Currently, however, the time and effort required to go through the process of expedited approval is substantial and serves as a deterrent to performing this type of research at UCSF.  The current UCSF CHR form for an expedited chart review (Category 5) study is 20 pages long and demonstrably cumbersome to fill out.  An estimated 80% of the forms have to be returned to the researcher by the CHR for revision, and usually there are multiple iterations between the researcher and the CHR to correct the form before approval is given.  This back-and-forth is time-consuming and frustrating for both researchers and CHR staff, and slows the pace of research at UCSF.  While the CHR makes every effort to perform these reviews in an expedited fashion, the median time-to-approval (across all expedited categories) is 32 days, which is significant for a researcher who may have only a month or two of protected time to perform research.

 

 

We propose to simplify and truly streamline the process for obtaining “expedited” CHR approval at UCSF for research projects that involve no subject contact and are confined strictly to chart review (currently Category 5 for expedited CHR review).

 

It is noteworthy that in some jurisdictions, such as the United Kingdom, non-subject-contact chart review conducted as an audit to reflect on one’s own practice is encouraged by the physicians’ licensing authority as an adjunct to continuing practice rights. Reflection on practice is an important part of continuing practice improvement. What is proposed here is, therefore, completely consistent with comparable expectations around such activities in other places.

 

Plan:

 

In a direct collaborative effort between UCSF clinical researchers from various specialties and the UCSF Human Research Protection Program staff, we plan to develop a highly streamlined online application form for applying for CHR approval for Category 5 chart review studies. With this unique collaboration, this project is “shovel-ready.”

 

The questions on the form will be clearly written and focus-group tested to ensure they are easily understood even by novice researchers.  This is to ensure that the form can be completed quickly, optimally in less than fifteen minutes, and accurately, so that iterations wherein the CHR has to return the proposal to the researcher for corrections are minimized.  The streamlined form will also be easier for the CHR to review, which should allow for a truly expedited review process and approval.  Internally, the CHR plans to have these forms handled by a consistent small group of analysts so that they can build familiarity with the forms, troubleshoot efficiently, and keep approval times down. The form will also have clear and simple screening questions to ensure that such proposals truly do fall under the Category 5 designation.

 

This proposal will satisfy the federal Common Rule requirement that human research receive IRB approval prior to initiation, as well as the HIPAA requirement that a privacy board, in this case the CHR, review the study proposal prior to the use of Protected Health Information for research.

 

As part of this project we will develop new reports within iMedRis that can provide the average and median number of iterations (submission rounds) required for approved research, with filters for application type, submission type, type of research, type of funding, time period, investigator, IRB panel, and HRPP analyst.  We will also enhance other reports of time-to-approval and add a measurement of the time it takes investigators to prepare a CHR submission.  With these reports we will be able to compare data from current Expedited Category 5 approvals to data from the new form we generate, and measure improvements in several ways.  Note that these reports will be permanent and can be used to provide UCSF with better data on the IRB submission process for all research, including clinical trials.
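To make the intended reporting concrete, the sketch below shows one way such summaries could be computed from an exported submission log. It is a minimal illustration only: the file name, column names (e.g., iterations, application_type), and grouping dimensions are assumptions for illustration, not the actual iMedRis data model.

```python
import pandas as pd

# Hypothetical export of submission data; column names are illustrative
# assumptions, not the actual iMedRis schema.
submissions = pd.read_csv("chr_submissions.csv")  # one row per application

# Days from initial submission to final approval for each application.
submissions["days_to_approval"] = (
    pd.to_datetime(submissions["approval_date"])
    - pd.to_datetime(submissions["submission_date"])
).dt.days

# Filterable summary: mean and median iterations and time-to-approval,
# grouped by a subset of the report dimensions described above.
report = (
    submissions
    .groupby(["application_type", "submission_type", "funding_type"])
    .agg(
        n=("study_id", "count"),
        mean_iterations=("iterations", "mean"),
        median_iterations=("iterations", "median"),
        median_days=("days_to_approval", "median"),
    )
    .reset_index()
)
print(report)
```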

 

 

Criteria and metrics for success:

Our proposal will have succeeded if at the end of the one-year implementation period we have:

 

1. Developed a concise online form that has been focus-group tested on clinical researchers and approved for use by the UCSF CHR for the purpose stated above.

 

2. Demonstrated in our focus-group testing that completing the concise form takes no more than fifteen minutes (on average), even for a novice junior researcher (e.g., a resident or fellow).

 

3. Achieved a ≥50% reduction in the number of iterations required to get CHR approval for these studies compared with the current baseline for Category 5 expedited review studies (baseline iteration data to be generated by the new reports within iMedRis). This would also have the indirect benefit of freeing up CHR staff time to focus on other categories of applications, hopefully improving approval times overall.

 

4. Achieved a ≥50% reduction in time to approval (measured in days) compared with the current baseline for Category 5 expedited review studies (baseline time-to-approval data to be generated by the new reports within iMedRis).

 

5. Generated a report system that can be used by the UCSF CHR to provide better tracking data on the IRB submission process for all research done at UCSF, including clinical trials.

 

Total Budget: $16,574

Salary support for the project leader and research coordinator; two stipends for clinical research collaborators; printing costs and supplies; and focus-group participant compensation.

 

 Collaborators:

 

Amy Gelfand, MD (PI): Dr. Gelfand is a child neurologist and a clinical researcher in pediatric headache. She will oversee the project, including coordinating with the HRPP staff and clinical researcher collaborators, running the monthly research team meetings, and submitting study progress reports to CTSI.

 

John Heldens, HRPP Collaborator: As the director of the Human Research Protection Program at UCSF, Mr. Heldens will oversee the iMedRis and HRPP staff and supervise the form changes.  He will vet the new mechanism of CHR documentation with Privacy, Legal, HIMS, and IT security.

 

Vanessa Jacoby, MD: Dr. Jacoby is a clinical researcher in OB/GYN at UCSF. She will attend monthly team meetings and give feedback on the form questions as they are being written and implemented.  She is familiar with the needs of clinical researchers who work with vulnerable populations (e.g., pregnant women), and who perform surgical procedures as well as outpatient and inpatient clinical care.

 

Vanja Douglas, MD: Dr. Douglas is an adult neurologist whose clinical research focuses on inpatient neurology.  He will attend monthly team meetings and give feedback on the form questions as they are being written and implemented. He is also a journal editor and will ensure that the new submission and approval process meets journal editors' need for assurance that the studies they publish received appropriate CHR approval.

Comments

Are these studies expedited category 5 (existing data)?

Yes, for existing data that was collected as part of clinical care.

Yesterday, a hand surgeon at another UC campus shared his IRB experience setting up a study to test the hypothesis that patients with artificial fingernails had an increased risk of developing surgical site infection. He developed a consent form, submitted an IRB application, and currently is consenting patients to participate in a research study. This is an example of a quality improvement study. All he had to do was add a question to his intake form indicating whether the patient had artificial nails. When he accumulated a sufficient number of cases, he could then apply to his IRB for a category 5 exemption to perform the study. I invite you and your colleagues to collaborate in the "Barriers to Patient Centered Outcomes Research" initiative. You could write a registry protocol that defines the parameters for the condition you will study. If the focus is improving the quality of patient care, your registry will be denied CHR review. This means that the CHR has reviewed your protocol and deemed it a quality improvement activity, not research. Once your registry is established and you have specific research questions, the amount of work required to submit an expedited review in iMedRis is not much more than writing a research protocol.

Great idea! This should really improve idea-->study translation times and remove an important barrier at relatively low ethical risk. I assume you'd have some screening questions to weed out studies where patients were going to be contacted, sensitive issues discussed with other physicians, etc?

This is a very good idea that would help reduce administrative barriers to this type of research. There will be details to be worked out, especially with regard to the privacy laws, but I think it is ultimately a very doable -- and important -- project. I would like to see RKS partner on this project, helping to move it through the various regulatory oversight groups that will need to be brought into the process.

This is a terrific example of how a relatively small change can have a big impact. This would reduce time spent preparing and reviewing CHR applications, freeing faculty and students to spend more time on research, and freeing CHR resources to make improvements in other areas. This is very feasible and we would be thrilled to help.

Commenting is closed.

Evidence-based Inputs for Sample Size Calculations: A Web-based Application

Type: 
Proposal Status: 

Rationale:  Inaccurate sample size estimation leads to research studies that enroll too few or too many study participants. The former can result in failure to demonstrate anticipated effects and the latter to excessive costs, due to overuse of research participants and personnel time. While there are several important components of a good sample size calculation – including a compelling research question, an appropriate outcome variable, and an efficient study design – here we focus on improving the accuracy of the quantitative inputs. Examples of such parameters include:  

  • the variability of responses within and between eligible participants (accounting for correlations among replicate measurements at a given time and/or at different times),  
  • the mean response under standard-of-care conditions, and
  • the prevalence of the disease under study.

At present, the primary sources of parameter values are pilot studies (often too small to provide reliable estimates) and published studies (which often lack the needed values). Estimates of parameter values that are not evidence-based can introduce a large amount of error into the sample-size calculation, making it precise but inaccurate. The proposed web-based application would allow a clinician and biostatistician to identify required values in real time during a consulting visit, revising the query as needed to ensure a viable and cost-efficient study.  Given the enormous number of medical studies launched every year, access to accurate inputs to sample size calculations could vastly reduce waste of valuable resources.
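To make concrete how these inputs drive the calculation, the sketch below uses a standard normal-approximation formula for a two-sample comparison of means, with the outcome standard deviation as the evidence-based input. The numbers, the assumed power convention, and the function name are illustrative assumptions only, not values from any of the databases discussed here.

```python
import math
from scipy import stats

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample comparison of means,
    given the outcome standard deviation (sd) and the difference
    to be detected (delta). Normal approximation, for illustration."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Illustrative values only: a small pilot suggests SD = 8, whereas a
# large clinical database yields SD = 11 for the same outcome in the
# eligible population; the required sample size nearly doubles.
print(n_per_arm(sd=8, delta=5))   # ~41 per arm
print(n_per_arm(sd=11, delta=5))  # ~76 per arm
```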

 

Plan: The two key ingredients to obtaining a wide range of evidence-based inputs for sample size calculations are the availability of electronic sources of current health information and a convenient means of retrieving relevant information from the databases. During pilot funding, we will demonstrate feasibility of (1) identifying relevant existing databases; (2) quantifying improved accuracy of sample-size calculations based on evidence-based parameters (as defined here) relative to those based on other sources of inputs; and (3) producing version 1 of a user-friendly web-based interface to access, summarize, and output evidence-based parameters in useful formats. Beyond pilot funding, additional databases could be tapped, and the breadth of access, summarization, and output could be expanded.

Aim 1:  Identify Databases:

  • Ideal databases should provide longitudinally tracked individual-level health outcomes related to a wide range of conditions. Two excellent examples appear to be (1) databases included in the UCSF CTSI Large Dataset Inventory, accessible at no cost through http://accelerate.ucsf.edu/research/celdac (example: the National Health Interview Survey), and (2) the Kaiser healthcare database. [Must: Identify key personnel who could provide access, engage their interest, and establish an agreement to collaborate.]
  • With clinical partners: Establish a set of commonly expected requests that could be used to evaluate candidate databases on quality and availability of desired data.  
  • With database partners and computer experts: Optimize access to and manual retrieval of information.

Aim 2:  Proof of Concept  

  • Identify commonly used outcome variables and alternative choices through review of the published literature and discussion with clinical colleagues.
  • Identify a selection of recently published high-profile studies that reported the values of key study design parameters. For each: (1)  Document the eligibility criteria and parameter values that were used in the study design. (2) Document corresponding values drawn from the selected databases. (3) Examine the effect on the sample size of differences between the two sets of parameter values.
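As an illustration of item (3) in the list above, the sketch below compares the sample size implied by a published control-arm event rate with the sample size implied by a database-derived rate for the same eligibility criteria, and reports the relative change. All values and the normal-approximation formula are hypothetical illustrations, not results.

```python
import math
from scipy import stats

def n_per_arm_props(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions
    (normal approximation, equal allocation)."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical inputs: control-arm rate from a published study (0.30)
# versus the rate observed in a clinical database (0.40) for the same
# eligibility criteria, each with a 10-point absolute reduction.
n_published = n_per_arm_props(p1=0.30, p2=0.20)
n_database = n_per_arm_props(p1=0.40, p2=0.30)
print(n_published, n_database,
      f"{100 * (n_database - n_published) / n_published:.0f}% change")
```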

Aim 3: Create a web-based user interface, a manual of procedures with useful examples, and useful export files.

  • User experience:

          o   Make retrieval fully dynamic: any outcome stored in the database; retrieval tailored to major eligibility criteria (e.g., age and diagnosis) and design criteria (e.g., frequency of assessment per patient).

          o   Use drop-down menus, populated by database-specific data dictionaries, to ensure accurate spelling.

          o   Produce results that are easy for the investigator or biostatistician to manipulate. The software will process all (or a random sample of) eligible values to generate rates (see example appended), means, variances, and correlations, as needed (see the sketch following this list).

          o   Generate downloadable documentation of queries for user’s later access.

  • Developer experience:

         o   As queries of a database are made, record queries (including search criteria) and results.

         o   Identify unavailable measures; examine reasons. Automate or prompt search for an alternative database and/or measure.
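As referenced in the "Produce results" item above, the following is a minimal sketch of how the retrieval layer could summarize eligible records into sample-size inputs, including within- and between-patient components of variability. The file name, table layout (one row per visit with patient_id, age, dx, and value columns), and eligibility filter are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical long-format extract: one row per patient visit.
visits = pd.read_csv("eligible_visits.csv")  # patient_id, age, dx, value

# Apply user-specified eligibility criteria (illustrative values).
eligible = visits[(visits["age"].between(18, 65)) & (visits["dx"] == "asthma")]

# Summaries of the kind requested by the interface: overall mean and
# variance, plus within- and between-patient components of variability.
overall_mean = eligible["value"].mean()
overall_var = eligible["value"].var()
patient_means = eligible.groupby("patient_id")["value"].mean()
between_var = patient_means.var()
within_var = eligible.groupby("patient_id")["value"].var().mean()

# Crude intraclass correlation among repeated measurements.
icc = between_var / (between_var + within_var)
print(overall_mean, overall_var, icc)
```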

 

Criteria and metrics for success:

Aim 1:  Compare the proposed database resources in terms of features, data quality, and costs: 

  • Are Common Data Elements used?
  • Is a data dictionary available for sorting and browsing to find measures available?
  • Does resource have the requested measures?
  • How current are the measures?

Aim 2:  For a range of recently published studies that reported the values of key study design parameters, evaluate the effect on the sample-size calculation of differences between reported parameter values and values retrieved from our proposed database resource(s).  Hypothesis:  Evidence-based values will modify the sample size calculation by at least 10%.   

Aim 3:  Compare the proposed database resource(s) in terms of ease of access and value of information gained: 

  • Poll users to obtain feedback on ease of use and value of information retrieved, by database.  
  • Summarize measures queried by frequency. 
  • Characterize variation among databases with respect to comprehensiveness and ease of use.
  • Estimate personnel costs associated with building database access.

 

Cost:  We seek funding to access at least two large free databases by leveraging the CTSI Large Database Inventory [Aim 1], to quantify the benefit of evidence-based parameter values on sample-size calculations [Aim 2], and to plan the computational work in fine detail [Aim 3].  Salary support $100,000 (12 months) for principal faculty and staff.

 

Collaborators: Joan Hilton (Epidemiology & Biostatistics) will lead the biostatistical aspects of the “App” development.  Tracy van Nunnery (Medicine) will lead a team of computer experts who will create the database interfaces. Kirsten Bibbins-Domingo (Medicine) will serve as lead clinical collaborator.

 

Example interface and output:  Dr. Hilton and Mr. Nunnery have ongoing research collaborations that began in 2007. Mr. Nunnery and his team created HERO, the electronic medical record system used at Ward 86 of SFGH, and thus are well acquainted with HIPAA requirements. As an example of their work, users (clinicians) can query HERO to obtain the distribution of any patient characteristic captured by clinicians, limited to user-specified search criteria. The web-based interface for retrieving demographic data is shown below, with the date fields displayed (upper image).  The distributions of demographic characteristics were exported to an Excel spreadsheet (lower image).
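For readers without access to the screenshots referenced above, the sketch below indicates the general shape of such a criteria-limited distribution query and Excel export. The field names, date filter, and output path are hypothetical and do not describe the actual HERO interface.

```python
import pandas as pd

def demographic_distribution(records: pd.DataFrame, start, end, field: str) -> pd.DataFrame:
    """Count patients by a demographic field, limited to visits in a date window."""
    window = records[(records["visit_date"] >= start) & (records["visit_date"] <= end)]
    # Keep one row per patient so repeat visitors are not double-counted.
    per_patient = window.drop_duplicates(subset="patient_id")
    return per_patient[field].value_counts().rename_axis(field).reset_index(name="count")

# Hypothetical usage: export the sex distribution for 2011 visits to Excel.
records = pd.read_csv("visits.csv", parse_dates=["visit_date"])
dist = demographic_distribution(records, "2011-01-01", "2011-12-31", "sex")
dist.to_excel("demographics_2011.xlsx", index=False)  # requires openpyxl
```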

Comments

Sounds like a good idea - Consultation Services has been interested in developing a systematic, formulaic and more automated approach to power calculations and we would be interested in working with you on this. However, will you be able to obtain good information on distributions and prevalence on enough measurements and within enough subgroups of the population from the datasets you're proposing to be broadly useful? It seems like the possibilities for what might be needed are endless, and the dataset resources limited.

Thanks for your support! I agree that finding a great database up front would make the most efficient use of our time; selecting the first database to work with is Aim 1. Input from my clinical collaborator will help ensure the initial resource is useful in a very wide range of UCSF-type research applications. Your input is most welcome too. Got database suggestions???

The quantities to be estimated may have some potential usefulness for study design, but I unfortunately believe that the proposal as envisioned would be building on an untenable foundation--the conventional power-based approach to choosing and/or justifying sample size. I don’t think that facilitating power calculations will provide any actual scientific benefit (sorry Mark), although there may be practical benefits. The proposal’s rationale presumes a meaningful pre-study definition of “too few” participants, which does not actually exist and is what I have called the “threshold myth” (see http://www.ctspedia.org/do/view/CTSpedia/SampleSizeFlaws#The_threshold_myth); it also seems to presume that poor cost efficiency can be avoided without consideration of the actual costs, which I believe is unrealistic. In practice, investigators must consider cost and feasibility in choosing sample size, and calculations are often just window dressing for choices based on other considerations (see http://www.ctspedia.org/do/view/CTSpedia/SampleSizeFlaws#Erosion_of_scie...). I therefore believe that this would have little impact on actual sample size choices. To the extent that it would influence the rare cases where choices are based only on the conventional power-based approach, those choices will not necessarily be improved, because the arbitrary goal of 80% power has no valid justification and is not necessarily optimal. More accurately estimating something that is not meaningful will not make it more meaningful. In addition, a very influential input is the difference to be “detected”, and this apparently will not be addressed. This is often the hardest to specify because 1) uncertainty about it is presumably large enough to warrant the study being proposed, and 2) the theoretical basis for choosing it is unclear (see http://www.ctspedia.org/do/view/CTSpedia/SampleSizeFlaws#Inherent_inaccu...). A practical approach to this problem is to calculate what difference will produce 80% power for the proposed sample size (which was chosen for other reasons). Perhaps this could be integrated into what is envisioned. This would be of practical utility for justifying sample sizes in proposals using power-based conventions. I think this could have some value for making conventional calculations (the window dressing) easier and making them seem more objective and accurate. The proposed estimates could also be used in the approach described at http://www.ctspedia.org/do/view/CTSpedia/SampleSizeFlaws#Sensitivity_ana..., instead of in conventional calculations. This approach, however, is not widely used, and the proposal does not seem geared toward any such use. Regarding Aim 2, inaccuracy of estimated inputs has been empirically documented already, and 10% impact on sample size is much too precise a standard (see, e.g., Vickers AJ: Underpowering in randomized trials reporting a sample size calculation. Journal of Clinical Epidemiology 2003, 56:717-720; more than half of high-profile RCTs had >2-fold inaccuracy). I would focus instead on documenting improvement in accuracy from use of the proposed database(s) and subset selection tools, along with the level of accuracy that they can attain. This seems like the key proof of concept that is needed (regardless of what use the estimates would be put to). I would undertake this initial validation before doing any work on usability and a user interface, because such initial results might be discouraging.

Peter, I appreciate that you too care about the quality of sample-size calculations and have gone so far as to explore the topic through methodological research, as documented in the online links you provide here. Thank you for your comments on my proposal. I number and respond to each of your three points below.

POINT 1. "Investigators must consider cost and feasibility in choosing sample size."

RESPONSE: Whereas the text above roundly criticizes the “conventional power-based approach” and “the arbitrary goal of 80% power,” neither of these concepts appears in my proposal. I appreciate the nuances of sample-size calculations and have proposed that several important issues are better handled through collaboration between an investigator and a biostatistician. In addition, I identify overenrollment of participants as a problem, not just underenrollment. In its introduction, your website lists “three crucial flaws in [the] standard approach,” the second of which is, “relies strongly on inputs that generally cannot be accurately specified.” I believe our proposal directly addresses this very problem. We welcome you to join this project, if you wish to also pursue practical solutions.

POINT 2. "The difference to be detected."

RESPONSE: I agree that the difference to be detected is a very important quantity. Because of its special status, I did not address it (or other “important components of a good sample size calculation – including a compelling research question, an appropriate outcome variable, and an efficient study design”). Some of my thoughts on this parameter follow.

  • As an example, in the setting of a randomized controlled trial designed to detect the superiority of a new therapy over a standard therapy, the value of interest may be the minimum clinically important difference (MCID) between arms – a value large enough to warrant inclusion of the new therapy among therapeutic options available to clinicians and their patients.
  • For a new therapy, person-level data will not exist in the databases I propose to tap, thus this is not a source of information about the MCID. Guidance from other resources could be obtained, such as from the literature published to date and preliminary studies. The combined efforts of a clinician and biostatistician during a consulting visit could be very fruitful in honing the value selected.
  • Importantly, the difference to be detected is a function of an outcome variable, the variation of which can be estimated from the databases I propose to tap. In turn, the variance of the difference can be estimated, accounting for correlation among repeated measurements within individuals. At present investigators have very little information to guide selection of this quantity; availability of evidence-based inputs would be a substantial leap forward.

POINT 3. "Regarding Aim 2."

RESPONSE: I proposed a simple measure of the accuracy of evidence-based inputs: comparison of the sample size calculated using conventional inputs (e.g., based on literature search) with that using the proposed inputs.

  • To address point 3.3, the proposed inputs rely on the quality of the database; hence we propose to select the database first.
  • To address point 3.1, improvement in accuracy also relies on the quality of the conventional inputs and will vary across medical specialties, outcome variables, relevant patient populations, and many other features of the data. Accuracy gains should be higher in less charted territories. Thus a challenging test of improvement should be conducted using well characterized outcomes, such as those used in cardiovascular research, where anticipated gains are relatively small.
  • To address point 3.2, in addition to summarizing accuracy improvement by meaningful thresholds that would emerge as results unfold (10% should be seen as a place-holder), we would estimate accuracy on a continuous scale. Clearly, this “study” must be designed as carefully as any other.
  • Importantly, even when high-quality parameter estimates are available through conventional sources (i.e., when accuracy is high), if the investigator wants to change eligibility criteria relative to those sources, the parameter estimates also may change – unpredictably. Availability of evidence-based inputs would be a substantial leap forward.

Point 1 is the key one, so I will respond just to that. 1. If the proposal is to support calculations other than the standard power-based calculations with the minimum goal of 80% power, then this should be made clear in the proposal and explained in some detail. The inputs discussed are those used in standard calculations, and I see no reason to anticipate that the proposed product would not be used exclusively for standard calculations. I believe that an important consideration is that for any alternative approach, the primary issue for now is trying to make it acceptable to reviewers and feasible for investigators, rather than any refinements that this proposal might implement. I therefore think that this will mainly be useful for supporting standard calculations. As I said, this may have practical value (specifying inputs is a major headache, and choices are often challenged by reviewers), but I believe that the drawbacks of the standard approach will limit the actual scientific value that this could produce. In particular, even if this could completely resolve the problem of inherent uncertainty (which seems doubtful to me), the other two fundamental flaws in the conventional approach are still there. I have no interest in supporting the standard approach.

Commenting is closed.

Translating an Effective Systems-Based Chlamydia Screening Intervention For Australian General Practitioners in New South Wales

Type: 
Proposal Status: 

Rationale: Chlamydia trachomatis remains the most commonly reported bacterial sexually transmitted infection among U.S. females between the ages of 15 and 24 years.1 Most infections are asymptomatic and if untreated can cause major reproductive morbidity.2 Screening for chlamydia has been shown to be cost-effective3 and is recommended for all sexually active female patients 24 years or younger.4 Yet, improvements in screening rates have been small.5 We developed a systems-based intervention to identify and screen at-risk sexually active adolescents which, in a randomized control trial, resulted in dramatic increases in screening rates in a large Northern California Health Maintenance Organization (HMO).6-8

The Chlamydia infection rate in Australia has tripled over the past 10 years, with New South Wales having the second highest rate of infection in Australia.9 As a result, an international systematic literature review was conducted to examine chlamydia prevention programs that have published sustainable and significant increases in routine screening within the primary care setting. The review identified 23 full text articles that met the inclusion criteria, 4 review articles were excluded and the remaining 19 were analysed based on their level of impact and then later based on their relevance within an Australian context. This analysis identified articles by Shafer and Tebb 6-8 as having the potential to be applied in the Hunter New England Local Health District of New South Wales (HNELHD).10 Efforts to translate this intervention into primary care general practices (GP) took place between March 2012- Dec 2012 with Dr. Tebb consulting on the translation methods and materials. Implementation efforts are scheduled to begin June 2012.

The purpose of this proposal is to support this collaborative international partnership to translate, implement and evaluate our chlamydia screening intervention in the HNELHD setting in order to address this major public health epidemic. General practice is considered an ideal setting for population based chlamydia prevention initiatives given its reach to the youth population; with 80-85% of young women, and around 65% of young men visiting a GP annually.11,12 GP is also considered an acceptable and feasible setting in which to provide routine and sustainable chlamydia care to young people at the population level.13

Plan: To translate and evaluate our successful chlamydia screening intervention in 17 GP practices in the HNELHD region. The aims of the practice-based phase are to develop policies, systems and processes to support routine chlamydia screening for all sexually active 15-24 year old patients, and to establish a quality improvement cycle to continuously improve the proportion of at-risk females 15-24 years of age who are screened for chlamydia. Selection criteria for the 17 GP sites were: one major urban area from each of the three clusters with the highest youth population (15-24 years); 50% of the GP practices in each of the three urban areas with the highest number of active young patients; and capacity and willingness to take part in the project. The practice sites are as follows: 9 from Greater Taree, 7 from Cessnock and 1 from Muswellbrook. The approach will modify the toolkit and strategy used in the original HMO intervention so it is applicable to this new setting. It will also develop step-by-step information to support project translation and replication efforts. We will evaluate pre-post differences in the proportion screened, adjusting for repeated measures and GP region.

Criteria and Metrics for Success: A pre-post-test clinical audit intervention conducted with general practices (GP) across three primary care divisions within the HNELHD in 2008 suggested that providing chlamydia care to over 70% of young people who frequent the GP setting is achievable, as measured by GP self-report; however, the follow-up needs assessment found that less than half of the GPs surveyed continued to routinely offer chlamydia screening to their 15-25 year old sexually active patients.14 It has been shown that 50-70% of young people aged 15-24 years need to be tested and treated for chlamydia to see reductions in incidence at the population level.15 We will use this criterion as the primary metric of success. Statistical analyses will compare pre-post changes in the proportion of adolescents screened for CT across each of the GP practices that implement the intervention. A process evaluation will be conducted to evaluate barriers and facilitators to identifying and screening asymptomatic, sexually active adolescents and young adults for chlamydia.
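As one concrete possibility for the pre-post comparison described above, the sketch below fits a population-averaged logistic model with an exchangeable working correlation to account for clustering of patients within practices, adjusting for region. The data layout, variable names, and the use of Python's statsmodels are assumptions for illustration, not a prescription for the final analysis.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical patient-level file: one row per eligible patient visit, with
# screened (0/1), period ("pre"/"post"), region, and practice_id columns.
visits = pd.read_csv("chlamydia_audit.csv")

# GEE logistic regression: pre-post effect adjusted for region, with an
# exchangeable working correlation for patients clustered within practices.
model = smf.gee(
    "screened ~ C(period, Treatment(reference='pre')) + C(region)",
    groups="practice_id",
    data=visits,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```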

Cost and Budget Justification: The estimated cost for this proposal is $97,800 to support the PI (Dr. Tebb) at 30% effort; Co-Investigator (Dr. Shafer) at 10% effort; Statistician (Dr. Neuhaus) at 5% effort; 2 part-time local data collectors at $30K; 1 round-trip travel for the PI at $3,000; and minimal supplies at $1,000.

Collaborators: Hunter New England Health (HNEH) provides care for approximately 840,000 people and covers a geographical area of over 130,000 square kilometres (including Hunter Region, the New England Region and the Lower Mid North Coast local government areas of Gloucester, Greater Taree City and Great Lakes). The Chlamydia Prevention Quality Improvement project is a collaborative project with the Hunter Rural Division of General Practice

 

References:

  1. Centers for Disease Control and Prevention. Sexually Transmitted Disease Surveillance, 2009. Atlanta, GA: Centers for Disease Control and Prevention. Division of STD Prevention National Center for HIV, STD and TB Prevention; 2010.
  2. Chesson HW, Pinkerton SD. Sexually transmitted diseases and the increased risk for HIV transmission: implications for cost-effectiveness analyses of sexually transmitted disease prevention interventions. J Acquir Immune Defic Syndr. 2000;24(1):48-56.
  3. Hu D, Hook EW III, Goldie SJ. Screening for Chlamydia trachomatis in women 15 to 29 years of age: a cost-effectiveness analysis. Ann Intern Med. 2004;141(7):501-513.
  4. US Preventive Services Task Force. Screening for chlamydial infection: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2007;147(2):128-134.
  5. National Committee for Quality Assurance. The State of Health Care Quality: 2010. Washington, DC: National Committee for Quality Assurance; 2010.
  6. Shafer MA, Tebb KP, Pantell RH; et al. Effect of a clinical practice improvement intervention on chlamydial screening among adolescent girls. JAMA. 2002;288(22):2846-2852.
  7. Tebb K. P., Pantell R. H., Wibbelsman C. J., Neuhaus J. M., Tipton A. C., Pecson S. C., Pai-Dhungat M., Ko T. H. & Shafer A. B. (2005) Screening sexually active adolescents for chlamydia trachomatis: What about the boys?, American Journal of Public Health, 95 (10): 1806-10.
  8. Tebb K, Wibblesman C, Ko T, Neuhaus J, Shafer MA (2011) Translating and Sustaining a Chlamydial Screening Intervention 4 Years Later. Arch Intern Med, 171(19):1767-1768.
  9. Centre for Epidemiology and Research. 2009 Report on Adult Health from the New South Wales Population Health Survey. Sydney: NSW Department of Health, 2010.
  10. Guy R.J., Ali H., Liu B., Poznanski S., Ward J., Donovan B., Kaldor J. & Hocking J. (2011) Efficacy of interventions to increase the uptake of chlamydia testing in primary care: a systematic review, BMC Infectious Diseases, 11.
  11. Centre for Epidemiology and Research. 2009 Report on Adult Health from the New South Wales Population Health Survey. Sydney: NSW Department of Health, 2010.
  12. Kong, F. Y. S., Guy, R. J., Hocking, J. S., Merritt, T., Pirotta, M. Heal, C., Bergeri, I., Donovan, B. and Hellard, M. E. (2011) Medical Journal of Australia; 194(5): 249–252
  13. Donovan, B., Bodsworth, N. J., Rohrsheim, R., McNulty, A., and Tapsall, J. W. (2001) Characteristics of homosexually-active men with gonorrhoea during an epidemic in Sydney, Australia. International Journal of STD and AIDS, 12:437-443
  14. Chlamydia care in general practice.  Draft report. Hunter New England Population Health. 2010
  15. Frileux, S., Sastre, M.T.M, Mullet, E. & Sorum, P.C. (2004) The impact of the preventive medical message on intention to change behaviour, Patient Education & Counselling, 52:79-88

Commenting is closed.

Video Series to Support Translational Research and Development

Type: 
Proposal Status: 

The Challenging Path from Bench to Bedside
Although basic and clinical scientists have long collaborated, translational research challenges investigators to move beyond the traditional training of both laboratory scientists and clinicians. The delivery of effective clinical solutions involves the integration of science, technology, intellectual property, market analysis, product development, clinical, regulatory and reimbursement strategy, and marketing. These are clearly very different disciplines and functions, practiced by professionals with very different backgrounds and experiences. Nevertheless, early investigators benefit tremendously from an appreciation of how these various factors affect the likelihood that an innovation will successfully lead to clinical implementation. In the T1 Catalyst Program at UCSF CTSI, we work closely with investigators to identify and support key innovations that are likely candidates for translation. Common concerns and questions about the many challenges in translating basic research toward clinical practice are routinely discussed.

A Video-Based Initiative to Trigger Early Engagement and Collaboration
We propose the production and targeted distribution of a series of brief (3-5 minute) professional videos to broaden the dissemination of this information and highlight UCSF and external resources available to help investigators tackle these challenges. The videos will illustrate a number of case studies through engaging narratives of the experiences and challenges faced by investigators, administrators, and business professionals at UCSF and private organizations. The impact of each case study will be further bolstered by highlighting the stories of the ultimate beneficiaries of successful translational research - patients.

This pilot project will focus on five key topics: needs assessment, intellectual property, strategic partnerships, getting to first-in-human clinical trials, and regulatory strategy. These videos will not provide in-depth analyses of each subject, but will instead serve as a catalyst for viewers to assess their own needs, discover available resources, seek further information or assistance, and share their thoughts and ideas. Within UCSF, we will also highlight the expertise at the Office of Technology Management (OTM), the Office of Innovation, Technology and Alliances (ITA), the Institute of Quantitative Biosciences (QB3), and others to provide a more complete view of the resources available to investigators. Our objectives are: (i) to inform interested UCSF investigators of the key challenges to translational research; (ii) to persuade them that UCSF (and its partners) can support them in being successful; and (iii) to inform external organizations that UCSF is a valuable partner for translational research and development.

Leveraging Storytelling, Consumer-Generated Content and Social Networking
To maximize reach and impact, the videos will tell a number of stories that weave together the many facets of translational research. These will not be instructional videos. They may include dramatized re-enactments and character development that convey the many, often opposing, objectives of stakeholders. The videos will be hosted and promoted through traditional and non-traditional distribution channels that target not only public and private investigators, but also their research assistants and supporting administrators. This wider audience will be encouraged to participate in the narrative by providing feedback or sharing videos of their own experiences and concerns. These discussions will be further promoted to increase awareness and a sense of community and shared interest.

We will also coordinate promotion with other UCSF and UC communication channels (e.g., UCTV), as well as several external business and R&D groups. Distribution efforts may also include targeted email marketing, other CTSA network communication channels, and outreach to relevant bloggers. The outreach will be accompanied by viewer analytics to improve and customize future projects and campaigns.

Assessing Awareness and Long-Term Engagement
In the short term, we will measure the success of our project through viewer surveys. These surveys will be carefully designed to elicit information about viewers' understanding of why and how translational research occurs, their interest in learning more, and ongoing concerns. In the long term, we will use the results from our outreach-based analytics, and measures of increased interest from UCSF and non-UCSF investigators and business professionals, to assess the impact of this project (e.g., measured through the number and quality of applications to the T1 Catalyst Award Program).

An Experienced and Multi-Disciplinary Team
The project will be a collaborative effort between CTSI's Communications team and Early Translational Research (ETR) program. John Daigre and Katja Reuter (CTSI Communications) are experienced multimedia and social media professionals who will serve as advisors throughout the planning, production, and editing process, and take a leading role in the promotion phase. ITA, OTM, QB3 and other key UCSF-based institutions will also be major contributors to the project. An external video production company will be hired to help develop story lines, and produce and edit the videos. Ruben Rathnasingham (ETR), who has helped develop a number of university-based innovations into clinical products, will manage the project.

Total Budget: $40,000 

This pilot project will take 6 months to complete with a budget of $40,000 for planning, production, editing, and promotion. 

Comments

This sounds very interesting. Would love to talk with you about creating an educational video discussing the process of translating research into policies/law and the challenges. One question: would this resource be available to non-UCSF researchers or is the goal to promote UCSF and its partners?

Hi Dennis, Would love to talk more. I think the inclusion of policy and legislation in the discussion is crucial. Also the concept of aligning investigator incentives with the objectives of translation would be valuable. This resource would definitely be available to non-UCSF researchers and business professionals. One of our primary goals is to promote strategic partnerships. We have found that industry would love to be a part of the conversation.

With such a wide array of potential topics to cover, how will video topics be selected and prioritized during the pilot period?

Katie, this is a great point and our approach will be to target the most common challenges for early investigators first: needs finding, IP, fundraising, partnerships and regulatory strategy. This will enable us to assess the impact of this communication strategy and follow up with other topics, if successful.

Excellent idea - even though I'm often skeptical about the cost/benefit of video production. I think the focus on early translational research (challenges, gaps, etc.) is one that's particularly well positioned to benefit from short videos. I think you should think a little more carefully about distribution - think about when and where your customers might be most likely to stumble across these. And, for assessing impact, think about how you might tie these videos to the number and quality of proposals you see in T1 Catalyst or other (e.g. QB3) submissions. Speaking of QB3, certainly coordinate with them.

Great points Mini, and as John points out, these are key issues that we recognize to be critical to the success of the project. We will start with the videos themselves - focusing on a compelling narrative as opposed to a series of interviews. They will be interactive and may offer collaborative challenges. A lot of effort is planned for the pre-production/design phase of the project. It will not only include the CTSI Communications team and an experienced media production company, but the folks at QB3, ITA, OTM and others. Everyone is excited to contribute. Capturing the attention of research investigators has always been a challenge. We hope to address it by identifying traditional and non-traditional distribution channels that not only target investigators, but their students and research assistants, and supporting administrators. To maximize engagement, we will focus our efforts on developing narratives that highlight the importance of a collaborative and multi-disciplinary approach and the immense resources available at UCSF to help investigators succeed.

I agree that this is a good idea, and look forward to supporting this project from a communications perspective. This would be a valuable tool in efforts to convey our message(s) to researchers, industry, and graduate students (all among the target audience of this project), and others. Ruben and I agree that there are challenges regarding getting the videos in front of the right people, and that we will need to get creative regarding how it’s produced and distributed. The latter will include collaborating with other groups on campus, the CTSA communicators network, etc. I also expect Social Media, along with resources such as UCTV, to play a key role in making these available to a wider audience, and a customized marketing plan will be developed with these and other resources in mind. I’m confident that the timeline and budget are realistic, and that this effort would integrate with CTSI’s overall communications efforts, as well as aims to increase awareness and use of services. A key element of this project will include measuring success (including linking to number of T1 applicants, as noted in an earlier comment, and other metrics).


Translating Research into Law and Policy

Type: 
Proposal Status: 

Rationale: Currently, there is no direct pathway for health sciences research to move from scholarly publications into policy and law. Although some evidence is cherry-picked by advocates and policymakers, legislation often reflects rhetoric rather than evidence. While some translational researchers study the effects of laws or write white papers summarizing the state of the evidence, we know of none who take the approach of working with investigators to leverage their research and write model legislation. Here we propose the development of a replicable process that will translate research directly into evidence-based model policies, regulations, and laws that improve health and/or health outcomes. Model policies, regulations, and legislation are proposed pieces of law drafted by researchers working with legal scholars to incorporate the latest scientific evidence with the goal of improving health and/or health outcomes. The drafted language would then be made freely and widely available to legislators, advocates, and others involved in the enactment of law, regulations, and policy.  

Plan:

1. Kick-off Meeting: March 30, 2012 to introduce the project and workshop additional ideas for a pilot project.

2. Pilot projects: This grant will fund one to two demonstration/pilot projects that bring together UC Hastings faculty, UCSF researchers, and students from both institutions to write evidence-based model legislation, regulations, and/or policies. This group will analyze both the current scientific evidence and the legal frameworks related to a health issue being investigated by a UCSF researcher; the group will then write model legislation that revises or changes current laws or that proposes new laws in order to improve health and health outcomes. One pilot project we are pursuing is on the toxic effects of sugar on health, based upon Dr. Lustig's research (PDF attached). We are considering a second pilot project. Current ideas include mandatory reporting laws in the emergency medicine setting, the effects of hospital and ED closures, the efficacy of orthopedic procedures, restrictions on activities of individuals with epilepsy, and implementation issues related to the Affordable Care Act.

 

Criteria and Metrics for Success: The broad goal of this demonstration project is a proof of concept that then allows for replication and dissemination of this new approach for translating research directly into legislation:

Products:

1. Creation of model legislation and/or regulations resulting from biweekly meetings of the UCSF investigators, UC Hastings faculty, and students from both institutions, and the dissemination of the work product. Timeline: Spring 2012-spring 2013.

Documentation & Dissemination:

2. Document the year-long process via a detailed white paper.

3. Host 4 calls/virtual meetings to discuss and publicize the model to organizations and institutions that could replicate it. This would occur via the CTSA Health Policy network. Timeline: Winter 2012-summer 2013.

Sustainability:

4. Create a student group focused on the legislative process and the translation of research into the policy, regulatory, and legal realm.  Recruit 15-25 interested students to participate. Timeline: To start in spring 2012.

5. Identify internal and external funding options to sustain and expand the project. Apply for at least 2 grants during the pilot period.

Total Budget: $38,506

Salary support for UCSF faculty, Co-PIs and research assistants involved in developing the pilot projects, the Program Analyst to coordinate activities of this project, consultants, project supplies and communication.

Collaborators:

1. UC Hastings: Jamie King, Jennifer Dunn, David Faigman, Sarah Hooper

2. UCSF: Dan Dohan (co-PI), Rob Lustig, Dennis Hsieh (co-PI), Richard Barnes, Jessaca Machado

Comments

This is a challenging and worthwhile endeavor. Investigators benefit from understanding how current policies come about and how their research can be supported (or not) by future policy and law. I think presenting specific case studies would be a tremendous benefit. Would love to help brainstorm. - Ruben.

Hi Ruben - thanks so much for offering to brainstorm! Is there a good time this week to reach you? Thanks. Dennis

Dennis - sounds great - I'll connect with you via email.

Dennis, et al: This looks very good and presents some interesting ideas and challenges. I agree with the comments that taking it through the legislative process is very long and messy. However, it might be beneficial and educational to try to engage a legislative aide in the process so that learners can get some information about how best to take it to the next steps. For the medical students, I think it would be important to possibly work with one pathway, most likely advocacy, and try to engage them at an early stage in their medical education. It is important for the first topic to serve as a pilot and remain somewhat focussed so that it can be successful. Good work on putting this together. Bill Shore, MD

Thank you for your comment. I think for the course part we would like to have a legislative aide or someone familiar with the process engage with learners for several reasons. I think it would be good for learners to understand the legislative process. Additionally, I think it would make sense for the learners to have a good understanding of the process as I think it would help inform how the model legislation would ultimately be drafted. Agreed that the first pilot should be focused in order for it to be successful. Also, I think your email about speaking with Dr. Bodenheimer or Dr. Grumbach about a second pilot around primary care medical homes is a good one and one I will work with the team to explore. Dennis

You have come such a long way with this grant and it looks great! I think it is a very novel idea, has the potential for great impact, and it appears you have a well thought out plan on how to proceed and how you will measure your success with the program. My one suggestion - and I'm not sure if the proposal is the place for it - would be to give at least one concrete example of an issue you will tackle (e.g., food disparities and policy, obesity and policy, smoking cessation, etc.). I think this could help focus people on what exactly you mean by turning research into policy. Again - it looks phenomenal! Best of Luck!

Definitely - just added in specifics on one of our pilots - sugar in foods and its effects on health. We are looking for a second pilot to run concurrently so if you have any suggestions on those who might be interested, please have them post here / contact me! Thanks. Dennis

There seems to be growing interest in trying to centralize published research around similar topics and, further, aggregate expert/researcher commentary on these results, in order to avoid "cherry-picking" and "rhetoric based" decisions (very well put!). Another proposal on this site, "Encouraging research collaboration and integrity by optimizing post-publication peer review for researcher participation", is tackling this same problem, but in regard to influencing research publishing and funding organizations, as opposed to law and policy makers. It seems like there could be a lot of synergy between our two groups.

Definitely - I love your idea. I think it provides a public forum for discussion on articles and figures that is missing. Similar to news articles and other forums, having the ability to discuss papers outside of journal club/conferences, but across institutions and across the field, is I think a great way to critically evaluate them. Additionally, I think having these comments available would be helpful for readers who may not have as much experience (i.e. students/junior investigators) or who do not have the benefit of discussing the paper in a conference/at journal club. I think one challenge is figuring out how to organize the comment threads so that the comments themselves can be easy to browse through and can serve as a resource as well. Would love to chat more if you have time the next few weeks. I could see us working together to figure out how to gather and evaluate the evidence and then use it for a variety of purposes. -Dennis

Great seeing your detailed project deliverables! A minor comment is to see a tentative timeline attached to each of the metrics. It's an ambitious set of tasks - I might tier them into primary and secondary objectives. In addition to the paid staffer, what sort of individuals, skills, and involvement do you imagine among the core team members? Thanks so much for filling such a critical need/gap in translating research from UCSF into meaningful policy.

Timeline makes sense. Will update later. Like the idea of tiering objectives as well - will give it some thought. Will get back to you in terms of individuals/skills/involvement.

I think this is an excellent idea. Having worked with activists/advocates in several situations, I've often been concerned with their lack of attention to the evidence related to the area in which they were interested. At the same time, their cause may be greatly strengthened by having an evidence base for their advocacy. My suggestion would be to add some advocates/activists to the mix in the meetings you plan to have. Involving journalists would also be good.

That's a great suggestion - we will definitely look for some advocates/activists depending on the pilot projects we proceed with, but I think it also makes sense to come up with a process/infrastructure for how to involve them. Do you think journalists would be useful in the development phase of model policies/legislation, or more for helping pass the product, or both? Also, do you know of any journalists who might be interested?

This project looks like it's off to a good start. I'm curious about how the process works in choosing which health topics to legislate/research. Is a topic chosen by panel? Super majority? Moreover, given the length it takes to draft, submit, have a piece of legislation pass committee, house, senate, etc., how will this program be sustained? Particularly with a rotating set of students (both at UCSF and UCH), is one year sufficient to get this running? Since a metric of success is hosting a year-end symposium on a piece of "model legislation," who weighs in on the final piece?

I think in terms of the pilot project topics, we have selected one already - sugar - and are looking for a second topic that will be decided upon after discussion with those who are interested as well as the entire team. The goal is to have model language for legislation drafted and feedback incorporated. It would then be made available to advocates, legislators, and those who may be interested in taking the legislation and running with it. So I think the time frame when it comes to actually passing legislation is not as big of a concern for the program itself. In terms of longer term, I think Dr. Dohan's comment gets at this - the determination would be made by those at a center and be sustained through a center housed under the UCSF/UC Hastings Consortium. I think students will be an integral part but the center and key faculty members with rotating student leadership will help provide continuity. I think in terms of the final piece of model legislation, it would be nice to have a panel of individuals - including researchers, policymakers, advocates, and legislators weigh in and help provide comments in addition to all those attending the symposium.

Hi Dennis, As an economics PhD interested in helping to fight obesity and chronic disease burden in America, I applaud this proposal and sincerely root for its success. I would strongly suggest that your center includes at least 1 or 2 people with rigorous economics training to provide pragmatic advice about appropriate policy, especially in terms of taxation and quantity restrictions. For example, Lustig has proposed a tax on added sugar. It is important that tax proposals like this be analyzed critically by someone who understands both intended and unintended consequences of new policy. I am happy to recommend contacts in your search for economics-trained faculty/researchers. Best, Christina J.

Awesome idea. I recall how much [wasted] time we spent in law school legislative advocacy clinics trying to find and analyze alternative policy frameworks in use in other states. Working from the bottom up and generating policy ideas based on facts/reality/modern research is clearly superior. The big thing I would say is that you should resist any efforts to push this all the way to actual legislation. You should stay firmly on the side of putting model legislation together, something which Hastings and UCSF can probably put some serious talent together for, and leave the actual lobbying to activist groups that have expertise in that area (the skill sets for developing model legislation and for lobbying for legislative change are largely non-overlapping). The legislative process is slow and messy, and it's almost certainly not worthwhile for you to get directly involved. Further, if your success metric ever ends up becoming a question of how frequently you successfully foment positive policy change, you're probably doomed to look unsuccessful. The legislative process is too messy to result in many clear wins. I imagine a good first step would be to reach out to existing activist groups and figure out what the model legislation you deliver should look like in order to meet their needs. You'll probably want to develop a package of documents for each model piece that you work up. I'm not sure exactly what should go in it, but probably (1) some actual model language, (2) a legal appendix explaining why relevant regulatory authority exists (e.g. for simple sugar content in foodstuffs), (3) a similar appendix citing and summarizing relevant research papers used as the factual basis for the model, and (4) a list of alternatives or trade offs that could be considered by different groups pushing for similar change in different states. I think it's an excellent idea and I'll post again if I have more comments.

Thanks for your comment! Our focus is really on taking and analyzing the scientific data/evidence and then working on coming up with actual language for model legislation and policies. We will not be involved in the actual passage of legislation and will partner with advocates, community groups, and legislative staff on that piece. I completely agree with you that writing model legislation and passing model legislation require two separate skill sets. As we know, right now much legislation is written and passed by the same group, and this is why the evidence gets left out. Thus, I think the proposal here tries to get at the division of labor you outlined and allows expertise to be applied properly. I think your suggestion in terms of reaching out to legislators, advocates, and other groups who actually pass legislation to figure out what format model legislation should be in is a great idea. We definitely want to make our end-product as user friendly and as easily adoptable as possible. I think creating not just model language but also the citations to both the legal and scientific framework makes sense - it is work we will be doing already to draft and will make it easier to justify the model legislation as drafted. I also like the idea of alternatives/tradeoffs, as these pros/cons will be discussed by our core group. This way the legislation can be easily modified by an audience looking to adopt different versions of the draft language depending on the specific political considerations in their state. Thank you so much for the thoughtful comments! -Dennis

Dennis, Believe this addresses a real need in the public dialogue - opportunity for data driven, thoughtful discourse, with intent to serve a greater good for the broader collective. One strength of the peer review publishing process is an opportunity for critical feedback on the submitted manuscript. Anticipating that your proposed topic - e.g. toxicity of sugar - will be politically charged, both by the proponents in your initiative as well as those affected by your drafted legislation, wondering how you will incorporate divergent views into your process of topic selection, data review and synthesis, and drafting of the legislation? The capacity to invite such views, in turn the creative holding of divergent views, would both strengthen dialogue and add legitimacy to the process and the output. Certainly starting with the evidence, the data or the "facts", is right. Yet annual research conferences are predicated on the fact that researchers looking at the same data, facts and evidence will have energetic discourse leading to divergent conclusions. We know in the political arena this will also be true. Interested in your thoughts of how to harness and direct this to the benefit of your worthy proposal. Jeff Critchfield

I think one underlying principle that will help guide the discussions is the goal of improving health and health outcomes. I understand that there are many interests in areas such as sugar which may lead to various viewpoints, especially when money is involved, but the overall mission/goal for us is to improve health and health outcomes which will provide a lens for the discussion. However, I recognize that even within this area, there will be divergent viewpoints. I agree that incorporating and discussing these views will both strengthen the dialogue as well as add legitimacy to the process and output. To this, we think there will be various forums to encourage and allow for this discussion: The course will allow the students and faculty to have a rich discussion. During the seminars we are hosting, we also hope to devote a portion to discussion. Additionally, the symposium will provide another forum for discussion. The calls we intend to have will also generate discussion. Finally, the website will allow for comments and discussion on the work as well.

It is heartening to see this proposal attract interest and develop. We wanted to share some further thoughts that we've been working on but that didn't neatly fit in to the original posted proposal. We'll put them into separate notes to make it easier to track and comment. As the UCSF/UC Hastings Consortium continues to develop, we're really interested in opportunities to bring faculty and learners from the two campuses together. In addition to the educational/training activities listed above, we see potential for this proposal to initiate a sustainable curriculum and practicum related to legislative translation. A first step is for funding from this grant to support UC Hastings and UCSF faculty to develop and co-teach a Law and Policy course. This will bring learners from both campuses together to work on projects. Two collaborators on this proposal -- Richard Barnes (based at the UCSF Center for Tobacco Control Research and Education) and Jaime King (an associate director of the Consortium) -- are well suited to take this on.

Comment # 2: An earlier commentator asked about how projects would be selected, which is an issue we have discussed at some length. For this proposal to birth a sustainable Center, establishing a pipeline of research issues that are "translatable" is crucial. Rob Lustig's work is an obvious candidate and we have good momentum around that. Other UCSF research with public health implications -- such as Kirsten Bibbins-Domingo's work on salt intake or Laura Schmidt's work with the San Francisco Department of Public Health on harm-reduction related to alcohol consumption -- are other candidates in this realm. A second realm to develop could be model legislation related to the Affordable Care Act (ACA). What does health service and health policy research tell us about legislation to implement various parts of the ACA? This is not necessarily as important a need at the federal level as it might be for states. It might even be something that could lead to helping smaller entities such as health departments or hospital districts in developing administrative policies that are consistent with ACA. The National Association of Community Health Centers-- http://www.nachc.com/ -- provides this kind of support for their constituency, but obviously they bring a particular agenda. Andy Bindman is actively involved in ACA work in DC and also has great insights into the situation in California via his work with the California Medicaid Research Institute (CaMRI), so this is another possible avenue to develop. In addition to public health and ACA, a third translational realm might be related to innovation and discovery -- Law and Policy at the T1 translational level. There was a great symposium at UC Hastings last week highlighting innovative Industry/Academic Collaborations (you can see it at http://bit.ly/yZXT90 -- there is a live stream available of the event). UCSF's Erik Lium was one of the speakers. The conference highlighted how industry and academe are finding ways to partner but also touched on a number of challenges such partnerships face. We believe this offers a third potential "pipeline" for translatable research as it might engage UCSF researchers who do T1 work, those who incubate T1 translation via industry partnerships, or those who have expertise related to research integrity.

I teach policymaking and political advocacy at both UCSF (the CP 151.03 referenced in the Plan) and UC Hastings. I take a practical approach to real policymaking in the real world. I have been involved in discussions at Hastings for the UCSF/UC Hastings Consortium on Law, Science, and Health Policy to propose a policy advocacy clinic to engage faculty and students from both schools to work on actually turning health science into health policy by working with policymakers. What we propose here is one of the first steps that such a policy advocacy clinic would do, which is to research the current state of the law affecting the health issue, if any, and then draft the policy instrument (statute or regulation) to fix the problem identified by health scientists. It makes a lot of sense to start with what this proposal embodies as a kind of pilot project for the broader advocacy clinic. Prof. Richard L. Barnes

VERY cool, Dennis! What a novel and important idea, and the feedback you've gotten is also very valuable. In terms of your looking for pilot topic #2, there are a ton of ideas within the realm of emergency care that I can think of already that are in need of legislation that are backed up by clinical/health policy research. One example would be (although I'm not sure if this is what you mean) the literature on how alcohol impairs the mind and its relationship with driving. One potential policy implication/legislation could be, for example, mandatory reporting in the ED. Currently, as healthcare providers, there are some federal and state reporting mandates for certain conditions/diagnoses - e.g., child abuse, elder abuse, domestic violence, public-health related diseases such as HIV, Hepatitis B, syphilis, etc. We also are required to report patients to the DMV (Department of Motor Vehicles) if we believe they have a condition that impairs their driving ability. As far as I know, there is NOT a mandatory reporting requirement for drunk driving. Of course, if a policeman/highway patrol officer sees someone drunk and pulls them over, they can be cited by the authorities for a misdemeanor (or felony if on federal property). But thousands of times a day across the country, people arrive in the ED because the ambulance took them in after an accident, and they are intoxicated. And in those cases (where they do not arrive by police), we just treat them and they leave. Of course, we WANT to treat their injuries. But perhaps there is also a case for mandatory reporting. It would require understanding the basis on which current reporting exists and applying that same rationale to reporting drunk driving in a healthcare setting. Another example in a different arena could be the application of the knowledge that ED closures occur more often in vulnerable communities. What is the proper legislative, if any, response? It's tricky since there are a lot of market forces in this area, and that is how our healthcare system is currently structured. Is there a role for regulation in this area? New York and New Jersey already have commissions (the Berger Commission and Reinhardt Commissions, respectively) that are set up to evaluate proposed closures, but is this a place where legislation would be helpful or desirable? The legislation behind this could go beyond simple regulation but perhaps by managing incentives of insurers and of course very politicized and complicated. But just another thought. - Renee Hsia

Thanks so much for your comments - I think both areas would make for very interesting experiences for students and faculty working on model legislation. 1. Mandatory reporting makes sense from a public policy standpoint. I could definitely see a counterargument being made in terms of deterring people from using the ED - but this is exactly the type of issue that makes it interesting, especially when students are involved. I'm not super familiar with the literature on impairment - but I'm sure that the evidence is not very controversial. However, the implications of reporting and what happens to those seeking care would make for a very interesting piece of policy. 2. In terms of ED closures, I know you have done some work and this is an area that is near and dear to my heart - I think that given there are stringent requirements for hospital openings, a parallel process should be available for closures. The Berger Commission was pretty arbitrary, if I remember correctly, from having worked in NY fighting a few of those closures, but I think working from the evidence of the detriment of the closure of EDs and hospitals to what the process should look like would result in a much more rational process. ----------------------------------------------------------- Dr. Hsia's response: Yes, there's definitely lots of counterarguments for each of them and it's a field that isn't clear cut, but definitely should be more informed by the evidence that we've amassed over the years as researchers and clinicians. I'd be happy to work on either of them - you're the expert and I can only provide what I know in terms of the data - but is certainly a very novel and unique way to start addressing some of these gaps. In a sense, it's a new kind of implementation science! - Renee Hsia

Dennis, this is very creative! Thanks for sharing. It is also a creative way of developing pilot studies. I read a few of the others and the comments. We fund research pilot studies within Columbia University School of Nursing as well and I like this idea for gaining feedback. In terms of the proposal itself; I was thinking that it might be fun once this gets going (assuming it is funded) to build in an evaluation piece for the adoption of model laws. I agree with the comment that advised not to get involved in implementation of the models themselves; but it would be interesting to see if any of them "take off" so to speak, get picked up and adopted and whether they accomplish what was intended. This, of course, goes far beyond what you are proposing; but thoughts for your next grant! Good luck on this. - Bobbie Berkowitz

This is an enormously important endeavor. As a health services researcher who spent a year working at the White House Office of Management & Budget, I certainly observed the underuse and misuse of evidence in the policy-making process. One other potential area of focus for your proposal might be regulatory writing and review. For example, legislation sets the broad principles for a given healthcare function. The way in which regulations are written by federal and state agencies, however, often determine the effectiveness of the law. In my work at OMB, I observed that the very best scientific evidence was not always used or brought to bear in the regulatory writing and review process. Just a thought and suggestion. Good luck. Excellent project.

Thank you for the comment. Regulations are a key part of the process, especially when fleshing out legislation, and will definitely be included in our work.

Hi Dennis! I think this is a fantastic idea! I have wanted to get something similar to this off the ground for a long time. Having used health services research to demonstrate significant flaws in our informed consent laws as part of my dissertation and shepherding that project all the way through the Washington state legislature, I am very eager to continue translating scientific and health services research into meaningful policies. I am really thrilled to be a part of this and to incorporate it into a class for both UCSF and UC Hastings learners. Wonderful idea!

I think this is a wonderful idea. In my experience, researchers are very pleased when their work leads to policy change. You are creating a framework that will make that much more likely to happen. Mitch Katz

Hi Dennis, Proposal looks awesome. The only thought I have is re: levels of evidence (e.g. cohort study, RCT, meta-analysis) and whether any assessment of this should be incorporated into the process. Obviously, a meta-analysis showing that A is associated with B is a much more robust finding than a secondary analysis of a single data set that shows the same association. That being said, many policy & legislation relevant research questions are difficult to study using "gold standard" study designs (e.g. you can't randomize subjects to housing vs. homelessness.) This is probably too subtle for the brief proposal you're submitting, but something that crossed my mind.

Dan and Dennis, I am interested in participating in the demonstration project. The number of patients who may potentially benefit from hip/knee replacement and spine surgery is increasing with the median age of the US population. Health policy colleagues have provided evidence that innovations in our field have increased procedure costs without producing notable improvements in efficacy. My provider colleagues need your help to better understand what data are needed as payers, and perhaps legislation, begin to limit patients' access to the potential benefits of these procedures.

Definitely - will reach out to you offline to see if we can set up a time to discuss! Thank you for your interest - I think this area is very important, especially with the balancing of costs and benefit reaching the forefront of the healthcare debate.

Dennis, this looks great! I like that you have included education and dissemination into your project so that we can all become more aware of how to translate research into legislation. For the kick-off event, it might be helpful to have more details regarding the logistics of getting involved for faculty and students. For example, will people sign up for this as a course or as an interest group? Are there different levels of involvement? What will the time commitment look like? I look forward to hearing about your progress!

Thank you for your comments. I think we are beginning to publicize the kick-off event this week and will hopefully reach out over email, through this forum, physical posters, as well as word of mouth. Longer term, I think creating a student interest group around these issues is a great idea. We are planning on having a course but I think a translational interest group would be a good way to engage students and could allow for different levels of involvement from a student perspective.

Dennis and Dan, very nice. Possible topic would be related to mandatory reporting of child abuse. Most states require this but lack specific criteria of known injuries that are well correlated with abuse (such as bruises in neonates, retinal hemorrhages, etc.). The suggestion would be that existing laws become more specific as to those injuries/findings which require reporting. I also like Renee's suggestion about mandatory reporting of drunk drivers. The concern there is that such a law (as might also be true of my suggestion) might discourage injured drivers from seeking help, or prevent parents from bringing children in for care. Alan Gelb

I agree with the many positive sentiments about this proposal - innovative, generative and cross-disciplinary, and with the potential for substantial impact. Epilepsy is another area worth considering - issues related to determination of driving privileges, job discrimination, provision of care in the school setting, etc.

Thank you for the suggestion - would love to chat with you about the possibilities as the process moves forward. Will be in touch offline! -Dennis

This is a great proposal to bring together UCSF and UC-Hastings faculty, staff and students which could spawn additional collaborations. I really like the team/shared leadership and implementation. In Item #2, "participate in 3-4 classes", I hope you find a way to give a seminar or two in one or more of the CTSI Implementation Science courses in the Training in Clinical Research Program--such as EPI 249--Translating Evidence into Policy or the Intro Course. One area I would like to be made a little more explicit is how you envision this program will be self-sustaining after the proof-of-concept funding is completed? Would it be funded through a Center with donor/foundation type support, are there industries or other non-profits that might support this type of work? In terms of evaluation, you list mostly process measures (which seem appropriate at this stage). But what are some of the key outcome measures that might be used to support the impact of your group's work?

Thank you for the comments. We would be delighted to participate in the CTSI Implementation Science courses in the Training in Clinical Research Program, such as Epi 249. With the development of a course over the grant period, I think a certain level of activity could be self-sustaining. However, to expand the project on a larger scale, we would seek institutional support from UCSF and UC Hastings; looking for other types of grants and philanthropic funding would be the immediate next steps. RWJ may be one source of funding. Furthermore, in the long term, we hope for funding for translation to be part of the funding interested researchers receive, and this could also help fund the process. Overall, however, I think this is something we would work on during the course of the year. Any thoughts/suggestions you have in this area would be appreciated. I think having a successful symposium that draws legislators, legislative staff, advocates, as well as members of the UCSF and UC Hastings community is one key outcome measure to support the impact of our work. Based on the discussions about the legislative process, in the short term I would like to stay away from judging success by whether legislation is passed. In the longer term, it would be valid to see what kind of impact the model language that is produced is having and improve the process as appropriate.

This is a great idea and certainly would be a step in the right direction in getting policy in line with evidence based research. Perhaps this is thinking too far ahead, but what are your thoughts on disseminating information on policy changes to the stakeholders affected? For example, if policy changes impacted physician practices, how would you ensure that enough effort was made to educate physicians before imposing penalties on them? Is there existing legislation on due diligence for affected stakeholders?

Thank you for the comments. I am not sure about due diligence legislation for affected stakeholders. This is an issue we will have to investigate. I think your thought is a good one. I'm not sure if any penalties would be imposed - I think it would depend on what the specific issue is. For example, a piece of legislation could limit the sugar content of soft drinks. This would not necessarily be targeted at clinicians. However, I am curious how physicians and other health care professionals become aware of requirements. One mechanism I can think of is when these requirements are tied to reimbursement. But will look more into this - definitely something we need to consider when drafting the language.

Dennis, Sarah and team: this is very creative and exciting. I've enjoyed reading the prior comments and wish you much luck. As a front line clinician and advocate, there are many instances I can recall where state legislation took time to catch up with current medical understanding (seat belts, vaccine requirements to attend school, etc.). It may be challenging to develop a set of criteria by which to evaluate candidate issues to work on as this process matures.

Thank you for the comment - if you have any suggestions as to what potentially should be included as part of the criteria for selecting candidate issues, would love to hear your thoughts. I think some have been raised including the scientific quality of the evidence and whether a better mechanism for implementing change exists. Would love to chat more to hear your ideas!

Dennis, after having talked with you about your ideas for this program from early on, it is exciting to see it carefully worked out in detail. The plan has great potential both to accomplish important research-based reform and to provide a pedagogically rich experience for students across a number of disciplines. I have one thought, a question really. In a number of places, the proposal refers to creating evidence-based laws and policy, but then the concrete focus is on legislation. I think I probably urged you to have a narrow legislative focus, but as I think about it now, it seems that not all issues are necessarily susceptible of legislative solutions and that teams ought to have room to think of strategic options. In some cases, advocating for non-legislative reforms in regulations or policy may be more effective or efficient, so it might be worthwhile to be open to such options.

hi Dennis, I think the idea is a really great concept. I believe others touched on the parts that give me the greatest concern. The first is that not all research is at the right stage (has the right info) to be made into law yet. I think there usually needs to be a large RCT or several large observational studies before there's enough power to support possible legislation. That being said, I agree that there are several things that are passed that don't have that much support because they are pet projects. I guess my point is that I would include a section that somehow screens your projects; otherwise you subject yourself to the same arbitrariness that you are opposed to. Two, my understanding of model laws is that you write one law and other states, cities, govt bodies, etc. use that as a template/base to write their laws. From my understanding, that works for some topics, but it is not always the optimal strategy for dissemination of health policies. So here I'm wondering if you aren't limiting yourself by specifying model laws? Third, you might want to think more broadly about who you want to include, especially from an ethics standpoint, so you should consider incorporating a bioethicist in your project. Hope this helps and good luck!

Thank you for the comments. I think what issues have scientifically rigorous evidence is part of the guidelines that the Center will have to consider when it is choosing future projects. Both you and Dr. Moskowitz raise this point, which I completely agree with. Additionally, both you and Professor Silk raise the point that model legislation/regulations are not necessarily the best way to achieve change or disseminate health policies. I think that this is true and depends on the specific health policy/issue involved. I think this is something again that will be discussed both in choosing the second pilot/future projects as well as through the process of the pilots. Model legislation/regulations can be very powerful. However, this does not mean they should be used in isolation. I think it is definitely worthwhile for the faculty learners to discuss other approaches. However, whether these other approaches are left to advocates and others or whether these approaches are developed in more detail will depend on the specific project and the relationship with the core goal of drafting model legislation/regulations. I think your last point in terms of including an ethicist in the process is a good one and we will definitely make sure that one is on the advisory/review board if not helping/consulting in a longitudinal manner.

I love the concept! While I understand that you will not be "promoting" or attempting to pass the drafted model legislation, I would still add as part of the proposal training, especially for doctors, something on how to testify effectively with respect to the evidence used to create the legislation. Doctors can be very powerful opinion swayers on health legislation, and doctors who can easily quote and make research evidence understandable to both the lay public and legislators are few and far between!

Thank you for the comment. I think this would be a good piece of the course that we are thinking about - it is an important component of the legislative process and falls under what Professor Silk, Eva Chang, and others have discussed in terms of other methods that may be an effective way to effectuate change. Of course, when testifying will be effective is dependent on the specific issue, but I think this is something we will include in our course.

This sounds like a great idea. I would encourage you to look not only at legislation but also at administrative rules/interpretations. Much of the "law" in many health-related fields, like diet and nutrition, is generated at the level of administrative interpretations and administrative rule-making rather than at the level of direct legislation. Because of deference awarded to agencies because of their so-called "expertise" in interpreting and implementing laws within their respective fields, it is important, when those rules deviate from received scientific wisdom, for there to be clear statements from the scientific community in opposition to the agency's "interpretation" for legal challenges to those "interpretations" to succeed.


Data Management for Research Community

Type: 
Proposal Status: 

Rationale:

CTSI's Consultation Services Data Management Unit (DMU) is just one of the successful research resources available on campus to help UCSF faculty and staff with their research. Along with many other data management services, the DMU also provides consultation in data processing to help transform research data into a statistically analyzable format. In our efforts to provide consultation to the UCSF research community, we have realized that some trainees and faculty do not have the programming skills or resources to perform the data processing tasks that are necessary to keep their research moving forward. Complicated data merges and transformations require specific programming expertise to ensure that they are performed correctly. Data management programmers who can devote more time and effort to small projects are needed. The DMU's ability to provide these services is very limited. As a result, we have identified a gap in the services that the Data Management Unit is currently able to provide.

All research requires some level of data management programming.  This expertise exists far and wide across campus. Regrettably, there is not an easy way to identify these experts.  Our goal is to form a community of data managers on campus.  The purpose of this community would be to 1.) identify specific data management programming expertise across campus, 2.) pool resources by sharing job descriptions, computer programs, workflow processes and standard operating procedures, and 3.) identify data managers who are able to provide services on a short-term, variable-effort basis.     

Data management programming is an integral step in preparing research results. Expanding access to data management services that include programming will greatly impact the efficiency, accuracy, and productivity of research at UCSF.  By leaving complicated programming tasks to the experts, researchers will be able to spend more time writing grants and developing manuscripts.
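To make the kind of pitfall described above concrete, the following is a minimal, purely hypothetical sketch (not part of the proposed services; the dataset, column names, and values are invented) of a routine task: merging visit-level data onto a subject roster in Python/pandas with the safeguards an experienced data management programmer would include.

    # Hypothetical illustration: a many-to-one merge with explicit safeguards
    # against the silent errors common in ad hoc research data merges.
    import pandas as pd

    subjects = pd.DataFrame({
        "subject_id": [101, 102, 103],
        "enrollment_site": ["Site A", "Site B", "Site C"],
    })
    visits = pd.DataFrame({
        "subject_id": [101, 101, 102, 104],   # 104 has no enrollment record
        "visit_date": ["2012-01-05", "2012-02-09", "2012-01-20", "2012-02-01"],
        "outcome_score": [12, 15, 9, 11],
    })

    merged = visits.merge(
        subjects,
        on="subject_id",
        how="left",
        validate="many_to_one",   # raises an error if subject_id is duplicated in the roster
        indicator=True,           # adds a _merge column flagging unmatched visits
    )

    orphans = merged[merged["_merge"] == "left_only"]
    if not orphans.empty:
        print(f"{len(orphans)} visit record(s) lack an enrolled subject:")
        print(orphans[["subject_id", "visit_date"]])

Without checks of this kind, even a small merge can silently duplicate or drop records, which is the type of error the rationale above argues is best left to experienced data management programmers.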

Plan

Most data managers on campus are in the Analyst, Programmer Analyst or Biostatistician job title category. With the help of UCSF's Operational Excellence Human Resources group, we would first survey all UCSF staff within these job classifications to find data management and biostatistical programmers who would fit well into our Data Management for Research Community. The two main inclusion criteria would be 1) data management/programming expertise and 2) working in a research environment. Those who fit the criteria will be invited to join the Data Management for Research Community. Membership would provide access to data management resources and a subscription to a quarterly Data Management for Research Community newsletter. In return, we would require that each person fill out a brief Data Management questionnaire to help identify their data management expertise, as well as a follow-up questionnaire to assess satisfaction with becoming a member. Specifically, the Data Management questionnaire would ask about the programmer's area of research and UCSF department; level of experience in working with large national databases, clinical trials, observational studies, cross-sectional studies, etc.; and level of expertise in various data management and statistical programs, including SQL, Visual Basic, FoxPro, MS Access, SAS, Stata, SPSS, Excel, etc. We would also ask questions regarding their preference for different levels of engagement, such as email, conference calls and/or in-person meetings. Through this process we hope to create a network of managers/programmers who are able to work with the DMU, an inventory of data management resources, and opportunities for providing data management services on a short-term, variable-effort basis to those who need it.
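As a purely illustrative sketch (the field names, labels, and choice lists below are assumptions, not the actual instrument), the questionnaire content described above might be organized along these lines before being programmed into REDCap:

    # Hypothetical sketch of the Data Management questionnaire; every field name,
    # label, and choice list is illustrative rather than the final instrument.
    QUESTIONNAIRE_FIELDS = [
        {"name": "department", "type": "text",
         "label": "UCSF department and primary area of research"},
        {"name": "study_experience", "type": "checkbox",
         "label": "Study types you have supported",
         "choices": ["Large national databases", "Clinical trials",
                     "Observational studies", "Cross-sectional studies"]},
        {"name": "tool_expertise", "type": "checkbox",
         "label": "Data management / statistical tools used",
         "choices": ["SQL", "Visual Basic", "FoxPro", "MS Access",
                     "SAS", "Stata", "SPSS", "Excel"]},
        {"name": "engagement_preference", "type": "radio",
         "label": "Preferred level of engagement",
         "choices": ["Email", "Conference calls", "In-person meetings"]},
    ]

A structure like this could later be translated into a REDCap instrument and would also standardize how expertise is catalogued across members.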

Total Budget: $25,343  

A 20% Analyst I for 12 months will be needed to survey staff, set up the questionnaire in REDCap, and compile/store the inventory of resources. A 5% Programmer Analyst for 12 months will provide content for the newsletter and design the questionnaires.

Criteria and Metrics of Success:  Number of data management programmers identified, overall response rate, rate of membership enrollment, inventory of data management resources, number of data managers who are able to provide services and satisfaction with membership.

Collaborators:

Janet Coffman, MA, MPP, PhD and CELDAC,
Laura Bettencourt and the San Francisco Coordinating Center

Joe Hesse, Department of Neurology

 

Comments

This project would help to address a need at UCSF that existing CTSI initiatives are not fully meeting. For example, the Comparative Effectiveness Large Dataset Analysis Core is a good resource for identifying large, secondary data sets but has only limited resources to help faculty and trainees manage and analyze these data sets. CTSI has made great strides in helping faculty and trainees identify faculty with common interests but could do more to help them find staff with expertise in data management. Creating a network of data managers/programmers could help to link faculty and trainees to staff who could work with them on their projects. Such a network might also help UCSF retain talented data managers/programmers by helping them to find opportunities to continue to work at UCSF if they lose their funding. Retention is especially important for data/managers programmers with expertise in complex types of data sets, such as health insurance claims.

I would love to see this expanded to include a monthly in person meeting for research data management similar to the IT monthly meetings for computer support coordinators. I would think an analyst with 5%-10% effort could do a really good job of establishing this.

Great idea. This will be especially useful for young/junior investigators to find help! Go for it! J

This is a very innovative, practical idea that will have a significant impact on improving the ease and accuracy of clinical research. Developing this core group of experts will allow for an easily accessible resource for junior and senior faculty managing a diverse range of datasets. These data experts are highly trained, but sometimes difficult to connect with through our current UCSF research structure. Professional development and support of this uniquely qualified group is critical!

This is a great idea, for both investigators and data management staff.

I recommend that you consider including staff biostatisticians in your survey. It would be very helpful to identify those individuals as well. A separate community could be set up or integrated into the one you propose for data managers. Overall, a great idea.

Consultation Services (CS) is very supportive of this idea. Data managers are some of the unsung heroes on research teams, and providing a way for people who do this work to get together, share tools, insights and projects, and improve both research data management quality on campus as well as morale and job satisfaction is really concordant with CS (and CTSI!) goals. There are some challenges here: we've tried in the past to foster an online community for stat analysts and been unsuccessful, but this may have been due to the collaboration tool we used. Your methods and goals are going to be different. The challenge, as always, will be to incentivize people to spend their time networking and participating and contributing to the community.

As we are implementing APeX, our department is faced with a decision whether to try to incorporate collection of patient-reported outcomes into MyChart and perhaps incur extra charges from Epic down the line to be able to access these data for quality improvement activities, or to try to implement a REDCap solution. The problem with REDCap is that we cannot limit access to only a single patient, so we cannot use this in the clinic. There is the potential that a patient will see another patient's data, which is a HIPAA violation. If we could solve this problem, it would be a great alternative to having to depend on Epic to capture the data needed for condition-specific registries.

Great idea. One of your challenges might be where you get the 20% time (which Analyst I) and the programmer/analyst -- although for the latter, you might actually invite someone from the data management community and have them be supported at 5%. I think starting with a newsletter, and yes, potentially a recurring monthly meet-up as Joe suggests, would be great. But you might also think about extending this more traditional approach with at least a listserv, or perhaps a LinkedIn group. Also coordinate with the Virtual Home team just to explore if there's any way the expertise mining system and any other groupware tools they're thinking of might be tested for this community.

We have an Analyst I at the WHCRC who would be available. I agree a listserv or wiki may be sufficient. I worry that monthly meetings may be too time consuming and a burden. Effective electronic communication may be enough. I am hopeful that the survey will be able to gauge preferences for various levels of engagement, such as a newsletter, meetings, etc.

I also like this idea. I had a few questions regarding implementation and the ongoing care and feeding of the community for consideration. 1. Is the main goal for the DMU to have a pool of resources to recommend for projects that come through CS? Or would investigators access the membership directly? 2. I'm not sure how busy and how variable the workloads are for these folks. Will it be difficult to maintain and access this pool of folks due to their varying availability? Or will there need to be some way for members to easily update when they are free and when they are not? (This, by the way, will be a challenge unto itself.) I.e., if there is a need for a certain expertise, and there's no way to tell if that member has time available, will it be frustrating for the DMU and investigator? 3. If successful, it sounds like there would be a need for ongoing solicitation of members. Would this become part of the DMU's purview? What happens if someone wants to join, but they aren't quite in the sweet spot of the group? Is membership exclusive? 4. Is the goal to have a pool of experts that the DMU can call upon for projects, or is there another goal to create a "virtual community" for these folks to access each other's expertise, resources, etc.? I agree with Mark on past endeavors that ended up not working out that well due to various reasons. If the hope is to have members of this community engage with each other online, sharing resources, networking, collaborating, etc. -- it's likely that there would be more work involved by someone who can "own" and "drive" engagement. Not that this cannot be done -- it can. It just takes someone to own it. That person would do stuff like what Joe mentioned -- organize the in-person meetings among other things. In the end, I'm not trying to be a damper. On the contrary, I think the idea is great; expectations just need to be set and clear for members, the DMU and investigators.

My current thinking is that the DMU would direct investigators to those resources. This could be handouts on common programming needs, an hour-long consultation with a Stata programmer, or teaming faculty up with a programmer who is able to do the data management work for them. The DMU is already doing this to a limited degree. We simply hope to expand the pool of resources by engaging the entire community of programmers across campus. We hope to gauge through the survey the availability for these short-term, variable-effort projects. I think the challenge right now is that we really have no idea to what degree programmers are available. Within our own research groups we know individuals who can devote time to other PIs' projects, but our hope is to widen this pool, especially because we are now providing greater access through DMU CS. I imagine this would work similarly to how the biostatistics consultations work, by asking around until you find someone who is available and who fits the needs of the project. The membership will by no means be "exclusive," but we do hope our membership is directed to those who would benefit most -- ideally research analysts, programmers and/or statisticians who are working within a group or independently. The direction of communication could go both ways. It's possible that PIs reach out to us because they have a gap in funding and don't want to lose their programmer entirely because they are unable to provide them with 100% support. Maybe the "Community" is as simple as a listserv. We'll have to work with Virtual Home to explore our options. The person who coordinates this community will reach out to members, but active engagement by all members is not a requirement. I agree one person is needed to coordinate communication. It is to be determined whether this takes on other forms, such as periodic meetings. Thank you for your comments/concerns. I hope I've provided more clarity :)

I think this is an important idea. I would want to ensure that a plan for ongoing sustainability of this workgroup could be built into the plan, perhaps in the form of an incentive for participation in the group. Also, there will be an ongoing need to maintain the resource inventory and keep this community connected over time.

There would be two main incentives for membership: 1) being informed and 2) access to DM resources (programs, SOPs, macros, people). We should definitely speak to Virtual Home about options for maintaining the inventory of resources -- perhaps something like a wiki.

A considerable amount of the CELDAC initiation phase was devoted to a similar survey of databases on the UCSF campus. Was CELDAC able to gauge the willingness of principal investigators to share these resources with the UCSF community?

Nice idea! It would seem that this group should take full advantage of the scheduling, feedback, and billing from consultation services. However, it will be important to also think about reimbursement. The model might be different here since these folks are often on others' grants and buying time away isn't trivial. Regardless, easy to resolve.

I think it would be great to use consultation services' tools. Most data managers are staff and this does present a problem with reimbursement. I wonder if the recharge mechanism in place could reimburse the grants that are supporting these data managers? Having said that, I'll leave the funding hurdles to the experts.

Commenting is closed.

Improving CRS performance through application of Lean/6 sigma

Type: 
Topics: 
Proposal Status: 


1.     Rationale – the science of operations.  

The "clinical research enterprise" faces two simultaneous and daunting challenges: it has to contribute to filling the innovation gap in healthcare while at the same time re-engineering itself to become more efficient and integrated across all scientific and social disciplines involved along the translation continuum. It is very much like trying to modify an aircraft while flying in it. Lean/6 sigma is one methodology among many that can help deliver on this dual challenge. Indeed, this "science of operations" is grounded in robust data analysis to relentlessly reduce waste and variability. It also relies heavily on teamwork and open lines of communication between functions, which is a conditioning factor for innovation. Since 2000 it has been increasingly used in hospital settings, with sometimes spectacular results on the cost and quality of care delivered (see 1 for references on improvements in mortality rates and waiting times). It is now being tested in a translational setting2. Liu (2006) describes an application of Six Sigma methods to achieve a 70% reduction in cycle time for entry of case record forms in a phase III clinical trial, while maintaining a statistically acceptable error rate requirement3. Lean techniques have also been applied to streamline the drug discovery process in the preclinical phase of research.

The goal of this project is to demonstrate the potential of lean/6 sigma to a wide UCSF audience by applying it to the Clinical Research Services (CRS) program to effectively manage its overall performance and to improve quality and costs in the areas where it is most needed. CRS is ideally located at the intersection of clinical care and research. It therefore provides an ideal laboratory to investigate how lean/6 sigma can help the clinical research enterprise transform from an "End-to-End" perspective.

2.     Plan & Scope.

The project will encompass the overall CRS program and will be executed by following a typical lean/6 sigma structure, or DMAIC (Define, Measure, Analyze, Improve, Control), over a 12-month period. The project scope includes an initial End-to-End process map and gap analysis of all services provided by CRS to highlight priority areas based on feasibility and ROI. The End-to-End process map will start with the initial PI request for CRS services and will end with their successful delivery. From the initial assessment, detailed data collection plans, analyses, and solution proposals will be drafted and presented to the appropriate stakeholders to obtain endorsement for the chosen solutions. Below are the strategies and expected deliverables per DMAIC phase that will be followed to conduct the initial and final assessments across CRS (a brief sketch of the kind of baseline metric calculation the Measure phase would produce follows the list):

  • Define Phase (2 Months):
    • Intent: Define the problem statement and associated quantitative success criteria (e.g., cost, time, speed) within the defined scope via an End-to-End process map.
    • Deliverables:  Project Charter, Voice of the Customer (VOC), High-level Process Maps (i.e. SIPOC), Communication Plan, and Project Plan.
  • Measure Phase (2 Months):
    • Intent:  Measure current performance on the previously determined success criteria across CRS.
    • Deliverables:  Data Collection Plan, Measurement System Analysis (MSA) – As needed
  • Analyze Phase (2 Months):
    • Intent: Identify potential root causes by conducting correlation and root cause analyses.
    • Deliverables:  Stratification, Ishikawa Diagrams (As Needed), Correlation Analysis (As Needed)
  • Improve Phase (4 Months):
    • Intent:  Present proposed solutions for identified gaps to the appropriate stakeholders.  Based on the endorsed options and available resources, implement chosen solutions. 
    • Deliverables:  Gap Analysis, Proposed solutions and prioritization criteria (e.g., Time, Resources)
  • Control Phase (2 Months):
    • Intent:  Ensure the sustainment and effectiveness of the solutions implemented
    • Deliverables: Control Plan, Service Level Agreements (SLA), Training (as needed), Change in Policies/Operation Procedures (as needed). 
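As referenced above, here is a minimal sketch of the kind of baseline metric the Measure phase could produce: a request-to-service-start cycle time with simple individuals-chart control limits. The CSV extract and its column names are hypothetical; the 2.66 multiplier is the standard constant for an individuals (I-MR) control chart.

    # Minimal sketch of a Measure-phase baseline: cycle time from PI request to
    # service start, with individuals-chart control limits. File/columns are hypothetical.
    import csv
    from datetime import date
    from statistics import mean

    with open("crs_requests.csv") as f:            # hypothetical data extract
        rows = list(csv.DictReader(f))

    cycle_days = [
        (date.fromisoformat(r["service_start"]) - date.fromisoformat(r["request_date"])).days
        for r in rows
    ]

    x_bar = mean(cycle_days)
    # Average moving range between consecutive observations
    mr_bar = mean(abs(a - b) for a, b in zip(cycle_days[1:], cycle_days[:-1]))
    # Standard I-chart limits: mean +/- 2.66 * average moving range
    ucl = x_bar + 2.66 * mr_bar
    lcl = max(0.0, x_bar - 2.66 * mr_bar)

    print(f"mean cycle time: {x_bar:.1f} days (UCL {ucl:.1f}, LCL {lcl:.1f})")
    print(sum(1 for d in cycle_days if d > ucl or d < lcl), "observations outside control limits")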

3.     Criteria and metrics for success

Anticipated success for this project is to generate improvements that allow CRS to decrease its total program costs by 5% (~$750k). The rationale for this goal is to at least offset the 5% cut to the 2012 budget. Another success goal could be to align the improvement efforts with the long-term strategy of CRS, which is to increase its revenues. No revenue target can reasonably be set at this moment, but one could be set by the end of the Measure phase.

4.    Total Budget: $73,980

The anticipated cost of this project is $73,980 to support a Project Lead at ~58% of her/his effort.

5.     Collaborators

From PET: Adel Elsayed, and from CRS: Eunice Stephens (ops manager), Wendy Staub (sample processing lab manager), Cewin Chao (Bionutrition Director), Kathy Mulligan (Metabolics director), Danusia Filipowski (Clinical Coordinator Core Dir), Nariman Nasser (Participant Recruitment Core Dir), Deanna Sheeley (Research Nursing Core Dir).

APPENDIX

 

1.     Examples of lean/6 sigma results in hospital settings:

  • St. Joseph’s Hospital changed the ER patient flow, allowing the hospital to treat at least 10,000 more patients annually. – Tampa Bay Business Journal 
  • The Pittsburgh Regional Healthcare Initiative cut the amount of reported central line-associated bloodstream infections by more than 50%. The rate per 1,000 line days (the measure hospitals use) plummeted from 4.2 to 1.9. – ASQ.org (American Society for Quality) 
  • H. Lee Moffitt Cancer Center and Research Institute is expected to increase procedural volume by 12%, which will add nearly $8 million annually in incremental margin. – Tampa Bay Business Journal 
  • A large metropolitan hospital system reduced inpatient transfers by 75%, yielding $2 million in annual cost savings. – iSixSigma.com
  • A top-five hospital system used Lean Six Sigma to redesign its transplant unit and as a result improved patient satisfaction by 50% within three months; the cost of care was reduced by 15%. – Quality Digest
  • St. Vincent Indianapolis Hospital made a 78% cut in the number of steps emergency department nurses take to get supplies. – USA Today
  • A major hospital in the United States was able to reduce inpatient mortality rates by 47.8%. – iSixSigma.com
  • North Mississippi Medical Center reduced the number of prescription errors in discharge documents by 50%. – ASQ.org (American Society for Quality)
  • The Mayo Clinic’s Rochester Transplant Center reduced the cycle time from when a new patient made initial contact to setting up an appointment from 45 days to 3 days. – iSixSigma.com

2.     The Applicability of Lean and Six Sigma Techniques to Clinical and Translational Research. Sharon A. Schweikhart, Ph.D., and Allard E. Dembe, Sc.D. The Ohio State University, College of Public Health, Center for Clinical and Translational Science; Center for Health Outcomes, Policy, and Evaluation Studies.
3.     Lui EW. Clinical research: the Six Sigma way. J Assn Lab Automat. 2006;11(1):42–49.

 

Comments

Overall, I believe the systematic and analytical approaches used in Lean/6 Sigma can definitely be useful in improving CRS overall performance. I just had questions about your metrics for success. First, I wasn't quite clear what #1 is referring to. However, I think your case will be stronger if, instead of 5%, you gave numbers (e.g., actual time saved/year (in #1) and dollars saved/year (in #2)). In addition, giving the PIs and CRS employees a survey before and after the analysis would also be a stronger gauge of staff satisfaction. Good luck!!

Thanks for the comments and great points! Metric # 1 refers to the duration between the initial request to CRS by a PI to implement a protocol, and the start of the implementation of such protocol. In essence this is the duration to prepare for the implementation of the protocol. However a more accurate definition is needed, for both metrics. Precise metrics definition and a robust data collection plan are indeed a central part of early phases of a 6 sigma project ("Define" and "Measure"). At the end of these phases we may actually conclude that other performance metrics should be chosen. The "%" is a starting point, but I fully agree: we will need to translate any achieved improvement into time and $ saved. Thanks for the contribution.

This is a great idea, but I think the scope of work planned for one year is far too broad unless you plan to add a significant number of new staff to this project. Even if you add new staff, the staff of the CRS will need to be heavily involved to educate the lean consultants on CRS procedures and regulations. I think it would be more feasible to begin with a defined segment of CRS activities. I believe such a "pilot study" will give you a much better idea of the scope of the full project, and provide a lot of information to help you and the CRS staff move forward efficiently.

The timing of this proposed project is perfect for CRS as we implement a new business model and a functionally-based organizational structure. A first exploration of the possibility for improvement has been launched with the Sample Processing Core Services lab. The lab manager, Wendy Staub, and her lab staff have participated in a workshop to identify areas for improved efficiency and elimination of waste. The group was actively engaged in the process and this first step has identified areas for improvement that can be incorporated into the Balanced Scorecard and can help with the development of more impactful metrics for the core service. CRS core managers have each had a first meeting and discussion about the challenges they face in delivering core services. These early meetings identified key areas for improvement. The more focused work with their teams that would be supported by this proposal will engage the managers and staff in identifying specific strategies for implementing and measuring improvements. As administrative procedures are put in place to support the new financial model, this process will allow us to look at the continuum of services and activities from the perspective of our investigators and sponsors, incorporate the feedback from the staff doing the work, and ensure that our new processes are as efficient as possible.

We have already done a pilot of improved efficiency, though not with these proven strategies. This CRS service is the perfect place to start and this is the absolute ideal time to do this - a time when we have fresh and talented new leadership in the CRS and in PET and when we need to markedly improve our efficiency as the demand for our services continues to ramp up. The "costs" of not doing this include (1) a significant delay in reaching CRS's entire cost recovery strategy and ultimately the fiscal viability and sustainability of the CTSI itself, and (2) a less than positive message to the leadership and staff of both CRS and PET.

Lots of potential here. I will echo Deborah's scope point (i.e., focus the proposal on a particular process or set of processes), and I agree with Dezba that the metrics don't feel useful as a '%' reduction; instead, dollars saved or FTEs reduced feel more useful in assessing the return on the investment. Since you describe a 70% reduction in time to process cases in the paper you reference, a 5% decrease feels a bit like a letdown - you just need to align the scope of your project with the expected reduction in the short term, and clarify how that is but one step in a broader effort that might then have greater benefits down the line.

This is an interesting approach to increase productivity and reduce cost. A positive outcome and subsequent wider adoption would strengthen the clinical research infrastructure.

This is an area of great interest to me as my degree is in IEOR which predates and is the origin for many of the Six Sigma concepts. I think it is a great idea and I would gladly lend my support to it. I would also suggest contacting the UC Berkeley IEOR department to see if they would have any interest in getting involved. I suspect they would be, as UCB in general has been very supportive of our activities here. I think that this collaboration would augment the approach quite well as both sides bring expertise unique to each entity.

Jeff - thanks for your interest and I'd be happy to check with UCB IEOR! Industrial Engineering best practices could indeed be very helpful in many areas, such as patient and protocol implementation scheduling for instance. I look forward to the exchange!

Deborah and Mini - thanks for your comments. Deborah - I updated the proposal to reflect the fact that the D, M and A phases should be done holistically, but that the scope of the I and C phases will be focused on select areas to be mindful of the limited resources we have. Also - by design, a lean/6 sigma "Improve" phase does include a piloting step prior to full-scale implementation, so your concern about pilots would be addressed. Mini - the main rationale for selecting a % reduction target is to offset the 2012 budget reduction. I modified the proposal to make it clearer that the goal is to reduce program costs by 5% to offset the 5% 2012 budget reduction. This translates into ~$750k of savings. We could set more aggressive targets but we may then need more resources. I removed the "time" target for now. Another success measure would be to target a certain amount of revenue increase, which in the end is the ultimate strategy of CRS. We could indeed consider that CRS is transforming into a self-sustaining business... With the implementation of the recharge model we should get a better idea of the revenue increase needed to balance the budget!

Commenting is closed.

The Immunological Microbiome Project: Collaborative Microbial database for the prediction of disease and immunological research

Type: 
Proposal Status: 

Rationale – The mammalian gastrointestinal (GI) tract contains hundreds of distinct species of commensal microbes under normal conditions, collectively referred to as the 'microbiota'. Recently, it has been appreciated that the microbiota exists in a mutualistic relationship with the host immune system. Mutations affecting the production of certain cytokines or resulting in the lack of immune cell subsets are known to increase the overall bacterial burden; however, how underlying immunological conditions influence the colonization of specific species of commensal microbes is poorly understood. PhyloChip, a recently developed DNA microarray, is able to rapidly identify over 60,000 microbial taxa in samples of interest. Using this technology, we can ascertain the exact effects of diverse immunological phenotypes on the microbiota of the GI tract. Currently, these chips are used extensively at UCSF by the lab of Susan Lynch, though their use has thus far been limited to particular diseases and related environmental studies. We believe that the development of a microbial database encompassing many different immunological states is necessary to better understand the true relationship between GI microbiota and the host immune system. Our first aim is to dissect the unique features of the 'microbial profile' (fluctuation of specific microbial species) in fecal samples obtained from various mouse models of disease, including virus infection, malignancy, arthritis, colitis, asthma, dermatitis and diabetes, as well as genetically modified mice with impairments in immune factors such as cytokines, chemokines, transcription factors and a variety of immune receptors (a vast number of which are available in the UCSF mouse inventory). Second, we will compile this information and build the Microbial Database as a web-based open resource for the scientific community. Gathering a microbial profile from each disease state or immunological phenotype will allow us to predict the impact of immune perturbations on the specific make-up of the microbiota and the potential impact on disease. This project is also an attempt to investigate the potential of PhyloChip as a novel diagnostic and prophylactic tool.

Plan – We propose the following Steps 1-3 to accomplish this pilot study.

Step 1 - Standardize the sample collection. The use of mice provides some stability in terms of mouse-to-mouse variation. However, the commensal microbiota is very sensitive to age, sex, diet and the environment in each animal facility. To minimize these confounding effects, we will employ a co-housing method. The mice of interest (designated as tester mice, which are genetically modified or disease-model mice) will be housed in the same cage with control mice (wild-type B6 mice raised in our animal facility). In the case of infectious disease models, tester mice will be orally administered a fecal homogenate collected from control mice before infection. This method will equilibrate the microbiota in the gastrointestinal tract and allow us to dissect the pure influence of a specific gene or disease on the colonization of microbial species.

Step 2 - Analyze by PhyloChip. As a pilot experiment, we will collect fecal samples from 20-30 of the most commonly used genetically modified mice maintained in our lab or available in the UCSF mouse inventory (Rag1-/-, Rag2Il2rg-/-, μMT, b2m-/-, MHCII-/-, Tcra-/-, Tcrg-/-, Lta-/-, Fas-/-, Cd28-/-, Dap10-/-, Dap12-/-, Myd88-/-, Trif-/-, Ifnar1-/-, Il2-/-, Il4-/-, Il6-/-, Il10-/-, Il12rb-/-, Il15-/-, Il17ra-/-, Foxp3-DTR) and major experimental disease models studied in our lab or neighboring labs (carcinogen-induced tumor, experimental colitis, type I diabetes, Listeria infection, cytomegalovirus infection, influenza infection, OVA-induced asthma). All samples will be processed by PhyloChip, and we will perform comparative and phylogenetic analyses to reveal the effect of immunity on microbial taxa at all levels.

Step 3 - Build pilot database. All accumulated data will be computationally integrated. We aim to build a data browser with which users can interactively explore the microbiota profile associated with a particular gene or disease.
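A minimal sketch of how the pilot database and a browser-style query might be organized is shown below. The table and column names (and the 'B6 control' label) are assumptions for illustration only; the production schema and the handling of real PhyloChip output will differ.

    # Minimal sketch of a pilot schema and a browser-style query.
    # Table/column names and the 'B6 control' label are illustrative assumptions.
    import sqlite3

    conn = sqlite3.connect("microbiome_pilot.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS sample (
        sample_id INTEGER PRIMARY KEY,
        mouse_strain TEXT,          -- e.g. 'Il10-/-' or 'B6 control'
        disease_model TEXT          -- e.g. 'experimental colitis', or NULL
    );
    CREATE TABLE IF NOT EXISTS abundance (
        sample_id INTEGER REFERENCES sample(sample_id),
        taxon TEXT,                 -- PhyloChip taxon identifier
        intensity REAL              -- normalized hybridization intensity
    );
    """)

    # Query a data browser page might issue: mean intensity per taxon for a chosen
    # tester strain versus co-housed B6 controls.
    query = """
    SELECT a.taxon,
           AVG(CASE WHEN s.mouse_strain = ? THEN a.intensity END) AS tester_mean,
           AVG(CASE WHEN s.mouse_strain = 'B6 control' THEN a.intensity END) AS control_mean
    FROM abundance a JOIN sample s USING (sample_id)
    GROUP BY a.taxon;
    """
    rows = conn.execute(query, ("Il10-/-",)).fetchall()
    rows.sort(key=lambda r: (r[1] or 0) - (r[2] or 0), reverse=True)
    for taxon, tester_mean, control_mean in rows[:20]:
        print(taxon, tester_mean, control_mean)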

Criteria and Metrics for success – 1) Identification of the unique microbial profile in each sample. 2) Launch of the microbial database. We will share our data with researchers all over the world who are interested in microbiology and immunology, and will accept their feedback and suggestions to improve user-friendliness. Once these criteria are met, we will scale up the size of the database: we will collect samples from another ~50 genetically modified mouse strains and accept samples from other collaborators. Ultimately, our approach can be applied to human diseases.

Estimated cost - We are requesting $40,000 for the pilot study (Steps 1 and 2). Building the public database will require an additional $30,000, and scaling up the database a further $30,000.

Collaborators - Shoko Iwai (Susan Lynch lab, Dept. of Medicine)

Commenting is closed.

Encouraging research collaboration and integrity by optimizing post-publication peer review for researcher participation

Type: 
Proposal Status: 

Rationale

There is an integrity crisis in the life sciences. Since 1990, retraction rates for peer-reviewed medical research have increased by a factor of 4 (Cohol (2007) EMBO 792-3). This crisis highlights how traditional mechanisms of peer review are straining under the ever-expanding volume of research being produced.
Post-publication peer review offers a promising solution: researchers could aggregate their opinions online to create an alternative metric of integrity. A number of systems have been built to allow this kind of peer review, but none have succeeded in engaging the research community.

Plan

Our team has spent close to a year conducting research in labs on the way that papers are read and discussed, and has created an alternative system which generates significantly more traction. The system is based around figure-by-figure summaries of a paper, soliciting comments on individual figures rather than on papers as a whole.
Initial tests with this system have found extremely promising improvements in both reading time and discussion rates over competing platforms. We are seeking funding to refine the system for launch across and beyond UCSF, and to cover the costs of user engagement.
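For illustration only, the sketch below shows the core data relationship the system is built around: comments attach to individual figures rather than to the paper as a whole. The class and field names are hypothetical, not the actual schema of our application.

    # Minimal sketch of the figure-level commenting model; names are illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Comment:
        author: str
        text: str
        upvotes: int = 0
        replies: List["Comment"] = field(default_factory=list)

    @dataclass
    class Figure:
        label: str                  # e.g. "Figure 2B"
        summary: str                # plain-language summary of the figure
        comments: List[Comment] = field(default_factory=list)

    @dataclass
    class Paper:
        doi: str
        title: str
        figures: List[Figure] = field(default_factory=list)

    # A reader comments on one figure, not on the paper as a whole
    paper = Paper("10.0000/example", "Example paper",
                  [Figure("Figure 1", "Knockdown reduces colony formation.")])
    paper.figures[0].comments.append(Comment("reader1", "Is n = 3 sufficient here?"))
    print(len(paper.figures[0].comments), "comment(s) on", paper.figures[0].label)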

Criteria and metrics for success

As a web-based application, we will have access to a plethora of data on user behavior. We are seeking to optimize around the following metrics (a sketch of how they might be computed from usage logs follows the list):
 - In order to establish that we are becoming a useful part of scientists' workflows, we will seek to maximize the number of papers viewed on our system per user per week.
 - In order to establish whether we are serving as a platform for meaningful discussion, we will seek to maximize the number of comments per paper view, and the amount of response (replies and upvotes) per comment.
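The following is a minimal sketch of how these metrics could be computed from application usage logs. The log file format and event names are assumptions; the production application may record events differently.

    # Minimal sketch: compute the three metrics above from a hypothetical
    # JSON-lines event log with fields "user", "type", and "ts".
    import json
    from collections import defaultdict
    from datetime import datetime

    views_per_user_week = defaultdict(int)
    views = comments = responses = 0

    with open("events.jsonl") as f:                 # hypothetical log export
        for line in f:
            e = json.loads(line)
            iso = datetime.fromisoformat(e["ts"]).isocalendar()
            week = (iso[0], iso[1])                 # (ISO year, ISO week)
            if e["type"] == "paper_view":
                views += 1
                views_per_user_week[(e["user"], week)] += 1
            elif e["type"] == "comment":
                comments += 1
            elif e["type"] in ("reply", "upvote"):
                responses += 1

    active_user_weeks = max(len(views_per_user_week), 1)
    print(f"paper views per active user-week: {views / active_user_weeks:.2f}")
    print(f"comments per paper view: {comments / max(views, 1):.3f}")
    print(f"responses (replies + upvotes) per comment: {responses / max(comments, 1):.2f}")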

Approximate cost and very brief justification ($10k-max $100k)

We estimate that launching the project will require a budget of $250k in 2012. The majority of this sum goes to salaries for web development and community management staff. We are seeking $100k to supplement our other sources of funding.

Collaborators

Core team:
Robert Judson- Completing PhD in UCSF BMS program. Focus on complex genetic networks & stem cell biology
David Jay, MBA - Consultant focusing on online community development, web developer.

Core Advisers:
India Hook-Barnard, PhD- Program Officer, National Academy of Sciences
Keith Yamamoto, PhD - Vice Chancellor for Research, UCSF

Comments

Quite interesting. A couple of things to consider - a. If integrity is the issue, then replication based on opening up access to the data underlying figures is key. We're doing work at CTSI (with a few partners) to see if we can catalyze the open data/open science piece, so it would be useful to understand the other 'movements' your idea ties to, such as this one. b. For this kind of a proposal, you'll want to share with us what you've done thus far and any data that shows you have traction already (e.g., a platform that UCSF grad students are already using, or that is being used by X# of journal clubs). Feel free to add attachments, or reach out offline. c. You should explain how what you're doing is different from the PLoS approach to enabling online discussion and annotation around articles. How successful do you think that has been (or any other such effort by Elsevier, Science, etc.), and what are the implications for your project?

Excellent feedback! To respond to your questions: We're getting as active as we can in the open access/open science movements (it's why we're doing this). We've organized a meetup group of open science startups in the Bay Area, and are working with staff from PLoS to plan an Open Access Hackathon in June. We'd love to get further connected to open science advocates at UCSF. Our alpha is currently being used by one lab, though we'll be expanding to more (and beyond UCSF) in the next few months. Our system is different from PLoS's (we tie comments to figures rather than to entire papers or lines of text, and there are a few other differences), and is getting considerably higher traction.

Reposting here in case you don't get notified I responded: I love your idea. I think it provides a public forum for discussion on articles and figures that is missing. Similar to news articles and other forums, having the ability to discuss outside of journal club/conferences, but across institutions and across the field, is I think a great way to critically evaluate papers. Additionally, having these comments available would be helpful for readers who may not have as much experience (i.e., students/junior investigators) or who do not have the benefit of discussing the paper in a conference/at journal club. I think one challenge is figuring out how to organize the comment threads so that the comments themselves are easy to browse and can serve as a resource as well. On a side note, it would be great to figure out how to get free access to the articles if you are not at a university, without having to pay $30-40 per article - I think the limited access also makes the work less accessible and less valuable. Would love to chat more if you have time in the next few weeks. I could see us working together to figure out how to gather and evaluate the evidence and then use it for a variety of purposes. -Dennis

Thanks for your feedback! We're hoping that our system will be able to generate open access summaries of papers (and help convince journals to make their papers OA, but that's a longer term goal.)

Commenting is closed.

Using Mobile Technology and Game Dynamics to Recruit and Retain Research Participants

Type: 
Proposal Status: 

Rationale: Enrolling subjects in longitudinal research studies, ensuring their compliance, and retaining them over a period of time pose a significant challenge to researchers. As an example, we had 62% and 30% retention at years 1 and 3 in a recently concluded study on knee osteoarthritis. To overcome these challenges and to optimize the workflow associated with data acquisition and quantification, we have developed a design that combines multiple computing platforms to create a seamless, user-friendly experience for all participants (researchers and subjects). The platforms [using Ruby on Rails (RoR), PostgreSQL, iOS, and a representational state transfer (RESTful) API] are designed to enable researchers and subjects to complete data acquisition forms and surveys on tablets; the responses are then scored and submitted to our cloud servers. Qualitative data show improved subject satisfaction compared to paper forms, reduced paper use, fewer hours spent by researchers and study coordinators manually scoring surveys and entering data into the database, and better quality control.
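The sketch below illustrates the submission pattern just described: a completed form is scored on the device and posted as JSON to a cloud endpoint. The endpoint URL, payload fields, and scoring rule are placeholders, and the sketch is written in Python for brevity; the actual system uses an iOS client talking to a Ruby on Rails/RESTful backend.

    # Minimal sketch of the tablet-to-cloud pattern; URL, fields, and the toy
    # scoring rule are placeholders, not the production API.
    import requests

    API_URL = "https://example-study.ucsf.edu/api/v1/survey_responses"   # placeholder

    def score_survey(answers):
        """Toy example: rescale the sum of 0-4 items to a 0-100 scale (higher = better)."""
        worst = 4 * len(answers)
        return 100 * (worst - sum(answers)) / worst

    answers = [1, 0, 2, 1, 3]                      # example item responses
    payload = {
        "subject_id": "S-0001",                    # placeholder identifier
        "instrument": "knee_function_survey",      # placeholder instrument name
        "answers": answers,
        "score": score_survey(answers),
    }

    resp = requests.post(API_URL, json=payload, timeout=10)
    resp.raise_for_status()
    print("stored response id:", resp.json().get("id"))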

Based on these initial experiences using mobile technology to enhance subjects' participatory experience and streamline the associated workflow, we propose: (1) the expansion of our current platform to UCSF Imaging clinics (for recruitment and improving the care experience); (2) a one-year research study of obese individuals examining the effect of weight loss on knee tissue health using quantitative imaging, which will enable us to analyze the efficacy of the platform in maintaining compliance and increasing retention; and (3) a pilot of a networked module incorporating social gaming and feedback principles to assess behavior modification in this population, which is of special interest to our lab for our research on knee osteoarthritis prevention.

 

PLAN: Recruitment component: (1) Over the first three months, enhance the current tablet computer applications to include interactive versions of intake forms and standardized questionnaires, as well as modules that simulate the procedure to be performed (MRI, CT, radiography) to address common issues like claustrophobia, restricting movement during scanning, etc. This module will also let patients learn about the unique capabilities of the UCSF Imaging centers (safety regulations, metal suppression sequences, quantitative imaging capabilities, other research studies that offer volunteer opportunities), along with brief bios of the personnel who will be interacting with the patients during their procedures. While the patients learn about UCSF imaging capabilities, they will have the option to explore ongoing research participation opportunities and, if interested, sign up to be contacted by a study coordinator. We have multiple longitudinal trials underway at the moment that will benefit from this mode of recruitment. The possibility of using mHealth frameworks implemented at UCSF will be explored to allow for future adaptability across other UCSF departments. (2) Over months 3-9, administer the tablets to all patients at the UCSF imaging centers (Orthopedic Institute, China Basin Landing and Mission Bay) scheduled for musculoskeletal imaging for clinical or research purposes.

Retention and gaming component: (1) Over months 4-12, twenty-five obese adult subjects will be recruited from the UCSF Weight Management Program. These subjects will undergo MR imaging at baseline, 6 months, and 9 months for quantification of knee tissue health using our standardized metrics. Every month, subjects will complete surveys on their levels of physical activity, nutrition, and incidence of musculoskeletal pain/injuries. Additionally, every 6 weeks the subjects will complete surveys remotely using mobile applications (described below). (2) Over the first three months, develop mobile phone applications for the surveys and a "gamified" feature where research participants can track their scores (levels of physical activity, functional status, pain status, calorie intake) and compare them to those of other participants, promoting increased physical activity through competitive motivation. A virtual character will be designed (e.g., an alien trying to reach its home planet), and the participant will need to complete all study-associated tasks, maintain a certain level of physical activity, and consume the recommended diet to enable the character to succeed at the game. Periodically, information on the benefits of the interventions will be provided, feedback on performance and the resulting improvements will be given, and quizzes on weight management will be incorporated. These approaches have been shown to be effective in the management of diabetes, in improving physical activity in children, and in improving walking capacity in adults, to cite a few examples (1-3).
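The following is a minimal sketch, with made-up point values and thresholds, of how weekly study tasks and self-reported activity/diet data might be converted into points that advance the virtual character and feed a simple leaderboard; the actual scoring rules will be designed with the game-development consultant.

    # Minimal sketch of the gamified scoring; point values and thresholds are made up.
    from dataclasses import dataclass

    @dataclass
    class WeeklyReport:
        surveys_completed: int      # study-associated tasks finished this week
        active_minutes: int         # self-reported physical activity
        diet_adherence: float       # 0.0-1.0 from the nutrition log

    def weekly_points(r: WeeklyReport) -> int:
        points = 10 * r.surveys_completed           # reward study compliance
        points += min(r.active_minutes, 150) // 10  # activity, capped at 150 min/week
        points += int(20 * r.diet_adherence)        # diet component
        return points

    def character_progress(total_points: int, points_to_home: int = 500) -> float:
        """Fraction of the journey the character has completed (capped at 100%)."""
        return min(1.0, total_points / points_to_home)

    # Example leaderboard across participants (IDs and values are fabricated)
    reports = {"P01": WeeklyReport(2, 120, 0.8), "P02": WeeklyReport(1, 60, 0.5)}
    scores = {pid: weekly_points(r) for pid, r in reports.items()}
    for pid, pts in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(pid, pts, f"{character_progress(pts):.0%} of the way home")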

Long-term plan: This pilot will be used to demonstrate the feasibility, efficacy and flexibility of the platform using UCSF Imaging clinics and research. In the next phase, an NIH application will be submitted for a larger, longer research project in which the platform will be implemented in obese individuals undergoing surgical and non-surgical weight loss and followed for four years. Eventually, the platform will be made available to all interested UCSF researchers. Recruitment and clinic modules will be made available to other interested UCSF departments immediately.

Criteria and metrics for success: (1) Percentage of participants enrolled in research studies from the UCSF imaging centers (currently unknown). (2) Clinic patients and research participants will be invited to complete an online survey assessing their satisfaction with their experience, likelihood of staying in the research study, and likelihood of returning to UCSF for future appointments. (3) Retention in the research studies over a one-year period will be compared to retention in all of our other longitudinal studies. (4) The impact of weight loss on the composition of knee articular and meniscal cartilage (using quantitative imaging) and on the intramuscular fat content of the thigh muscles, as well as levels of physical activity and the amount of weight lost, will be assessed at one year.

 

Approximate Costs and Justification: Total Budget = $95,350; 2 tablet computers for development @ $400 ea. = $800; 20 tablet computers for deployment @ $400 ea. = $8,000; 15% effort from the PD/PI (Dr. Kumar) = $8,550; 25% effort from QUIP-C software developer over the year = $25,000; consultant fees for gamification of the applications = $25,000; partial costs for MR imaging of research participants (shared with other grants) = $28,000

 

Collaborators: Deepak Kumar (MQIR), Vivek Swarnakar (QUIP-C), Sharmila Majumdar (MQIR), David Dean (QUIP-C), Richard Souza (MQIR), TBD (game developer)

    

REFERENCES:

1)      Aoki N, Ohta S, Masuda H, Naito T, Sawai T, Nishida K, Okada T, Oishi M, Iwasawa Y, Toyomasu K, Hira K, Fukui T. Edutainment tools for initial education of type-1 diabetes mellitus: initial diabetes education with fun. Stud Health Technol Inform. 2004;107(Pt 2):855-9.     

2)      Southard DR, Southard BH. Promoting physical activity in children with MetaKenkoh. Clin Invest Med. 2006. 29(5):293-7.

3)      Consolvo S, Everitt K, Smith I, Landay JA. Design requirements for technologies that encourage physical activity. In: Grinter R, Rodden T, Aoki P, Cutrell E, Jeffries R, Olson G, editors. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2006 Apr 22-27; Montreal, Quebec, Canada. New York: ACM; 2006. p. 457-66.


Comments

I love the idea of implementing what we know about reward processing to encourage patients to participate in research studies... and mobile technology seems like the perfect place to make this a reality. I don't know who your target population is (seems potentially very diverse), but I think this approach would be extremely effective with adolescents (who I work with in school-based settings). Would be happy to collaborate if this population is of interest!

Hi Kaja, thanks for the comment! The adolescent population happens to be one of our target populations for an upcoming project. We are looking to implement similar concepts based on mobile technology, game dynamics and social media as part of this community-based project to help answer our research questions. I will get in touch with you to discuss potential common interests!

I really like the idea of having a patient-facing tablet or kiosk-based system that can collect self-reported questionnaire data in the waiting room. I could see parts of this model as potentially generalizable to the ED or urgent care setting. I believe that Ralph Gonzales and his collaborators in the Screening and Acute Care Clinic have incorporated a self-report registration system that includes simple questions for UTI screening and treatment at triage, but the ability to link longitudinally for follow-up data collection could be very useful for clinical research.

Thank you for the comment, Anthony. The concept is definitely generalizable to other settings and eventually will be extended to them. Since we have the easiest access to Imaging center clients, we have proposed initial development and deployment with this sample. We already have a lot of our research questionnaires on tablet computers, which score the responses and communicate the data to our cloud servers with immediate access for researchers. I will look into what Ralph has at the Acute Care clinic.

I love this idea. It would be a great fit for collaboration with the "Digital Clinical Research Center" (see other proposal) - it could have gaming modules that encourage longitudinal participation in the dCRC generally, and others that could be easily adapted and plugged into individual projects using dCRC patients and resources. Are you envisioning easy adaptability to other projects with a web-based presence?

Thanks Mark ! I saw your comment on the dCRC as well and I think collaborating would be perfect. Initial prototype development and deployment is to be done on Radiology patients and participants but yes, easy adaptability and providing the tools to the UCSF community is the key to the success of this proposal over long term.

I love the idea of using mobile technology and gamification to help streamline the questionnaire/intake form process, and enhance the overall research participant experience. I think ideas in this proposal are very relevant and have tremendous potential. To echo some of the other comments, I can see this being applied to many other areas of research and medicine. I am also very interested in the longitudinal component. I think it would be great to see what kind of long-term impact mobile technology and gamification can have (in your case: subject retention rates and compliance). Very cool!

Thank you for the comment! Gamification has shown tremendous potential, as can be seen in the success of ventures like HealthTap and Keas. We hope to see similar efficacy in research!

Great idea. I feel there is a need for more transparency to help demystify clinical research to both patients and study participants – and this project definitely proposes to accomplish this. In addition to the interactive environment, I also think it may be helpful to embed other forms of media in the future, such as videos depicting imaging procedures and current research that is being done.

Thanks, Justin! Having videos would be great. UCSF Radiology has a YouTube channel that already has some videos for the lay public to "demystify" some of the commonly used imaging techniques. We do envision using these modules as recruitment tools as well, where patients can opt to browse current volunteer opportunities and learn about ongoing research projects.

Interesting idea, that fits well in San Francisco. I think that there will be generational gaps, where certain age groups will have difficulty with the interaction and may not trust non-paper formats. For the younger population, this technology could be used across multiple environments - from the dentist to the ER. Within the research perspective, these technologies would hopefully improve retention through regular electronic feedback, reminders, and perhaps continued data collection after initial contact.

Like others, I think introducing new ways to educate and engage patients at point of care, to enhance both the care experience (and post care compliance) as well as likelihood of successful participation in research, is important. This kind of work though may be better designed if specific outcomes are identified and based on those, specific targets clarified for developing tools/apps/content. Having been involved in many experiments that use technology to change behavior, the ones that work are when specific needs are tied to clear content or tool delivery strategies. Currently this feels aspirational (great) but doesn't as yet feel like there's a workable plan with predicted impact on research. I realize that a page isn't much space, but if you can tie an assessment of outcomes on research to a specific tool/social media/content strategy, that will help the assessment of your proposal.

Like everyone else, love, love, love x 10 the concept! But also, as Mini states below, I think this could be improved with more specificity and focus. 1. The application for ALL patients visiting the radiology centers is likely to be very popular, but this feels more like a clinical service/delivery application rather than one that is central to CTSI goals in training/research infrastructure/etc. If you are proposing to recruit patients to clinical trials through this mechanism, then that's a different story... and would seem fully in scope. In which case, you should provide more details about how you propose to do this. 2. The research app is also very interesting. However, it is not clear what "gap" this tool will be addressing. If you could elaborate on what the current key deficiencies/limitations and barriers are with the current system, this can help drive the design and strengths of your proposal. It can/should also drive the metrics you will use to measure the success of this innovation. For example, how big of a problem is study drop-out among participants visiting the radiology centers? 3. The gaming framework sounds very cool, but do you have a specific reason to think this will improve some specific deficiency in the current system? Is there any previous experience or research in gaming you could point to that will help guide this specific application of gaming? 4. What is your business model for these tablets, particularly those for regular patients? Who will pay and why (after the research is gone)? 5. These will be heavily dependent on Med Center IT support... worth budgeting/partnering with them (if not already... not sure). 6. The longitudinal mobile app function for monitoring and feedback is a no-brainer. The CRC or another relevant CTSI program should just start doing this with you now! 7. Why are you providing the tablets to only half of the sample? Again, great ideas and a great area to apply IT solutions!

Great idea. I agree with Ralph that further defining the "gap" in the current system is needed, and with Mini regarding targeting of specific outcomes. We have three of these "kiosk"-type apps deployed now and all of them have been very popular modes of collecting information. From an enterprise IT perspective, I am concerned about having these developed as "one-off" applications that don't interact with other UCSF systems or each other. We've painstakingly developed a framework to address interoperability while adhering to PHI and HIPAA requirements. Why re-invent the wheel? We're more than happy to share this framework for use by outside entities. My advice is the sooner we standardize, the sooner we can actually start building the type of ecosystem we need to develop these applications as meaningful platforms for data collection and collaboration.

For some reason the previous reply was posted as anonymous. I posted it.

Regardless of whether the proposal is funded, it would be great to know what frameworks exist on campus that would meet the needs identified in the proposal. Specifically, a framework that in a simplified manner improves a patient's understanding of the complex medical procedure they will undergo (in this case, radiological imaging) while also raising their awareness of ongoing clinical research options during their visit to UCSF.

Here's a link that describes one such UCSF framework and some of the applications built on it: http://www.ucsfmservices.org/home/. The UC MWF (Mobile Web Framework) is also in use here in the form of the UCSF Mobile app. When Tuhin Sinha was with the radiology team they also created a very nice framework based on Ruby on Rails which is still in use. Each of the frameworks has its strengths. What we have found is that the design of the intervention and the creation of the content to be used by the apps are the tasks that take the longest, not development.

Thank you for sharing the link. Some aspects of this proposal came from discussions I had with Iana Simeonov in the context of "Choose Your Poison" app. I believe it was built using the framework you referenced and we certainly plan on using the framework for this project. The simplicity and user-friendliness of that app makes it interesting as an approach to improve user participation and retention in clinical research. We wanted to also explore further improvements in recruitment while not incurring any additional marketing costs, hence the suggestion to try it in the clinical setting. The framework developed by Tuhin Sinha is quite actively used and we have further extended it on the data capture side. However the human interaction side of it needs further development, to better leverage mobile technologies.

I have similar thoughts as voiced by some of my colleagues previously. Great idea and great project, but I am missing/confused about some of the details of the research component. In terms of likelihood to stay in the longitudinal research project, I assume that this could be across a variety of possible projects, or are there 1 or 2 projects that you will be focusing on? You discuss game dynamics for a particular research project in terms of the participant's experience with being a subject in a research study. How is this going to happen? Are you and your collaborators going to develop this for each person's research project? I think I understand how the clinical intake forms could be translated to interactive versions on the tablet and that satisfaction with the tablets/process is one outcome.(Other outcomes could also be defined for this component of the work.) I am less clear about how the tablets are linked and integrated with other longitudinal research and who is doing the translation of that research (a lot of work) into the interactive module. Exciting ideas but some of this additional information would be quite useful.

Commenting is closed.

Column kit for analytical method development

Type: 
Proposal Status: 

As a core pharmacology laboratory for the Center for AIDS Research at UCSF, the Drug Research Unit (DRU) often encounters problems with column selection during method development. Here I propose to seek funding to buy 25-50 analytical columns as a column kit for method development.

Background (Rationale): Drug quantification requires optimal separation to achieve high selectivity and specificity. Different drugs may require different columns to achieve optimal separation and quantification. For a new method, we generally choose a column used for existing assays at the DRU, or one based on similar methods from the literature, which we often find is not the best choice, as laboratories like ours typically have only a limited number of analytical columns available to test during method development. Here I propose to purchase a set of columns covering different separation mechanisms and column dimensions. A general summary of the different column types is as follows:

  1. Reverse phase columns: C2, C4, C8, and C18 columns.
  2. Normal phase and HILIC columns: un-derivatized silica-, amide-, diol-, cyanopropyl-, aminopropyl-, zwitterionic sulfoalkylbetaine-based columns.
  3. Hypercarb column: based on graphitic carbon; its retention mechanism differs from that of reverse phase and HILIC columns, and it shows good retention of polar compounds.
  4. Ion exchange columns.
  5. Monolithic columns: Chromolith™ SpeedROD RP-18e.

Analytical columns come in different particle sizes (1.8, 2.7, 3, and 5 μm). In particular, sub-2 μm columns are used with the UPLC system recently acquired by the DRU through a RAP CTSI-SOS shared instrument award. Fused-core columns with a particle size of 2.7 μm are suitable for the LC system coupled with the API5000. Columns with 3 and 5 μm particles can be used in all LC systems in the DRU.

Analytical columns also come in different dimensions (2.1x50, 2.1x100, 2.1x150, 3x150, 4.6x50, and 4.6x150 mm). I propose to acquire two dimensions for each type of column: 2.1x50 mm and 2.1x150 mm.

Plan of use: The DRU does not have the resources to buy a full range of different columns. If we obtain funding to buy a column kit, it will greatly benefit the DRU and our collaborators at UCSF. The column kit will be used only for column selection during the early stage of method development; only drugs in clean solvent will be applied to these columns. Once a column is identified for a new assay, the same column type will be purchased for the later stages of assay development. Therefore, the column kit can be used repeatedly for any new assay development. The newly acquired UPLC system is open to co-investigators and others at UCSF on Mondays and Fridays, and they can also use the column kit to develop methods. This is especially useful for them because analytical columns in non-analytical labs are very limited, and it is not worth buying a new column if only a small number of samples are to be analyzed.

Budget: Each column costs $400-700 (average $550); we seek $25,000 to secure a variety of columns for method development.

Collaborators: B. Joseph Guglielmo, Sunil Parikh, Katherine Yang, and Janel Long-Boyle, as well as investigators who will use our core facilities.

Comments

Can you clarify who this investment will benefit? Also, what are other options for getting this funded? Wouldn't it naturally fit into a center or core grant, or into a project grant that included use of these methods?

It will benefit collaborators and other investigators who will use the facilities. It is not possible for any project grant to include a budget to buy such a comprehensive set of analytical columns (~50 columns). Typically, a project grant will propose to buy a couple of columns and analytical supplies for a specific method. However, to optimize a method, a large variety of analytical columns would improve method development greatly. Columns from this column kit will only be used for column selection in the early stage of method development, and will not be used in sample analysis for any specific project. Once a column is selected, new columns will be ordered for the late stage of method development and sample analysis using the specific project's funds.

Commenting is closed.

Developing an administrative data system to investigate the social and health impacts of early childhood programs in San Francisco

Type: 
Proposal Status: 

Rationale: There is extensive evidence of pervasive social disparities in health, many of which take root in early life. The importance of early childhood in setting the stage for future health and social outcomes is buttressed by rigorous scientific studies across disciplines. However, what remains less investigated is how well this knowledge has been translated to public agencies responsible for implementing early childhood programs and family support services. The Department of Children Youth and their Families (DCYF) in San Francisco is one of the few departments in the country dedicated exclusively to meeting the needs of young people, and has made extensive investments in child physical and mental health programming and family support services (over 11 million dollars in 2010). However, the effectiveness of these programs in influencing key social, mental and physical health outcomes has not been investigated. Through a collaborative effort of UCSF researchers, the San Francisco Mayor’s Office, the Department of Children Youth and their Families (DCYF), the San Francisco Department of Public Health (SFDPH), and the San Francisco Unified School District (SFUSD), we propose to develop a linked administrative data system that will allow us to track and evaluate the impact of early childhood programming and family support services on academic achievement and mental and physical health.

Plan: We propose to develop a data linkage between the DCYF and SFUSD systems, with the goal that this linkage will be generalizable to other agencies serving youth in San Francisco. The goals for this proposal are to 1) assess similarities and differences in existing data structures used by DCYF and SFUSD, 2) address confidentiality issues related to establishing a linked data system in San Francisco, and 3) select and test linkage variables that will uniquely identify individual-level information across DCYF and SFUSD datasets. Dr. Kaja LeWinn, a social epidemiologist in the Department of Psychiatry, will play a key role in this effort. Dr. LeWinn has spent the last two years working closely with SFUSD, has developed an intimate knowledge of the school-based services provided by DCYF and existing tracking systems, and has established relationships with key partners. To accomplish our goals, our group will meet every other month for the duration of the award, and Dr. LeWinn will meet individually with specific members as necessary.
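As a rough illustration of goal 3, the sketch below shows a deterministic linkage pass over hypothetical DCYF and SFUSD extracts. The field names (first_name, dob, participant_id, student_id) and file names are illustrative assumptions only; the actual linkage variables will be selected and tested as part of this aim, and probabilistic matching on partial agreement would likely be layered on top.

```python
# Minimal sketch of deterministic record linkage; all field and file names are
# hypothetical and stand in for the linkage variables to be selected in goal 3.
import csv
from collections import defaultdict

def make_key(record):
    """Build a linkage key from normalized candidate identifiers."""
    first = record["first_name"].strip().lower()
    last = record["last_name"].strip().lower()
    dob = record["dob"].strip()  # e.g., "2006-09-14"
    return (first, last, dob)

def link(dcyf_rows, sfusd_rows):
    """Return (DCYF participant_id, SFUSD student_id) pairs whose keys match exactly."""
    index = defaultdict(list)
    for row in sfusd_rows:
        index[make_key(row)].append(row["student_id"])
    matches = []
    for row in dcyf_rows:
        for student_id in index.get(make_key(row), []):
            matches.append((row["participant_id"], student_id))
    return matches

if __name__ == "__main__":
    with open("dcyf_extract.csv") as f1, open("sfusd_extract.csv") as f2:
        print(len(link(list(csv.DictReader(f1)), list(csv.DictReader(f2)))), "exact matches")
```

Any real implementation would, of course, operate under the confidentiality and data-use agreements addressed in goal 2.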

Criteria and Metrics of Success: At the end of this pilot project year, we will have developed a reliable linkage between the DCYF and SFUSD data systems and submitted an application for grant support to further develop and implement this system.

Cost: We ask for $20,000 to provide salary support for Dr. LeWinn so she may dedicate research time to further developing this project and relationships with key public sector collaborators.

Collaborators: Our collaboration is united around the common vision that physical and mental health disparities begin early in life, and that a longitudinal, linked data system that tracks the usage and effectiveness of publicly available programs will be an instrumental resource for both researchers and public policy makers. Key public sector collaborators and supporters include: the Mayor’s Office (represented by Hydra Mendoza); Richard Carranza, Deputy Superintendent for Instruction, Innovation and Social Justice, SFUSD; Maria Su, Director of the Department of Children Youth and Their Families; and Ritu Khanna, Assistant Superintendent, Research Planning and Accountability, SFUSD. Dr. LeWinn will play a key role in maintaining this partnership in collaboration with Orlando Elizondo, Director of the SFUSD/UCSF Partnership.

Comments

Thrive in 5 (http://thrivein5boston.org/) is a similar collaborative initiative in Boston that seeks to unite families, early education and care providers, health care providers, and community organizations to make sure children are "ready" for school. Collaboration is organized around 4 sectors: Ready Families, Ready Educators, Ready Systems, Ready City and some of the indicators they chose to measure progress may be useful for you later in your research. A wonderful project!

Rebecca, thank you for directing me to Thrive in 5! It looks like a great project and one that I’d be happy to learn from as we move forward.

What a great idea. If I am reading this correctly, there are some similarities to a system that CTSI has just been working on -- in terms of developing data linkage between various systems for the purposes of reporting, tracking, evaluating, and also creating a framework by which further data sources can be added. If phase 1 includes a true reliable linkage, as the foundation, will you need more funds for some technical expertise such as a database architect / analyst? These roles were critical in the recent CTSI project.

Leslie, yes, there is a database architect at DPH whom we would be working with to complete this project. In fact, a very similar project is already underway, but for a much smaller subset of San Francisco’s population. We are hoping to build on that experience and broaden the scope of the data linkage. Thanks for your comment!

Kaja, your proposal targets critical areas of childhood needs and could be applied to any city and county community as a model program. The CTSI CRS has an extensive list of studies currently underway and works closely with the national NCATS Children's Health Oversight Committee to monitor and improve on the diverse spectrum of health care needs for children. It would be great to work with you to identify commonalities and areas of collaboration where we could help contribute to your nice proposal.

Mario, thank you for directing me to the NCATS oversight committee. I would love to collaborate and learn more about what you are doing!

Creating integrated data bases is a worthwhile goal, and this is an example where UCSF expertise in data management and analysis can be of added value to SFUSD and DCYF. I would be interested in some specific ideas of how a more integrated data system would actually be used operationally for program improvement and community engagement research that could inform program delivery and community needs assessment. I recognize that this is not an intervention study, but would be good to see where this would ultimately lead in practical terms. Also, I would encourage the folks involved in this proposal to consider how it might align with the broader UCSF-DPH-SFUSD-other agency collaborative efforts being conducted under the auspices of the CTSI SF Health Improvement Partnerships initiative (www.sfhip.org).

I would be very happy to integrate the goals of this proposal with the larger efforts of SF-HIP. Thus far, I have seen that efforts to establish these linked data systems are often occurring simultaneously across different agencies, and I think UCSF can help unify those efforts. At the moment, I have been encouraging my partners to think about what they would like to learn from this type of system to promote commitment to the project, but I agree that a larger vision for what this type of data linkage might yield from a practical perspective is necessary. I hope that we can meet to discuss this further!

Kaja, past and current resources for the CTSA Health Oversight Committee (CHOC) can be found at: https://www.ctsacentral.org/committee/ctsa-consortium-child-health-overs.... If you are interested in our past reports submitted by the UCSF/CTSI-PCRC, I would be glad to share them, although they are clinical research focused. Best of luck with your proposal and efforts. Mario

Commenting is closed.

Mobile Study Support Pilot Project

Type: 
Proposal Status: 

Rationale: UCSF and the Benioff Children's Hospital have been selected by the national Children's Oncology Group Consortium (COG) of the National Cancer Institute (NCI) to conduct new Phase I Pediatric Oncology studies. Patients enrolled in these studies who are not admitted to the Pediatric Clinical Research Center (PCRC) Inpatient Unit will require evening and weekend support for study procedures, including timed blood draws, sample processing and EKGs. We propose a “Mobile Study Support” (MSS) Pilot, under the CTSI/Clinical Research Services-PCRC, to improve the conduct of translational research. MSS services would assist in providing research procedural care to oncology inpatients participating in clinical trials on the 7-Long Oncology Floor, as well as other UCSF investigator studies. We anticipate the MSS model will become widely utilized as we transition to our new pediatric hospital at Mission Bay.

Plan: A PCRC research nurse, currently 60% FTE, will become an Administrative Nurse (AN) I and will increase her effort to 90%, covering MSS nursing, PCRC outpatient unit clinical care, and administrative duties. The nurse in this position will be familiar with the related study procedures, since the procedures for the Oncology protocols to be supported mirror those of the 70 currently active research studies in the CTSI/CRS-PCRC. Immediate benefits of this pilot are continuity with knowledge of investigator studies and teams, clinical research nursing expertise for these teams, and expert care for patients participating in these phase I trials. The new AN I position of the MSS Pilot is an innovative role that will add value to the growing demand for pediatric outpatient and mobile visits. Because this position contains an administrative component, the hours will be flexible to meet clinical care and administrative demands without requiring payment for on-call time for mobile visits. The MSS pilot design will dedicate maximal effort to protocols approved under the COG Consortium while accommodating variability in protocol volume as this new service line grows. Sustainability will be measured by the increase in investigators supported and the expansion of site locations beyond the Benioff oncology unit on 7 Long (see collaborators below). Six active Oncology studies are requesting immediate support: ADVL0912, ADVL1112, ADVL1013, ADVL1011, ADVL0921, NANT0702. Three additional new leukemia phase 1 studies in various stages of development are also strong candidates: AALL1121, ADVL1114, TACL0903.

Criteria and metrics for success: Measures of success will include number of studies, number of patient locations, number of patients served, patient contact hours, investigator satisfaction with nursing and study support, and cost per patient vs. alternative costs (such as admission to PCRC beds). Challenges and barriers to success will be presented to the CTSI CRS leadership to improve the pilot design. Funding contributions to support this pilot will be sought from investigator grants to ensure the pilot's success and sustainability. Alignment with CTSI Strategic Goals includes responsiveness to: 1) the "CRS-without-walls" model to increasingly support investigators outside the current eight CRS sites, 2) creating cross-cutting initiatives with consortium partners, thus increasing new and potential collaborations to accelerate research, and 3) working with CTSI's Program Evaluation Unit to apply dashboard/metrics methods to monitor and report on progress. This MSS Pilot can also serve as a model for the national COG-NCI group and other CTSA sites, and is in alignment with the model of the National Center for Advancing Translational Sciences (NCATS/NIH).

Cost and sustainability: Administrative Nurse I salary and benefits at 60% are as follows:

$147,394 (salary base) x 60% (effort) = $88,438 (salary) + $20,438 (benefits) = $108,876. $100K is requested for this pilot. After the initial start-up cost, cost recovery from services is anticipated to offset 20-30% of costs in Year 1, with further increases in out years. As MSS demand increases, service providers other than RNs may be included to perform research procedures not requiring an RN, such as performing EKGs and processing specimens during non-business hours.

Collaborators: NCI Children’s Oncology Group (UCSF investigators Drs. Kate Matthay, Steven DuBois, Robert Goldsby, and others), UCSF Intensive Care Unit Research Team, UCSF Cancer Center, UCSF Benioff Children’s Hospital at Mission Bay.

Comments

A much-needed service that will enhance our ability to provide complex phase 1 therapies to complex patients with relapsed cancer. Steven DuBois

Steven, the CTSI-CRS PCRC looks forward to supporting your unique research needs at UCSF via our innovative new model. We are delighted to have you as a partner in this proposal. Thanks again.

This seems like something very useful to pilot -- I think you could highlight the potential for other services (beyond oncology, on both the adult and pediatric sides) to use what is learned from piloting this service and potentially adapt it for other types of studies that take place in a variety of settings in the medical center, such as the infusion centers in ACC and the ED. Oncology studies probably do not have unscheduled study visits, but it would be interesting to think about how this model could be adapted to studies of treatments for conditions that do occur on an unscheduled basis (e.g., status epilepticus).

Amy, thank you for your very helpful comments. This "Mobile Study Support Pilot" falls under the CTSI-CRS "scatter model" initiative to extend CRS services to investigators and research participants outside the confines (and constraints) of a traditional discrete inpatient/outpatient hospital-based unit. With an average of 300 active CRS studies per year across UCSF sites and the East Bay, and 59 supported disciplines, the success of the MSS pilot will allow the CRS to support researchers across the spectrum of research and in diverse settings. Your suggestion to respond to treatments occurring on an unscheduled basis is a great one. Thanks again.

I do not think we should lock ourselves into stating what percentage will be dedicated to each role; demands from each area will determine needs. As written, the AN I would have to be available 24/7 for MSS, which will be a limiting factor in hiring.

Excellent point. Because an increase from 6 to 9 COG-funded studies is likely, the MSS support will likely increase as well and should not be restricted by functional roles. Thanks.

This proposal not only advances the goals outlined in the proposal, it brings the added benefit of creating another significant point of partnership with the HDF Comprehensive Cancer Center. The UCSF CTSI and Cancer Center relationship has attracted the attention of the NIH program officers and appears to be in alignment with the new NCATS model. The success of the Phase I unit at Mount Zion has demonstrated that the arrangement has mutual benefits for the two organizations and the opportunity to greatly improve the patient experience. Providing standard of care and research services in one location promotes a patient focused, cost-effective approach to providing services that makes the unit attractive to industry sponsors. We have seen increased patient enrollment at Mount Zion and have every reason to believe that this family friendly model can have the same impact for pediatric studies at UCSF.

UCSF is one of 17 sites in California (within a 220-site national/international consortium) selected by the Children's Oncology Group (COG) to conduct this unique research. The COG explores the entire clinical-translational research spectrum of childhood cancers: the molecular basis of disease, phase 1 to 3 clinical trials, biomarker discovery, and survivorship issues, as well as the critically important challenges that pediatric cancer patients confront, including the neurobehavioral consequences of disease and therapy. Support for this CRS-PCRC pilot will translate into advances in these key areas of research as well as benefits for patients and families.

This is a cutting edge proposal developed by Dr. Pock, the medical director of the PCRU CRS and organized by Mario Moreno. This is a logical extension of our "scatter model" and represents a bold new collaboration with the Cancer Center. The return-on-investment will be high in terms of efficiency and revenue generation.

Thank you for the insightful commentary. This collaborative initiative with the Cancer Center also clearly aligns with the UCSF Chancellor's key missions: 1) provide unparalleled care to our patients, 2) improve health through innovative science, 3) be the workplace of choice for diverse, top-tier talent, and 4) create a financially sustainable enterprise-wide business model.

Commenting is closed.

“Idea to Impact” (i2i): Translational Digital Health Program

Type: 
Proposal Status: 

Rationale: The rapidly evolving field of digital health has great potential to enhance biomedical research, education and clinical care. Development in this new space is largely being driven by the tech sector, where projects may lack proper clinical focus or scientific rigor and generally do not include health outcome measures to assess effectiveness or impact. UCSF can play a vital role in ensuring that digital health ideas are appropriately targeted to real clinical problems and that they result in meaningful impact, by improving health, improving care, and lowering costs. We propose the i2i Translational Digital Health Program to foster UCSF and industry collaborations to initiate, develop, and test impactful and cost-effective digital technology to improve biomedical research, education, and clinical care.

Plan: This i2i proposal is to identify and develop the critical resources and programs necessary to facilitate and accelerate the “idea to impact” trajectory for digital health innovators both within and outside UCSF, in a multidisciplinary way that combines sound scientific methods with technology, design, business strategy, fundraising, IP, regulatory and marketing expertise. This first year of i2i will focus on the following:

  1. Establish a focal point for CTSI Digital Health initiatives, with multidisciplinary internal/external partners
    • Create an intersection for collaboration between UCSF Digital Health initiatives (e.g., BMI, ISU, QB3, ITA, SOM, Institute for Reducing Health Care Costs and other innovative efforts at UCSF).
    • Create a strong UCSF presence in Digital Health for external partners, working with Virtual Home on a website, with campus on social media, with a Development officer, etc.
    • Explore and establish external partnerships (e.g., UCs, CITRIS, investors, companies, public health).
    • Host Annual UCSF Digital Health Summit: to showcase “ideas” and “impacts,” convene partners
  2. Identify and implement needed processes and resources for accelerating “idea to impact” for the main types of digital health projects, e.g., projects aiming for a) internal (e.g., RAP) and external grants; b) operational deployment to improve health locally to globally; and c) commercialization. We will:
    • Convene a series of meetings with faculty/trainees, staff, and various external partners (e.g., investors, tech companies, start-up incubators, design firms, etc.) to define hurdles, needs, objectives
    • Work with ITA and other groups to define models for supporting each type of project, and implement initial steps with FAQs, standard agreements, etc. as appropriate
  3. Build a vibrant and supportive Digital Health i2i community
    • Establish an external Digital Health consultant panel, modeled after T1 Catalyst, to serve as consultants to UCSF, grant reviewers on RAP proposals, i2i Advisors, etc., with networking events
    • Establish an internal consultant panel of research, clinical, education, technology, and informatics experts to serve as consultants, grant reviewers, Services, and i2i Advisors
    • i2i Web presence - with Virtual Home, enhance UCSF Profiles (digital health keywords), the Digital Health Innovators Forum, links to useful collaboration tools, a survey of needed resources, and crowdsourcing of i2i tips
    • Quarterly or bi-monthly multidisciplinary “Eye to Eye” roundtables featuring specific research or clinical “problems” and potential digital “solutions,” presented to an audience of External and Internal Panel members, to foster relationships and potentially ignite Digital Health projects.
  4. Define and implement recharge and IP models for internal and external consultation. We will work with Consultation Services, ETR, ITA, etc. to adapt and develop processes for UCSF faculty consulting to industry, for industry consulting to faculty/staff, and for staff (e.g., ISU, ITA) consulting to faculty

Criteria and Metrics of Success: a digital health website with a communications plan and metrics; a defined internal pathway/processes for idea transfer; an implemented recharge mechanism for Consultation Services; 20 external consultant panel members from a range of disciplines and public/private sectors; 15 internal consultant panel members; an Annual Summit and 4 “Eye to Eye” events; 2 new external partnerships.

Cost and justification: Staff support, 0.4 FTE: $40,000; website and communications: $20,000; Symposium and “Eye to Eye” roundtables: $10,000 (with sponsorships to defray costs); external consultant panel: $5,000.

Total: $75,000. The sustainability model will include recharge, shared royalty/licensing programs, partnerships with clinical enterprise/external groups, UCSF Digital Health Consulting Panel (to external parties), and philanthropy, with self-sustainability anticipated in 3 years.

Collaborators: Sim/Sawyer (BMI), Lee (ETR), Pletcher (CS), Yuan (Virtual Home), Lium (ITA), Melese (SOM), Crawford (QB3), Jorgenson (ISU)

Comments

I can see how this project would decrease the amount of duplication that is going on in the creation of validation models for digital health at UCSF. It would be great to be able to find others working on similar projects in the digital sphere to share successes and failures. I really believe that you are right in wanting to push more health into digital rather than the other way around which is the current trend... Will the forum and the enhanced profiles be able to connect digital innovators in different fields or even schools (i.e. dentistry and nursing). Would a registry of projects be a more robust way of connecting digital health innovators?

Hi Jesse, the "i2i" (Idea to Impact) Translational Digital Health proposal is designed to address your comments on the potential benefits of interaction between UCSF Digital Health Innovators. Our proposal includes a mechanism for that and also will provide other necessary resources to foster sound ideas and move them all the way to meaningful impact. This program is designed to ensure the intersection of innovators with each other but also with an Internal and External multidisciplinary panel of consultants who have the expertise needed to initiate, develop,and ensure successful utility of digital health innovations arising at UCSF. We do plan to include resources to address those you specified: 1. "find others working on similar projects in the digital sphere to share successes and failures" 2. "to connect digital innovators in different fields or even schools (i.e. dentistry and nursing). 3. The plan to create a Community of Digital Health Innovators at UCSF to optimize the sharing of resources and expertise (internal and external)and will certainly include creating a "registry of projects". The Idea to Impact (i2i) proposal is aimed at providing "fertile soil" for the fostering of sound ideas and facilitating the development and implementation of them. Your input on ways to better support innovators in digital health at UCSF are greatly appreciated. AJS

I think that looking at Digital Health from an overall "End-to-End" perspective is a fantastic idea. I worked on a similar project called "Concept-To-Product" (C2P), and the end result became a central tool to perform both strategic planning and operations improvements. There are many functions involved in this process; are you planning on delivering a basic process map to understand the key high-level steps, R&R, and rough FTE & timing of activities needed to deliver a Digital Health product? A process map could be particularly helpful in establishing a common base for communicating about i2i. I wasn't sure if that is what you meant by "models" in the 2nd bullet of point #2. Cheers!

Hi Fabrice, your comments are very helpful. Yes, we do plan to build "process maps," and your expertise in this area would be really valuable. I also plan to explore the possible need for somewhat different paths, which may have slightly different process maps or trajectories to development. For example, some problems with their paired digital solution might be better suited for commercialization, while others might be best suited for public/global health applications or internal clinical/research use. There is likely to be crossover between these, but it appears that the path to development (and all the way to "impact") can vary depending on the ultimate target. I would be very interested in your input as we develop the "Idea to Impact" program and build process maps. Thank you for your comments. Aenor

Hi Ida - Consultation Services is happy to help support implementation of a new recharge mechanism for i2i. Can you clarify how this would be different from the recharge we already have for our mHealth "niche" consultants in the Design Unit? Are you talking about providing other types of services (including development/programming/other technical services)?

Thanks, Mark. The recharge would be different from what we already have for the following kinds of consultations: 1) a faculty member providing a consultation to an external group (e.g., a company) -- there are arguments for and against an institutional structure for what has been a private 1-on-1 arrangement; 2) UCSF members seeking consultation with members of the external Digital Health Consultants Panel. And yes, in mHealth Consultation Services, we are already referring people to developers (eg ISU). There are other internal and external developer/design services we may want to refer to, and need to establish scope, policies, etc. for that.

Commenting is closed.

A Digital Clinical Research Center (dCRC) for Translation of Digital Health Interventions

Type: 
Proposal Status: 

Rationale: The confluence of online social media, smart phones, and sensor technology is giving rise to a tidal wave of digital health interventions that have vast potential for improving health at low cost. As with consumer software, digital interventions are more likely to be effective if user feedback and determination of effectiveness are sought early and often. At present, however, researchers face high costs and difficulty in developing app ideas into prototypes, testing prototypes on real users, and validating the intervention. These difficulties are a significant barrier to securing extramural funds, and to the translation of digital health technology into population health. The challenge the dCRC addresses is to reduce the expense, time, and expertise required to prototype and validate digital health interventions.

Plan: The long-term goal of the dCRC is to support the full range of digital health intervention research, from early prototype development to pilots to large scale RCTs. In this first year, we will focus on a large class of projects that we are seeing in CS, BMI, and CRC: projects that want to 1) collect and analyze sensor-derived actigraphy and geolocation data, and 2) leverage online social networks (from Facebook, in particular) for recruitment and for delivery of novel network-related interventions. These project needs require new methods for data collection, analysis, visualization and feedback to patients, and confront investigators with a variety of barriers (technical, design, ethical). It is inefficient for each team to pull together the multidisciplinary expertise needed to surmount these barriers. To meet this challenge, we propose to create a Digital Clinical Research Center (dCRC), housed within CRS in collaboration with BMI, CS, and ETR, that will support development and translation of novel digital health interventions by:

  1. Provide re-usable software modules for data collection and analysis (a minimal sketch of one such module appears after this list). Through a combination of UCSF-led development and access to open software from Open mHealth (Co-Founder Sim), we will provide researchers with standard modules for collection, analysis, and visualization of:
    • Actigraphy and geolocation data collected via smartphone and other sensor devices
    • Online social network data from Facebook.com collected via “Facebook Connect”. Modules will facilitate feedback/visualization of intra- and inter-network health comparisons. (Based on prior work by The Social Heart Study, PIs Pletcher and Fowler.)
    • An initial set of self-reported demographics and health-related behaviors including exercise, diet and sleep collected via mobile devices. Our efforts will align with and contribute to NIH’s PROMIS and PhenX projects, which aim to standardize clinical research data collection methods.
  2. Recruit a standing dCRC cohort interested in digital health interventions. PRS will lead online recruitment of at least 2,000 broadly representative participants interested in testing new health technology and undergoing some or all of the measurements detailed above.
  3. Develop CRC-type procedures for using the dCRC cohort. Using standard CRCs as a model, we will develop an application, a review process, protocol development support (via CS), and a recharge for UCSF and outside investigators (including industry) interested in using the dCRC cohort for early (pilot) and late-stage (validation) testing of digital health interventions.
  4. Develop ETR-type resources for commercializing validated digital health interventions. ETR will develop matchmaking, IP consultation and other resources to support academic investigators interested in licensing or otherwise commercializing their products.
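To make item 1 concrete, the sketch below shows the shape of one re-usable module: aggregating raw smartphone actigraphy samples into daily activity totals that could be fed back to participants. The input format and function name are assumptions for illustration only, not a specification of the dCRC or Open mHealth modules.

```python
# Hypothetical sketch of a re-usable actigraphy summary module; the
# (ISO timestamp, activity count) input format is assumed for illustration only.
from collections import defaultdict
from datetime import datetime

def summarize_daily_activity(samples):
    """Aggregate (ISO timestamp, activity count) samples into per-day totals."""
    daily = defaultdict(int)
    for timestamp, count in samples:
        daily[datetime.fromisoformat(timestamp).date()] += count
    return dict(daily)

if __name__ == "__main__":
    demo = [
        ("2012-05-01T08:15:00", 120),
        ("2012-05-01T12:30:00", 340),
        ("2012-05-02T09:00:00", 210),
    ]
    for day, total in sorted(summarize_daily_activity(demo).items()):
        print(day, total)  # simple per-day feedback that a visualization module could plot
```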

The dCRC will drastically reduce the marginal costs and expertise required to translate good digital health intervention ideas into products that improve population health. This service is highly relevant to industrial interests, and we expect that revenue generated from allowing industrial access to the dCRC will eventually support core services and highly subsidize costs for UCSF and other academic investigators.

Criteria and Metrics of Success: 5000 participants recruited (500 by end of Year 1); 10 new re-usable data collection/processing modules, in addition to modules available through Open mHealth; 5 projects using dCRC modules in their technology; 2 projects using the dCRC cohort; a recharge mechanism and a dCRC application and review process in place.

Cost and justification: $40K in programming costs, $10K for recruitment, $50,000 for 0.5 staff FTE = $100,000 total. We expect initial recharged projects early in FY08, and self-sustainability within 3 years.

Collaborators: I. Sim (BMI, Open mHealth); M. Pletcher (CS); B. Balke, N. Nasser (CRS); J. Lee (ETR); J Fowler (UCSD); J. Olgin (UCSF Chief of Cardiology); and T. Parsnick (SF Coordinating Center)

Comments

Excellent idea! Although it would be interesting to know what kind of data/research you have to support the assertion that "researchers face high costs and difficulty in developing app ideas into prototypes, testing prototypes on real users, and validating the intervention"? Having said that, from personal experience, friction has mainly been in identifying technically proficient collaborators to develop the applications/technology. It would be great if eventually the dCRC assists with that aspect as well, which is alluded to in the Plan of the proposed project. Lastly, I think it would be useful to ultimately consider incentivizing the user experience to motivate faster and lasting adaption.

Hi Deepak, the "i2i" (Idea to Impact) Translational Digital Health proposal (see below) is designed to address your 2 comments on the need for assisting UCSF Digital Health Innovators, including those you specified: 1. "identifying technically proficient collaborators to develop the applications/technology" 2. "incentivizing the user experience to motivate faster and lasting adaption" The Idea to Impact (i2i) proposal is aimed at fostering the formation of sound ideas and facilitating the development of them. Your input on ways to better support innovators in digital health at UCSF are greatly appreciated. AJS

Together with Jeff Jorgenson at ISU, we surveyed the UCSF community in August 2010. At that time, 49% of 537 respondents strongly agreed that they would be interested in using mobile programming and consulting services if they were offered at UCSF. The free-form comments expressed needs for developing, testing, and validating apps. As part of my work in Open mHealth, I also hear about these needs from academia and industry alike, although that is anecdotal. Please see the i2i open proposal for other UCSF activities we propose for helping with the need to find technically proficient collaborators (http://accelerate.ucsf.edu/forums/improveresearch/idea/7053). We welcome your suggestions on that proposal too!

Regarding your last point about incentivizing the user experience, we are VERY interested in "gamifying" research participation as you describe in your Open Proposal idea, and would love to work together to create easily-adaptable modules that research studies (and the dCRC in general) could use to keep participants engaged.

Hello Ida, I'm really interested in being one of your charter projects for the dCRC. I have an industry partner and a clinical site ready to create a validation project for a HIPAA-compliant physician-to-physician communication app. The project would be an extension of an mHealth Research Grant application that I just submitted. Please email me off-line if you would like more details (shantzj@orthosurg.ucsf.edu). I think this is a great cross-organization idea.

Thanks for your interest -- if we get funded, we will certainly be looking for charter projects. We will issue a call to the UCSF community for appropriate projects when the time comes.

Ida also works with Rock Health, an incubator based in SF Chinatown that provides support to IT entrepreneurs. She referred a group that developed a smartphone app aimed at reducing complications after hip and knee replacement. Our proposal was an initial selection for RAP mobile health. We will use the integrated claims repository to track outcomes.

Thanks, Steve. Over time, we will work with CTSI's Patient Recruitment Service to build more connections and overlap between the dCRC cohort and UCSF patients, and through BMI to make it easier to access patient-specific data in IDR and elsewhere.

Ida. This sounds great. Ambitious in a good way. It would be good to know what's already happening at UCSF in developing these tools. One of the other proposals was to encourage the development of new apps, and that could interface with yours, I would guess. I'd also wonder if all the public discussion of confidentiality concerns with Google et al. is being addressed in this. If Facebook is being used, does the potential participant/subject have to disclose disease-related information?

- I'll let Ida respond to the first part of the comment. - Regarding confidentiality concerns: This is a big deal for studies using online social network data, both in terms of perception and reality. Even when study data are NOT being shared with Facebook/Google/etc and study materials state this explicitly, we've received feedback from pilot studies that potential participants are really scared off by confidentiality concerns. We'll address this head on with our recruitment efforts with the intrepid support of the Participant Recruitment Service! Commercial entities are successful in overcoming this barrier, and so can we. There are also real concerns when it comes to sharing information about your own social network. For example, information about your Friends is inevitably shared with the research study without their explicit consent. We've worked out a good compromise with CHR for our pilot and an open collaboration with the CHR, and should be able to navigate these waters.

UCLA has a Mobile Web Framework which allows for quick deployment of apps that run in web browsers on cell phones. Individual projects are also developing some of these tools, but not in a way that is reusable across projects. Several of the other Open Proposals would certainly interface with this one. The dCRC would help to bring a degree of technical cohesion across these projects as they are deployed across UCSF, and would also provide tools and expertise for validating them. The Open Proposal on research networking would not interface technically with the dCRC: Facebook does not use OpenSocial, while we propose to focus initially on support for collecting Facebook social network data.

Terrific idea and one in which CRS could potentially collaborate and expedite.

This is a great idea! Targeted social media networks and mHealth applications are effective and inexpensive ways to recruit patients and collect data. While the web is an effective means for collecting data, researchers sometimes find themselves off-site without internet access. 3G and 4G networks available through iPhones and iPads provide an opportunity to collect information when internet access is not available. Unfortunately, websites that have not been adapted for mobile devices can be difficult to use on those devices. Usable data collection/processing models on mobile devices will reduce research costs. Expanding access to and validating digital interventions will have a long-term impact on research.

I have always been supportive of this effort as I think it is very far-sighted in its goals. Certainly the concept of Open mHealth providing re-usable tools is a very noble and egalitarian one. I think perhaps it has had too broad a scope to get the traction it deserves at UCSF. I really would like to see more UCSF investment in this area. I would think that a very focused pilot using these assets followed up by applying them to a strategic area within UCSF would be a worthy undertaking. This is an enterprise concept and finding support for such broad ideas seems very difficult. I'd like to hear how others overcame this hurdle.

Commenting is closed.

IT Improvements Needed to Make APEX useful for Clinical Research

Type: 
Proposal Status: 

Rationale: The APEX system has been promoted to the faculty as providing a new and extremely valuable means of doing clinical research, especially for discovering previously unappreciated clinical associations such as the relationship between kidney stones and myocardial infarction. Such associations would be very valuable in discovering underlying mechanisms of disease. However, the APEX system falls very short of fulfilling this promise, because it has been primarily designed for maintaining information for medical practice, and for billing. The problem is that it forces the physician to use vague descriptors of disease that are so imprecise as to be of no value whatever in the discovery of associations. For example, in our Lipid Clinic we have discovered a form of dyslipidemia, cholesterol 7 alpha hydroxylase deficiency, that we believe afflicts about 100,000 Americans. The APEX system refuses to allow us to enter this diagnosis for search purposes, insisting on “cholesterol problem” as a substitute. Because there are probably fifty individual diagnoses that could fall under this heading, all value to translational research is lost.

Also, there are data from emerging biomarkers, etc., that it will not accept for search purposes. An example is the biomarker prebeta-1 HDL, also discovered by our group, which is emerging as a robust indicator of risk of myocardial infarction. We have measured this biomarker in nearly four thousand patients. We should be able to enter it with the restriction that it cannot be used for clinical decision making, while allowing faculty to access its value for clinical research.
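One way to picture the kind of change we are requesting is a supplemental, research-only descriptor store sitting alongside the clinical record. The sketch below is only an illustration under our own assumptions; the table and column names are hypothetical and do not reflect the actual APEX/Epic data model, and the biomarker value shown is a made-up example.

```python
# Illustrative sketch only: a searchable, research-only descriptor table.
# Table name, column names, and the sample value are hypothetical.
import sqlite3

conn = sqlite3.connect("research_descriptors.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS research_descriptor (
        patient_id    TEXT NOT NULL,
        descriptor    TEXT NOT NULL,  -- e.g., 'cholesterol 7 alpha hydroxylase deficiency'
        value         REAL,           -- e.g., a prebeta-1 HDL level
        units         TEXT,
        research_only INTEGER NOT NULL DEFAULT 1,  -- flagged as unusable for clinical decisions
        entered_by    TEXT,
        entered_on    TEXT
    )
""")
conn.execute(
    "INSERT INTO research_descriptor VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("MRN-0001", "prebeta-1 HDL", 52.0, "mg/dL", 1, "lipid_clinic", "2012-05-01"),
)
conn.commit()

# Searchable for research: find every patient carrying a given precise descriptor.
for patient_id, value, units in conn.execute(
    "SELECT patient_id, value, units FROM research_descriptor WHERE descriptor = ?",
    ("prebeta-1 HDL",),
):
    print(patient_id, value, units)
```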

An academician from the Cleveland Clinic, lecturing here recently, stated they also found APEX nearly useless for research, leading them to re-engineer the IT to overcome these issues, making detailed information searchable.  This was done carefully within the envelope of APEX without altering its function for clinical practice, billing, etc.

Plan: We would bring in expert IT personnel to make the changes in APEX, fitting it to our particular needs and system characteristics at UCSF.  We would support part-time effort of a clinical faculty member to supervise the entry of appropriate descriptors, following the experience at the Cleveland Clinic.

Criteria and Metric:

We can test the system for association of descriptors in widely diverse information fields.
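As a hedged sketch of what such testing might look like (not part of the budget below): once precise descriptors are searchable, the association between any pair of them, such as the kidney stone and myocardial infarction example in the rationale, can be screened with a simple 2x2 contingency analysis. The cohort data below are made up for illustration.

```python
# Sketch: screen two searchable descriptors for association via a 2x2 odds ratio.
def odds_ratio(patients, descriptor_a, descriptor_b):
    """patients maps patient_id -> set of descriptor strings; returns the 2x2 odds ratio."""
    both = a_only = b_only = neither = 0
    for descriptors in patients.values():
        has_a, has_b = descriptor_a in descriptors, descriptor_b in descriptors
        if has_a and has_b:
            both += 1
        elif has_a:
            a_only += 1
        elif has_b:
            b_only += 1
        else:
            neither += 1
    if a_only == 0 or b_only == 0:
        return float("inf")
    return (both * neither) / (a_only * b_only)

# Toy cohort, not real patient counts.
cohort = {
    "p1": {"kidney stone", "myocardial infarction"},
    "p2": {"kidney stone"},
    "p3": {"myocardial infarction"},
    "p4": set(),
}
print(odds_ratio(cohort, "kidney stone", "myocardial infarction"))
```

A formal screen would add a confidence interval or an exact test, but the point is that precise, searchable descriptors make this kind of query trivial.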

Approximate cost:

Expert IT consultation and engineering $40K

Faculty time for descriptors: $15K

Collaborators:  Faculty from diverse disciplines would be welcomed to bring precision to the descriptors in their respective fields.

Comments

Great idea to leverage APEX for research. It would be helpful if you could articulate in more detail, how you plan to bring in faculty from diverse disciplines to participate in this effort. Another barrier in conducting clinic-based interventions is that APEX does not allow integration of external data (e.g. those generated from patient-facing technologies/apps) into the system to facilitate/improve health care. Could this be addressed as well?

I think this would be an opportunity for any UCSF contributor to participate. Surely individuals in diverse fields will have much more refined concepts of clinical syndromes and perhaps newly defined disorders. The more participation, the better this will be able to serve the translational investigation community. The problem with data mining approaches is that the ICD-9 codes and other restricted lists are just that. They do not keep up with the progress of discovery. There has to be a means of putting searchable descriptors that have scientific meaning into the system if it is going to be of use in translational research at an institution like ours. I think the collective wisdom of investigators in the diverse fields at UCSF can make this a valuable resource for discovering many relationships among disease states and their clinical and laboratory manifestations that have escaped discovery to date.

I could not agree more with the limitations of Apex in terms of coding of diseases and phenotypes. I share the same concern expressed by Kathleen, however, that it will take an enormous effort to modify the system in a way that solves this across all diseases. Are you sure that $45K would do this?

Sounds like a great idea, but I am on the same page as Dan. For such a large upgrade to the system, from a software perspective, it doesn't seem like $45K is enough money to get it done. In terms of coding versus mining text fields, there are no other controlled vocabularies to use in such an effort? While this too would be daunting, you get so much more from using an established ontology - from synonyms to related terms.

John and Mini, this project sounds like a precursor to making APEX amenable to the more detailed diagnoses that will be available with ICD-10 coding. If this is the case, one potential method for improving data capture from APeX might be to instead focus on the hospital and physician billing data, which are much more uniform in format than unstructured notes. I am very interested in participating in the implementation of MyChart. This is the patient portal where it is envisioned that physicians will be able to have patients fill in surveys prior to their visit and, together, review these data in the clinic. In the past, the distribution of surveys to obtain SF12, pain, and disability scores was considered "research". The goal is to routinely distribute these surveys to patients, integrate the results into their charts, and use these data to facilitate patient/physician discussion regarding the course of care.

Great idea! You have articulated a couple of concerns with APEX and possible solutions. Are there any other concerns (in addition to the ones stated above) with using APEX for research and would re-engineering the system tackle all of the issues?

I heard a comment at the Dean's retreat from the keynote speaker that (as I heard it) said that getting research data from the form entry in EMRs was notoriously unreliable and advocated for mining the text fields as a better alternative. As the VA EMR is mostly text, this was encouraging. The last thing I'd do is to add yet more terms on top of the thousands already in Apex!

Agree this proposal represents an important vision for how the Apex project should be keeping research efforts in mind while being implemented. While the priority of this issue is high, the scope of the project described is extremely large. Even if narrowed substantially, it will be hard to carry forward in the next 6-9 months given the priority placed on implementing Apex in a clinically safe and efficient manner. Whether large or small in scale, this work will require integration with Apex analysts and the IDR working group, and would probably require more than the small amount of funds requested.

Adapting EMRs to improve their utility for research is a really key area not only for UCSF but across other academic medical centers. Couldn't agree more, and I'm glad you're highlighting this. Since reading the submission, I've been learning more about the status of APEX here at UCSF, and it seems like there's actually good momentum in integrating the research mission with the medctr rollout. In particular, APEX does apparently have a richer vocabulary for use than what may be viewable right now - it's just going to take time to roll these pieces out carefully. Of course, I'm sure any progress will seem too slow for those involved in research, but I fear there may not be much room for quick fixes here. Here's a related proposal - how about we propose bringing in academics who've seen good examples of EMRs adapted for research (like the colleague you mention) - exposing ourselves to their progress may help accelerate our own interventions.

One advantage of electronic medical records is "easy" access to data for research. I think it’s important to make that access a priority and a reality at UCSF. Departments across campus will have similar needs to this one. Is it possible that IT modules from EPIC already exist that can help capture data for research purposes? What can we learn from other research institutions that have adopted EPIC?

The comments on this post reflect the intense interest in access to APEX that we have seen on the mHealth side as well. The need to leverage APEX for clinical research purposes outlined by this proposal is well thought out and very worthy of consideration. It also highlights a bigger issue. How does UCSF best leverage the tremendous investment UCSF Med Ctr has made and the tremendous wealth of information and information flow that APEX provides? This is the start of a continuous flow of ideas all competing for finite resources on how to best use APEX. At Apple the yearly planning cycle would include dozens upon dozens of proposals for each major system. All of them were very good ideas. The proposal of the good idea itself was not enough for the proposal to be approved. It also needed to include the value and the return on investment (yes, the dreaded ROI) of each proposal. I think all APEX proposals will need to provide the expected value derived for UCSF. They need not all be monetary - though let's face it, in this climate those should get looked at first.

This is an incredibly important project; Apex was implemented as a means to optimize patient care and billing, not for research. In an academic institution such as ours, Apex needs to be able to keep up to date with the discoveries. This is a problem that will be easier to tackle at these early stages, as Apex is being implemented. From here on, we can move forward and build this tool for the future. This proposal addresses this fundamental need at this early stage. For sure the academic community will welcome this initiative - it will help our institution provide a valuable tool to the research community. Apex has the potential to become a valuable source of information that is able to provide accurate diagnoses for our patients as new discoveries are made. It will not only improve research, but patient care as a whole. We should be able to learn from other institutions, such as the Cleveland Clinic, and implement our own valuable, up-to-date database. This project can become large, but early intervention in the process, even at a small scale, is a priority. From then on, we can scale it up to fit the needs of the research community and patient care of the whole institution.

It's clear that, in an institution like UCSF, there are many clinical insights that could provide the basis for important new discoveries in the interrelationships of clinical observation and laboratory findings. However, there must be a means of getting these academic observations of emerging syndromes and sophisticated new biomarkers into the clinical database in a searchable format. Clearly, the Apex system was designed for efficient clinic operation and billing and, in its present state, is totally unserviceable with respect to supporting translational discovery. The experience at the Cleveland Clinic indicates that a sophisticated, searchable database can be created within the envelope of Apex, leaving its utility to medical practice and billing intact. We should draw upon their experience and that of other academic centers but develop a system adapted to our own clinical practices. This is an opportunity for all UCSF academicians to contribute their individual insights to this endeavor. An example of the kind of discovery this could engender is the recent discovery of a relationship between certain aspects of HDL metabolism and ovarian cancer. There's no way to make a cost/benefit analysis of this proposal as one would in a clinical practice. How would one evaluate the monetary value of a new insight in the etiology of cancer? Surely, in the long run, this would bring in research grant support, etc. but the value to medicine and humanity is paramount. This is an endeavor in keeping with the caliber of clinical investigation at UCSF.

This is a very interesting proposal as the need is shared amongst anyone at UCSF doing clinical research. APEX or EPIC was designed to serve a particular set of use cases, clinical EMR related for instance. As such instead of trying to make that system research friendly, it may be worth considering getting the data from APEX and merging it with other sources within a system designed for research use cases. An example would be what is being done here: https://i2b2.cchmc.org/faq The research data-warehouse approach does not address the immediate need to support better annotation of diagnostic information within APEX, a need specified in this proposal. However it would be a more scalable solution for generalized research data storage and mining. As a disclaimer, I am not an expert on either APEX or i2b2 (it is installed at UCSF, per).

I read with interest the comments about how APeX was implemented to make billing easier, though it is not clear how to promote access to billing and APeX/UCare data. Though many colleagues remain skeptical that diagnosis and procedure coding is sufficiently accurate for informing patient-centered research, most UCSF researchers lack sufficient data access to assess this notion. There are 3 sources of claims that should be considered: hospital (including purchasing data with itemized bills for every consumable), campus (many departments submit bills for professional services - CPT codes), and the admit-discharge-transfer (ADT) data used to distinguish and communicate "present on admission" and emergent conditions. I believe cohort discovery is using ADT data. These data are in standard formats. A barrier to gaining access remains because of the perceived distinction between the medical center "quality of care" and campus "research" focus. It will be difficult to build clinical research platforms by pulling data from APeX. But not many researchers have considered the existing administrative claims data streams that are ready to be mined.

APEX as now constituted is very unfriendly to the entry of the precise diagnoses necessary to support clinical research. The proposed effort promises to remedy this deficiency. We need a module that, with a concordance, will accept new descriptors.

This is a proposal that has my full support. I have been using the limited abilities of STOR for twenty years in my research and am in the process of transitioning to APEX. Now is the time for us to make sure that we have a much better system established for the future at UCSF. I think this proposal is very timely, and what it suggests is a necessary undertaking. I believe that the comments about mining billing data and the current clinical descriptors in APEX miss the point of this proposal. The problem is that the informative descriptors (i.e., newly emerging syndromes, biomarkers, etc.) do not exist in APEX and cannot be inserted in a searchable format. It will be possible to insert an internal system for recoding these valuable data in a searchable format within APEX, based on the Cleveland Clinic experience.

Is there an open source APeX community? How can we be sure that in the future, we and our colleagues do not have to license from EPIC, technologies we develop to make APeX data available for research?

Commenting is closed.

Promoting Translational Research in the Medical Device and Tissue Engineering Fields

Type: 
Proposal Status: 

Comments

In principle, this is an absolutely terrific initiative. Similar strategies have been very successful at other institutions. To be maximally effective, this initiative also needs to recognize and protect the considerable intellectual property that will be involved.

Yes, that is definitely our plan. We have extensive experience working with the Office of Technology Management to submit technology disclosures and patent applications. Thank you for your comment.

Can you describe how this program would be differentiated from the T1 Catalyst program in Devices?

From reading about the T1 translational catalyst program for devices, it sounds like it typically concentrates on models that have “some significant validation of the approach in preclinical models”. We would focus on working with faculty who are mainly in the idea phase of development. In addition, we would work with clinicians who do not have laboratory space to even test their ideas. They would also need our bioengineering expertise to help them design a device and/or formulate an appropriate experiment. Our goal is to work with the faculty to get them preliminary results that would enable them to apply for funds like the T1 translational catalyst program.

I also wanted to emphasize that the projects would include industry partnerships to both develop new industry-initiated technologies and translate UCSF-initiated ideas.

Can you give us some examples of projects that you have shepherded over the last few years? Sounds like you rely on transitioning to grants. Where have these come from? Do they cover the costs of the current center?

The IRC conducts other research besides translational projects, including comparative effectiveness research. Yes, our projects cover the costs of the center. The projects are either UCSF-initiated (through internal or other small funds) or industry-initiated (industry-sponsored). Sometimes they transition into larger grants or larger industry contracts. There are several projects that I cannot disclose, but below are some examples that I am allowed to discuss. UCSF-Initiated Translational Research: Researchers in the UCSF Department of Orthopaedic Surgery have developed a novel technique to evaluate the rotational stability of the knee. This technique is valuable for diagnosing patients (e.g., rating and distinguishing injuries of the ACL, meniscus, and other knee structures), assessing clinical outcomes, and optimizing surgical treatment. Dr. Brian Feeley, a sports medicine and knee surgeon, is collaborating with Dr. Jeffrey Lotz, Director of the IRC, and Ph.D. candidate Mark Sena to develop this device. They have since filed a provisional patent for this device, which can objectively quantify knee stability. A CTSI T1 award application to develop this approach is currently under review. Industry-Initiated Translational Research: Relievant MedSystems, Inc. (www.relievant.com/company.html) was formed around licensed UCSF technology, amongst other assets. The technology was developed through a collaboration between Dr. Jeffrey Lotz and Chris Diederich, originally under research contracts with Johnson & Johnson. Initial clinical trials for a device to treat chronic low back pain were initiated outside the US. Currently, the company is conducting an IDE pivotal clinical trial in the US.

Commenting is closed.

Research Networking Software App/Gadget Development Competition

Type: 
Proposal Status: 

Background/Rationale: Social networking sites such as Facebook, Google+ and LinkedIn are web platforms. They allow independent applications to run within their sites to enhance the user experience, often integrating with external services to provide dynamic content ranging from reading lists from Amazon and blog posts from WordPress to live game play from Zynga. The beauty of these external services is that they can be shared across any software platform that chooses to deploy them.

At UCSF, we recognized the value in making our research networking tool into a web platform. Accordingly, we have contributed an extension to the open-source Profiles research networking software tool, and we are now using our UCSF Profiles installation as a platform (based on the industry standard OpenSocial) on which we have written "apps" or "gadgets" to extend the functionality. One such gadget displays presentations, posters and other content uploaded to Slideshare.net directly within an individual's UCSF Profiles page. This gadget allows researchers to share conference presentations, lectures, etc. easily within their UCSF research profile.

The software code for these gadgets is open source; gadgets can be, and have been, shared with other institutions. The end goal is to create an open library of shareable gadgets for research networking, where any institution (academic, for-profit, etc.) can contribute to or use the apps in this library.
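
For readers unfamiliar with the format, an OpenSocial gadget is essentially a small XML spec (metadata plus HTML/JavaScript content) hosted at a URL that any OpenSocial container, such as a Profiles installation, can load and render, which is what makes a shared library feasible. The minimal Python sketch below parses a hypothetical spec of this kind and lists its metadata; the spec shown is an illustrative assumption, not the actual UCSF Slideshare gadget.

    # A minimal sketch (not the actual UCSF Slideshare gadget) showing that an
    # OpenSocial gadget is just a shareable XML spec: ModulePrefs metadata plus
    # HTML/JavaScript content. Any OpenSocial container pointed at the spec's URL
    # can render it. The spec string below is a simplified, hypothetical example.
    import xml.etree.ElementTree as ET

    gadget_spec = """
    <Module>
      <ModulePrefs title="Slideshare Presentations"
                   description="Shows a researcher's uploaded slide decks"
                   author="Research networking team"/>
      <UserPref name="slideshare_user" display_name="Slideshare username" datatype="string"/>
      <Content type="html">
        <![CDATA[ <div id="decks">Loading presentations...</div> ]]>
      </Content>
    </Module>
    """

    module = ET.fromstring(gadget_spec.strip())
    prefs = module.find("ModulePrefs")
    print("Gadget:", prefs.get("title"), "-", prefs.get("description"))
    for pref in module.findall("UserPref"):
        print("Configurable preference:", pref.get("name"))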

At this point, we think research networking software tools could make a giant leap forward by harnessing this simple technological implementation and marrying it with many institutions’ knowledge of what features and functions will enable more efficiency and collaboration in the research process. To date, one other institution using Profiles has adopted the standard and another is on track to do so in a matter of months. Wake Forest has successfully deployed Profiles as an OpenSocial-enabled platform and has written two of its own gadgets. Baylor is now on track to do the same.

Plan: We propose to hold a competition similar to one held by VIVO in 2011. A call for the best ideas for research networking software functions would be sent to institutions, and entries would be judged on a set of criteria that would include feasibility, value and impact for enabling research efficiency and/or collaboration.

We will choose two winners of the competition, and each would receive:

  1. A new iPad 
  2. Development of the gadget itself, using resources from the VH team and our development network to execute; the winner would be acknowledged in an attribution on the gadget itself once launched.
  3. Recognition via public presentation(s) at Profiles User Group meetings, as an invited guest presenter at CTSA IKFC meetings, etc.

These awards come with the following stipulations: a) the resulting gadget is offered in the library and made available free and open source; b) the winner is available as needed as a subject-matter expert during design and implementation of the gadget.

Impact/Value: We believe this project:

  • will further the ability of research networking tools to have an impact on the research process
  • will contribute to the national CTSA body of knowledge and experience with research networking
  • will allow institutions and individuals with creative ideas for features to share those ideas with the community and contribute in a timely manner
  • will further demonstrate that OpenSocial is an inexpensive way to create valuable production-level apps, given the low technical complexity of gadgets.

Criteria: Entry would be open to all, with active promotion to UCSF, all institutions in the Profiles User Group, and all VIVO-enabled institutions. Judges would be identified from the CTSI VH team, CTSI leaders/faculty, and institutions currently leading the research networking software field (e.g., Harvard and Wake Forest).

While we know that we may get ideas easily from IT folks, we will make a concerted effort to gather ideas from researchers themselves. In addition to soliciting the CTSI faculty (for their own ideas as well as recommendations for others to contact), we will target those faculty who:

1) commented on our open proposal

2) have very well fleshed-out profiles on UCSF Profiles

3) are engaged in the science of team science (we will use UCSF Profiles to identify these people)

4) have been past proponents of UCSF Profiles

5) have been interviewed personally by VH team members (Research Networking 2.0 interviews)

6) are part of our post-doc user group.

We will create a detailed description for the call for ideas. This will include the scope, i.e., new features specifically for research networking software products, and the judging criteria (to include feasibility, applicability to the OpenSocial approach, value and impact for enabling research efficiency and/or collaboration). In addition, while we plan to promote actively at UCSF, we’ll solicit entries outside of UCSF by leveraging our relationships with other institutions that are heavily engaged in adopting research networking software tools, such as the Profiles User Group (including Harvard, Wake Forest, Baylor, Minnesota), Stanford CAP, VIVO (including U Illinois, U Florida), U Iowa LOKI, and the CTSA IKFC National Research Networking Group.

Total Budget: $33,406

$1,000 for iPads; $32,406 for development and project management of two gadgets

Collaborators: Wake Forest, Harvard, CTSI leadership / faculty as judges.

Comments

Great concept! It would be helpful to know how many gadgets/apps you expect to award prize money for. I also worry that some of the best ideas might come from end users who don't know anything about programming/IT and don't necessarily want to conduct the development... You might think about a fixed prize award for several "best ideas/concepts" that don't require any plan for how to develop them.

Like Ralph's idea. Also, the $ requested for funding app development (aka prizes) still seems low - is that supposed to cover two apps? And, conversely, my first reaction to your management costs is that they're too high... (at least you might describe why implementing such a competition might cost $25K to administer).

I agree with the comments already posted. This sounds like a great way to bring out some novel new applications. But I see this as possibly creating something of a core of experts who would actually program the apps that those of us unable to do so might come up with. Is there already an office at UCSF that this might be linked with?

Innovative and timely approach. I like the strategy of the competition, though it is not clear how much the requested funds would actually accomplish.

Last December I went to a meeting sponsored by the California Healthcare Foundation called "Free the Data: A Revolution to Improve Health Care." Todd Park, Chief Technology Officer at HHS, has sponsored national competitions ("code-a-thons" and "health datapaloozas"), similar to what Leslie describes, to foster innovative ways to utilize the national databases. There are many mobile and iHealth startups here in the Bay Area. I like the idea of the proposed competition, but could it be more narrowly focused, much like this open proposal, and limited to local startups and UCSF researchers with experience writing fundable proposals? The goal might be to raise UCSF's competitiveness for these emerging opportunities. As for the prize amount: for most participants, it is secondary to the recognition gained by winning the competition.

Great idea to harness ideas. However, despite its popularity, the challenge mechanism is perhaps not the best mechanism or the best use of resources to generate "production-level" apps that will be maintained and persist. I like Ralph's idea to reward the best idea/concept rather than the app.

UCSF has long lagged in addressing its technical infrastructure needs for its researchers and patients, and I am delighted to see this proposal to help address this gap. Profiles is a great platform for added enhancements and functionality. I suggest focusing the scope on the greatest challenges in research networking needs to advance translational science, such as a daily updated search engine that automatically identifies commonalities across a researcher's portfolio of work and connects him or her via daily emails with colleagues for potential collaborations, funding, etc. An option to add non-UCSF members would be a useful added feature. I would also network with UCSF's Communicators Group, led by Sarah Paris, who is exploring a UCSF "social media" conference, to secure greater UCSF buy-in for this pilot. Finally, this pilot is a great opportunity to reach out to local tech and start-up companies looking to provide technical support to, and expand their offerings in, the fast-growing field of medical and information management. Either the competition could be expanded beyond UCSF, or specific companies could be contacted for support, collaboration, funding, and marketing.

Commenting is closed.

$15,000 a dose: A patient-centered study of drug development and research

Type: 
Topics: 
Proposal Status: 

Title: $15,000 a dose: A patient-centered study of drug development and research

Rationale: The United States spends $100 billion annually on cancer care, with the majority of costs connected to drug development and technological advances.1 Several researchers have raised concern that the cost of cancer care will continue to rise until it becomes unsustainable.1,2 One strategy for addressing rising treatment costs is to consider cost when planning clinical trials. Value-of-information (VOI) theory, commonly used in economic and decision science settings, is one example of a method to address this issue. Applied to clinical trials, VOI theory could help determine which trials should be funded by considering a combination of factors such as drug price, cost of the trial, and duration of treatment.3 Using this theory, a drug that is projected to offer only similar clinical benefit to another – such as progression-free survival – but at higher cost would not be selected for trial funding.
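
To make the idea concrete, the short Python sketch below works through a deliberately simplified value-of-information calculation for a trial-funding decision; it is an illustration under assumed, hypothetical numbers, not the methodology of reference 3.

    # A simplified, illustrative VOI calculation (hypothetical numbers throughout).
    # Without further research, the funder adopts the drug only if its expected net
    # benefit is positive; a trial that resolves the uncertainty lets the funder
    # adopt only in favorable scenarios. The difference is the expected value of
    # (perfect) information, which can be weighed against the cost of the trial.

    population = 10_000          # patients who would receive the drug if adopted
    wtp_per_qaly = 100_000       # willingness to pay per QALY ($)
    incremental_cost = 60_000    # extra drug cost per patient vs. comparator ($)
    trial_cost = 5_000_000       # cost of running the proposed trial ($)

    # Two equally likely scenarios for the drug's true benefit (QALYs gained per patient).
    scenarios = [(0.5, 0.9), (0.5, 0.3)]

    def net_benefit(qalys: float) -> float:
        """Population net monetary benefit of adopting the drug in one scenario."""
        return population * (qalys * wtp_per_qaly - incremental_cost)

    # Decision without further research: adopt only if the expected net benefit > 0.
    expected_nb = sum(p * net_benefit(q) for p, q in scenarios)
    value_without_info = max(expected_nb, 0.0)

    # With information from a trial, adopt only in the favorable scenarios.
    value_with_info = sum(p * max(net_benefit(q), 0.0) for p, q in scenarios)

    evpi = value_with_info - value_without_info
    print(f"Expected value of information: ${evpi:,.0f} "
          f"(fund the trial only if this exceeds the ${trial_cost:,.0f} trial cost)")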

While VOI highlights how consideration of cost is needed at the development phase, we believe that patients have a role to play in this phase as well.4 No studies to date assess patients’ perspectives on the escalating cost of treatment and the implications for drug development, giving rise to a challenge in the design and conduct of clinical research. High treatment costs have resulted in increased cost-sharing with patients through higher premiums, co-pays, and deductibles. Since patients bear more of the burden of paying for these cancer treatments, it is critical that their perspective be taken into account early in the drug development phase. For example, it is important to know how patients feel about the development of highly expensive drugs that would 1) contribute to the societal burden of increasing health care costs, and 2) likely be too expensive for the majority of patients in the era of increased cost-sharing. It has been suggested that what patients really want from health care (i.e., timeliness, hope, and certainty, with no interest in the real cost of treatment or the percent of GNP devoted to health care) is often “irrational and unrealistic.”5 We feel that patients must first be presented with information about actual treatment costs and the process of drug pricing; only then can they be expected to consider the trade-offs of health care decisions as informed, rational consumers.

Plan: We will conduct a pilot project that will survey and interview patients at the UCSF Breast Care Center about their perspectives on the cost of cancer care and drug development in relation to clinical research. The aim of the study will be to design a patient-centered framework for approaching costs within the context of clinical research. We will recruit a diverse group of participants, including women who are and are not currently enrolled in clinical trials, women with and without a prior cancer diagnosis, and women of diverse socioeconomic, age, and ethnic groups. Participants will be presented with simple scenarios that illustrate the critical issues around drug costs and insurance coverage rates, and how clinical research can influence these issues. These scenarios will be designed with the advice of Celia Kaplan, DrPH, an expert in qualitative methods, using language and terms understandable to the average patient. Participants will be surveyed to understand patient perspectives on the following: 1) trends in cancer treatment costs, 2) implications of cost-sharing (co-pays, high deductibles) for drug development and research, 3) the process of drug pricing, and 4) opinions on using VOI methodology in the early phase of drug development. We plan to frame some of our analysis according to participants’ responses to certain ethical questions, such as who should determine how money is spent in health care and how “irrational and unrealistic” preferences should be considered.

This study is “shovel-ready” – the investigative team has worked together on similar projects in this clinical setting and with these participant groups, and has patient surveys and IRB protocols ready for submission once funding is secured.

Criteria and Metrics for Success: From the responses and data gathered from this study, a framework for approaching costs within the context of clinical research will be developed. Specifically, the pilot study will be a success if the study can determine the following: 1) patient views regarding VOI methodology for use in drug development, and 2) guiding principles around patient treatment costs and clinical research.

The broader implications of the study will guide more selective funding of clinical trials (such as through VOI) or a reassessment of these selection methods that considers patients’ perspectives on the cost of cancer care and drug development. The investigators have the unique opportunity to implement the findings of this study in the UCSF Athena Breast Health Network, an innovative collaboration across the 5 University of California medical centers that is integrating clinical care and research to drive innovation in prevention, screening, treatment and management of breast cancer. Our study will develop the knowledge base required to establish a scalable and sustainable approach to the conduct of research in a novel clinical care and research cohort such as Athena. Elissa Ozanne, the study PI, is the Director of Risk Assessment and Decision Science for Athena, and Laura Esserman, a study collaborator, is the PI for the UC-wide Athena Network. Together, these study investigators will design the study such that results can be directly applied to ongoing and future Athena research efforts.

Cost and justification: The anticipated costs are $25,100 to support 1) 5% effort ($10,800) for the PI (Elissa Ozanne, PhD), 2) a clinical trial coordinator (Rebecca Howe) at 20% of her effort ($9,800), 3) patient recruitment and survey costs ($2,800), 4) data analysis costs ($1,200), and 5) supplies ($500).

Collaborators: Elissa Ozanne, PhD (UCSF – Department of Surgery, Institute for Health Policy Studies), Michael Hassett, MD (Harvard Medical School), Rebecca Howe (UCSF – Department of Surgery, Institute for Health Policy Studies), Lisa Bero, PhD (UCSF – Department of Clinical Pharmacy), Michael Alvarado, MD (UCSF – Department of Surgery), Laura Esserman, MD, MBA (UCSF – Department of Surgery, PI of Athena), Celia Kaplan, DrPH (UCSF – Department of Medicine)

References

  1. Meropol NJ, Schrag D, Smith TJ, et al: American Society of Clinical Oncology guidance statement: The cost of cancer care. J Clin Oncol 27:3868-3874, 2009
  2. McFarlane J, Riggins J, Smith TJ: SPIKE$: A six-step protocol for delivering bad news about the cost of medical care. J Clin Oncol 26:4200-4204, 2008
  3. Schmidt C: Researchers Consider Value-of-Information Theory for Selecting Trials. J Natl Cancer Inst 102(3):144-146, 2010
  4. Fleck LM: The costs of caring: Who pays? Who profits? Who panders? Hastings Cent Rep 36(3):13-17, 2006
  5. Detsky AS: What patients really want from health care. JAMA 306(22):2500-2501, 2011

Comments

This is a great idea on a very rich and sensitive topic! How is VOI going to be introduced to a population that may not have all the underlying knowledge to understand such a sophisticated model? It seems like their very understanding of the tool might affect their judgment on how to use it.

This is an excellent point. We plan to conduct semi-structured interviews with patients to introduce the concept of VOI. We will assess their understanding of it and their opinions on its appropriateness for use in this setting. In this process, we plan to convey the concept of VOI to the patients in a very simple format, knowing that many may not have any experience with it.

What an interesting idea and approach. You might want to bolster the section on your relationship with the Athena Network. If the Athena Network will commit to utilizing the results to create a framework for decision-making -- that's stellar, and it turns this from a knowledge-generation project into a potential model that improves the value of health research. Also, I'm sure you've seen this, but a tangentially related article -- "What Patients Really Want from Health Care" http://jama.ama-assn.org/content/306/22/2500.full?sid=39824a5a-4e5b-410c... -- talks about how our assumptions of what patients may think important could be inaccurate. It makes me think that you might consider framing this patient-VOI effort as one where you'll need to determine for which subset of patients/patient types this kind of approach may be best suited (indeed, like cancer).

Thanks for the great suggestions. We will rewrite the section about Athena to more fully describe the potential impact this project holds through the Athena Network. Also, thank you for the suggestion regarding differing patient opinions about VOI based on clinical context, and for the JAMA article. We agree that opinions might be very different, and we are starting in the cancer setting, at the UCSF Breast Care Center. So we will first look at patients who have had cancer and compare them to those who have not. This approach will give our results the most relevance for application in the Athena Network. Given the small scope of this project, we aim to address additional subsets of patients in future work.

Thanks for the comment and for passing along the article. The article also raises some interesting ethical questions, such as who should determine how we spend money in health care and how should "irrational and unrealistic" preferences be considered? We likely will not be able to address these questions in depth in our research, but they may help us frame our goals more broadly.

This is a novel and intriguing approach - akin, if you will, to a very sophisticated feasibility analysis. It seems well worth the modest cost. It would be nice if the plans included strategies to broaden the number and disciplines of the potential collaborators.

Thanks, we agree that broadening the number and disciplines of collaborators would strengthen the proposal, and we are currently reaching out to other potential collaborators within the Athena network as well as researchers with experience in related fields who are not affiliated with the UCSF Department of Surgery.

Thank you Bill for the comment - we agree that increasing the collaboration would benefit this project. Do you have any suggestions for possible collaborators? Thanks.

I really like the idea of considering cost and cost-effectiveness in drug development decisions; however, compared with other relatively fixed drug characteristics (effectiveness, pharmacokinetics, etc.), price is the most easily modified. I'm not sure the go/no-go decision to fund a study of a promising drug should be based on expected price - why not just CHANGE THE PRICE of the drug (or how much insurance covers of the drug) after the study is completed, based on the estimated effectiveness and the cost of competitors?

That's an interesting point that drug costs are relatively flexible compared to other drug characteristics. There are some drugs (e.g., monoclonal antibodies for cancer treatment) that are highly expensive because of complex production requirements, but I agree that the pricing of drugs is not a transparent process. It will strengthen our research to consider not only what patients think of the cost of drug development, but also what they think of the process of drug pricing. Thanks for the feedback.

Thanks Bill for the comment - great point, and one we will take into consideration. In the realm of cancer treatments, there are some therapies, as Rebecca describes, that do not have the possibility of being "affordable" due to the nature of their development. Others have more flexibility, and it would be beneficial to hear patients' perspectives on this as well.

Commenting is closed.

Broadband Multichannel Transmitters for Enhanced Metabolic MR Imaging

Type: 
Topics: 
Proposal Status: 

1. Rationale

The Surbeck Laboratory for Advanced Imaging at UCSF is an interdepartmental and interdisciplinary laboratory that provides unique instrumentation, expertise, and infrastructure to enable faculty, trainees and staff to carry out translational and clinical research utilizing the unique capabilities of high magnetic fields. Research at the Surbeck Lab has resulted in some of the most advanced MR instrumentation and expertise currently available and has been the source of critical technologies and methodologies in multiple areas of non-invasive biomedical research at the cellular and molecular scale.

With their quantitative capabilities, magnetic resonance imaging and spectroscopy (MRI/MRS) have become a promising non-invasive imaging modality for biomedical research. This application seeks funding to enhance our quantitative imaging capability through development of broadband multichannel transmitters on our high-end whole-body MR imaging systems. Currently, our MR systems at the Surbeck Lab are equipped with only one transmit channel, which significantly limits imaging capability and restricts applications to conventional, slow approaches. The proposed development will enable advanced imaging technologies on our UCSF MR systems and significantly improve our research infrastructure. With multiple transmitters, parallel imaging techniques with fast selective excitation and B1 shimming can be used to dramatically accelerate imaging speed, improve imaging sensitivity and reduce sample heating. This will be a shared instrument that would benefit a wide range of biomedical research in a qualitative fashion and facilitate translational and clinical research projects within the UCSF community. This development could give our quantitative imaging resource a quantum leap and move UCSF's imaging capability to the forefront of the field. This is not about a local competition, but rather about making UCSF a world leader in in-vivo MRI/MRS methodology and its medical, biological and pharmaceutical research applications. This would be an invaluable asset that benefits the whole UCSF community tangibly.

2. Plan

This project will be accomplished via the following three steps.

a) Design and construction of a broadband 16-channel transmitter system with capabilities of high transmit power, independent amplitude and phase control, and independent RF pulse waveform generation. This system allows for parallel transmit, B1 field shimming and increased MR sensitivity, resulting in high sensitivity, high temporal resolution, uniform images, and reduced tissue heating. The broadband capability enables multi-nuclear and multi-field-strength MR imaging.

b) Development of user-friendly interface software and integration of the proposed multi-channel broadband transmitters with the GE MR control software, ensuring compatibility with the GE MR console;

c)  Broadband transmitter installation, testing, validation, and safety assessment on 3T and 7T systems.

3. Criteria and metrics for success

The proposed multichannel transmitters will be tested on the bench. Each transmitter channel should have a frequency range of 120 MHz to 300 MHz and a phase range of -20° to 380°. RF power attenuation should allow the output to be controlled over an attenuation range of 30 dB to 5 dB, with a maximum output of 200 W. Validation by MR imaging experiments will be performed on the 3T and 7T MR imaging systems housed on the UCSF Mission Bay campus. In these experiments, the overall B1 field pattern can be controlled by changing the RF amplitude and phase of each transmitter channel. Improved MR sensitivity and reduced SAR should be observed.
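
To illustrate the criterion that the B1 field pattern is controlled through per-channel amplitude and phase, the short Python sketch below (a simplified numerical illustration using synthetic field maps, not the actual transmitter control software) chooses complex channel weights by regularized least squares so that the combined B1+ field becomes more uniform over a region of interest.

    # A simplified numerical illustration of B1 shimming (synthetic data): the
    # combined B1+ field is the complex sum of per-channel fields, and per-channel
    # complex weights (amplitude and phase) are chosen by regularized least squares
    # to make the combined field uniform over a region of interest (ROI).
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_voxels = 16, 500

    # Synthetic per-channel complex B1+ maps over the ROI (voxels x channels).
    A = rng.normal(size=(n_voxels, n_channels)) + 1j * rng.normal(size=(n_voxels, n_channels))

    target = np.ones(n_voxels)   # desired uniform B1+ over the ROI
    lam = 0.1                    # regularization to limit drive power

    # Solve min ||A w - target||^2 + lam ||w||^2 for the channel weights w.
    w = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_channels), A.conj().T @ target)

    def nonuniformity(b1):
        """Coefficient of variation of |B1+|; lower means a more uniform field."""
        mag = np.abs(b1)
        return mag.std() / mag.mean()

    b1_quadrature = A.sum(axis=1)   # naive equal-weight combination for comparison
    b1_shimmed = A @ w              # shimmed combination
    print(f"|B1+| non-uniformity: {nonuniformity(b1_quadrature):.2f} (unshimmed) "
          f"-> {nonuniformity(b1_shimmed):.2f} (shimmed)")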

4. Approximate cost and very brief justification ($10k-max $100k)

We request funds to purchase the electronic parts required for constructing the proposed multi-channel transmitters, including capacitors, inductors, phase shifters, attenuators, mixers, coaxial cables, RF connectors, and circuit board manufacturing ($35,000); general lab supplies for constructing and testing the transmitters ($10,000); MR scans for performance validation (10 hours x $500/hr = $5,000); 15% effort for the PI (Xiaoliang Zhang, PhD), who will be responsible for administration and direction of the project, aid in designing, constructing, testing, and validating the proposed transmitters, help with equipment installation and hardware/software integration, and assist in data analysis and interpretation ($21,750); and 30% effort for an engineer (Yong Pang, PhD), who will design the transmitter circuits and the software required for a user-friendly interface to control the transmitters ($13,500). Total direct costs requested: $85,250.

5. Collaborators

Collaborators of our research lab include Daniel Vigneron, PhD; Sarah Nelson, PhD; Sharmila Majumdar, PhD; Christopher Hess, MD; Xiaojuan Li, PhD; David Wilson, MD; Duan Xu, PhD; Orit Glenn, MD; John Kurhanewicz, PhD; Jin Liu, PhD; Sabrina Ronen, PhD; Peder Larson, PhD; and Roland Kruger, PhD.

Comments

Interesting project that would dramatically enhance the quantitative imaging capability of our existing high-end MR instrumentation. The project seems doable, though I suspect that the anticipated budget is too low for the goals, and it would have been nice if the plan included strategies to broaden the number and disciplines of the potential collaborators.

Agree that this doesn't seem like enough to get the project to a working state. Is the plan to build a case for outside funding?

This does sound like a fascinating project. Because it is technical in nature, I am curious as to how the upgraded transmitters would integrate with existing infrastructure, and how scalable the initial investment would be. Would this project generate new costs and/or revenue beyond its development phase?

Thanks so much for taking time to review this project. I agree that the budget is tight. Our goal is to design and construct a lab prototype of the multichannel transmitter system to enable the use of emerging imaging techniques and advance our quantitative imaging capability. Many blocks of the proposed system, including all the circuits, will be made in house using the fabrication facility provided in the Surbeck Lab for Advanced Imaging at UCSF. This would significantly reduce the cost compared with subcontracting the work to outside manufacturers. Although a prototype, the system should have the expected performance and should be able to operate with the MR imaging systems in the Surbeck Lab. I feel that the current budget is reasonable for developing such a prototype, although it is tight. We are making every effort to reduce the cost and hope to make this development endeavor possible in this lean year. Of course, the proposed budget is certainly not enough to develop a commercial product-like system, which would involve a large amount of production engineering, including intensive mechanical design and fabrication.

In terms of broadening the collaborators, since the proposed system is a shared instrument, we welcome faculty members from any discipline to use it if they think this advanced quantitative MR imaging system would help their research. We are also very happy to provide any necessary assistance for performing the imaging experiments. With the success of this project, we would be able to plan larger and more comprehensive projects, with an emphasis on performance improvement and preclinical applications of the system, for outside funding opportunities. This increased imaging capability enabling advanced imaging technologies would also help to increase the likelihood of success in our routine NIH grant applications.

The system will be designed in a PC-controlled style. Our intention with this design is easy integration with the existing MR imaging scanners: the multichannel transmitter will be synchronized for MR signal excitation and acquisition by a clock signal generated by the MR scanner. Based on our tests, integration with the existing MR system should not be a problem for the proposed design. Each channel is well decoupled from the others and is highly independent. The number of channels is scalable; fewer channels would certainly cost less, and adding more channels in future development is straightforward, with no additional R&D effort or cost. Basically, this system is maintenance free and will not incur additional operating costs beyond its development phase. If needed, a recharge will be set up for use of the system, in which case revenue can be expected.

Commenting is closed.