2013 CTSI Annual Pilot Awards to Improve the Conduct of Research

To facilitate the development, conduct or analysis of clinical & translational research

Printable Proposal Content with Comments

Development of an Inter-departmental Shared Research Assistant Support Program for Translational Research

Rationale: This proposal is based on two premises. First, it is well known that consecutive sampling is preferable to convenience sampling for essentially all prospective enrollment studies. One of the greatest obstacles to implementing consecutive patient sampling, however, is the need for 24/7 research assistant (RA) support for patient identification, enrollment, and data collection. Payment for RA support therefore commonly consumes more than half of grant budgets for prospective studies, often rendering potentially meaningful studies impractical. Many worthwhile study ideas are abandoned early on, and other fully fleshed-out protocols are dropped, for lack of funding for 24/7 RA support.

The second premise behind this proposal is that RAs commonly have significant amounts of “down-time” between subject enrollments. Review of four recent, extramurally funded studies conducted concurrently at SFGH, each with separate 24/7 RA support, showed enrollment of fewer than 2 subjects per day in one study and fewer than 1 subject per day in the other three. Discussion with investigators (and even the RAs themselves) has revealed frustration with the inefficiency and cost of maintaining separate 24/7 RA coverage for same-site studies with prospective enrollment.

Plan: We propose the exploration and development of a shared RA program for translational research that we believe would lead to more efficient, cost-saving research practice at UCSF. Although details of the development plan are beyond the scope of this one-page proposal, we will begin by forming an interdisciplinary task force to perform a needs assessment and review funding and regulatory agency policies regarding shared resources. We will then develop shared RA network models, with CTSI review and input. We will test the approved model(s) through a pilot or feasibility study and report summary findings to CTSI.

Deliverables and Metrics for Success: The immediate deliverables from this work include:

1) A detailed analysis of the feasibility of, and obstacles to, interdepartmental sharing of RA support, with summary recommendations.

2) Implementation of a pilot/feasibility study of a shared RA network based on CTSI review and approval of proposed models.

3) Determination of the potential cost and other resource savings from implementation of such a program.

The potential long-term deliverables of this project would include:

1) Campus-wide increase in the implementation of low-funded and unfunded research, especially by junior faculty and residents.

2) Increase in funded research: Investigators may be able to apply for more grants with low funding ceilings, and their budgets may be scored more competitively.

3) Promotion of an innovative climate of efficient research at UCSF, paving the way toward the development of sharing programs for other research resources.

Approximate Cost: We are asking for CTSI consultation support and $40,000 to conduct the pilot/feasibility study (RA support and administrative costs). Investigators will donate their time in kind.

Collaborators: The investigators have collaborated extensively and have substantial experience in this proposed arena: Dr. Rodriguez is the Department of Emergency Medicine Residency Research Director, has CDC funding and has served as an IRB chairperson. He has developed and maintained a robust ED RA program at SFGH for 7 years with prospective enrollment of over 4,000 subjects in various studies. Dr. Mitchell Cohen is an active trauma and general surgeon and serves as the Director of Trauma Research for the Department of Surgery. He is extensively funded by the NIH, CDC and DoD and is currently involved in several prospective trauma studies, as well as translational basic science work on similar topics. Dr. Hemphill is Chief of Neurology at SFGH and is the PI of the San Francisco hub of the NIH-funded NETT (Neurological Emergencies Treatment Trials) network. His group has conducted emergency and ICU studies related to stroke, status epilepticus, and neurotrauma.

The comments from other investigators have been very helpful. Although there may be slight overlap between this proposal and certain CRS efforts, this affords an opportunity for synergy and refinement of proposed models. Review of the CRS site and other discussion demonstrates that the current CRS model understandably has significant gaps, specifically in the area that we are trying to address with our proposal: 24/7 RA support for emergency studies that require timely response for enrollment (the RA must be in the ED within minutes, at all hours). We look forward to collaborating with CRS (and others) in addressing this need, refining shared RA support proposals, and ultimately field-testing the CTSI-approved final model.

Comments

Hi Robert. Coming from clinical research, I understand the often emergent need to enroll a patient and the inherent down-time that comes with being an RA. It may be valuable to speak with someone from the Recruitment and Implementation Core at CTSI, as they offer a "clinical research coordinator" service that shares time or offers available RAs at the percent effort needed and/or available. They have tried to address some of the needs you mention above and may serve as a resource as you continue to develop this proposal. I am happy to connect you if this seems valuable.

Hi Courtney,

Thanks for your comment! It would be great to learn more about this, especially to hear what has worked and not worked in terms of RA sharing. The other collaborators and I were unaware of any such program, at least at SFGH.

The Recruitment and Implementation Core has a "shared" study coordinator service that we have been more broadly implementing. They may be reached at recruitment@ucsf.edu for more detailed information and maybe even a partnership on this proposal.

This is a worthwhile goal, although it seems to overlap with the efforts already being made in CRS.  I deal with this dilemma as well - i.e. how to piece together the %effort needed to have an RA for multiple studies and also have optimal coverage (i.e. 24/7).  A related problem has been finding individuals who have the background and training to do this work exceptionally well - my experience has been that there is a great amount of variability.  Do we have any idea of how many RAs who work at UCSF have completed formal training in clinical research (e.g. CRA certificates)?

Thanks Dan. Although CRS may be examining this issue, my collaborators (Drs. Hemphill, Cohen and Chambers) and I are unaware of any such program at the General. My sense is that RA sharing will inherently be site-specific, and what works at Moffitt or the CRS may not work at SFGH.

With regard to your question about formal training, our RAs all complete the CITI course and then undergo project-specific orientation/training. An informal survey of our RAs has shown that they have substantial time and interest in cross-training on other projects at SFGH. The fact that there is no current way for them to do that, together with investigators' need to share and reduce costs, is the basis for this proposal.

We look forward to exploring this in collaboration with CTSI and others.

Hi Robert, it might be valuable to get in touch with the CRS program (http://accelerate.ucsf.edu/research/crs); their services include sample processing and participant recruitment, both of which seem to extend to SFGH.

There are definite synergies with the services that we offer through the Participant Recruitment and Study Management Services, where we provide recruitment expertise and trained coordinators as needed on hourly recharge. There are currently limitations to the hours this service is available. It would be interesting to explore areas of overlap and how the CRS could either address any gaps or how the investigators could leverage what has been learned in developing this service.

Thanks for the insightful comment. Our studies (trauma, acute stroke, etc.) essentially require 24/7 presence/availability for screening and enrollment in the Emergency Department, and therefore RAs have to be on-site (at SFGH) to be available within minutes. Does the CRS model allow for this? I would definitely like to explore some sort of collaboration.

Building a Searchable Online Library of Template/Example Language for Grant Proposals

Rationale: One major hurdle faculty face in preparing grant proposals is the drafting of the non-scientific components—that is, the Resources & Facilities, Resource Sharing Plan, biosketch personal statements, letters of support, and other regulatory sections (e.g., human subjects and vertebrate animals); this is especially true in the preparation of complex, multi-investigator, multi-institutional, Center-type grant proposals. Faculty may look at these sections as bothersome time-sinks, hardly worth time or effort, or as insurmountable challenges that prevent them from applying for a grant. In either case, their ability to procure the funding necessary to conduct their research can be seriously affected by sub-par non-scientific components in their proposals.

The Research Development Office (RDO) supports the development of multi-component grant proposals and, since its inception in July 2012, has already amassed a collection of template/example language for the non-scientific components of large Center grant applications (e.g., P01, P30, P50, U19, and U54 mechanisms). Of particular importance, we have descriptions of many UCSF resources (general environment, schools, departments, core facilities, etc.); we also have collected Resource Sharing Plans, regulatory sections, letters of support, and biosketches. So far, we have deposited these documents on our office server and kept inventory on an Excel spreadsheet. This is not a viable, scalable, or sustainable way for us to accomplish our goal of creating, managing, and curating a database of template/example language and making it available to any UCSF faculty or staff member (Principal Investigators, Key Personnel, grant and research administrators, Research Management Services staff, Contracts & Grants staff, staff writers, etc.). This library will not replace any of our current functionality in the proposal development process—RDO personnel will continue to help draft personalized sections—but will instead allow us to expand our level of service and free up bandwidth. In particular, the templates/examples will be relevant for a large breadth of documents in addition to large Center grants, including R01s, training grants, and contracts. Sources for template/example language will include the internal RDO library, as well as RMS personnel, UCSF departments, and individual investigators willing to share their samples.  The creation of such a library has been enthusiastically supported by many faculty members, RMS and C&G staff, and administrators, and we look forward to working with them to continually adapt the library to best serve the needs of the UCSF community.

Plan

  1. Consult with a system administrator or web developer to identify an appropriate solution based on the following criteria (a brief illustrative sketch follows this plan):
    1. Ease-of-use: possible for any RDO staff member to be a site administrator in order to upload documents and update/manage content, overseen by Dr. Erin Bank, Research Development Specialist
    2. Searchability: keyword optimized for intuitive and easy searching by section title, keyword, and type of document
    3. Accessibility: open to UCSF faculty and staff via the UCSF network at any UCSF workstation or through a UCSF VPN account
    4. Connectability: able to link from the OSR/RDO website, currently under development to be hosted by the UCSF Office of Research using Drupal
    5. Adaptability: able to adapt to different types of documents and files as need dictates
    6. Trackability: count number of visits and downloads
    7. Capacity: room for hundreds of small-format .doc, .pdf, and .xls files
  2. Work with a system administrator or web developer to implement the system
  3. Train Dr. Bank to be the primary manager of the database once it is up and running. Dr. Bank will also curate templates/examples contributed from outside the RDO

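To make the criteria above concrete, the following is a minimal sketch of the kind of metadata store and keyword search such a library might use. It is an illustration only, not the proposed implementation: the SQLite backend, table layout, and field names are all assumptions, and the real system would likely sit behind a CMS such as Drupal.

```python
# Illustrative sketch only: a tiny document library with keyword search.
# Table and field names are assumptions, not the actual RDO design.
import sqlite3

conn = sqlite3.connect("template_library.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS documents (
    id        INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,    -- e.g. a core facility description
    section   TEXT NOT NULL,    -- e.g. 'Resource Sharing Plan'
    doc_type  TEXT NOT NULL,    -- e.g. 'template' or 'example'
    filename  TEXT NOT NULL,    -- the stored .doc/.pdf/.xls file
    downloads INTEGER DEFAULT 0 -- trackability: download counter
);
CREATE TABLE IF NOT EXISTS keywords (
    doc_id  INTEGER REFERENCES documents(id),
    keyword TEXT NOT NULL
);
""")

def search(term: str) -> list:
    """Searchability: match on section title, keyword, or document type."""
    pattern = f"%{term}%"
    return conn.execute(
        """SELECT DISTINCT d.id, d.title, d.section, d.doc_type
           FROM documents d
           LEFT JOIN keywords k ON k.doc_id = d.id
           WHERE d.title LIKE ? OR d.section LIKE ?
              OR d.doc_type LIKE ? OR k.keyword LIKE ?""",
        (pattern, pattern, pattern, pattern),
    ).fetchall()

def record_download(doc_id: int) -> None:
    """Trackability: increment the counter used in the success metrics."""
    conn.execute("UPDATE documents SET downloads = downloads + 1 WHERE id = ?",
                 (doc_id,))
    conn.commit()
```

Whatever platform is ultimately chosen, the same three ideas carry over: structured metadata for searching by section, keyword, and document type; a file reference per record; and a counter behind the visit/download metrics.
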
Criteria and metrics for success: We will track the visitors to the site and the number of times each document is downloaded, in addition to the number of text sections offered and the number of edits made to the stored text sections in order to keep them accurate.

Approximate cost and very brief justification: We request a budget of $20,000 to cover consulting fees; site building, authentication, and implementation; software installation; CMS costs (licensing fees, add-ons, hosting); training; and the UCSF server recharge.

Collaborators: A system administrator/web developer with experience working with UCSF; RMS, other staff, and faculty members will be both partners in contributing to the content of the database and users of the database after it is established.

Comments

Really valuable idea, Erin. I am interested to know whether you might collaborate for a portion of the project with an RMS member and leverage any knowledge they may have as well. I know they have years of experience, and I can see a collaboration with them as vital to success across departments. Especially as they have been working to harmonize more of their processes, I see the fusion between RDO and RMS as rather imperative.

Courtney,

Thank you for your comment. You are right in that the RDO and RMS, and indeed C&G, are necessary partners in this venture. In fact, RMS has been one of the biggest advocates for the RDO to establish such a database. This database will benefit from our continued working relationship as colleagues within the Office of Research, especially with RMS and C&G contributing the language they may have accumulated and using the language as they help researchers with grant submissions.

Dear Erin,

This is a terrific idea and would address a significant gap in training grant support at UCSF. I was so appreciative of all the work you put into the Resource Section of our large T32 training grant. It would not have been possible to do that without you guys. Hence, streamlining this for the whole UCSF community should be a top priority. I would also just add that something similar should be considered for the Tables that must be included in the applications (how many postdocs apply to each UCSF Department per year, how many are URMs, how many are veterans, etc.). If this ends up working for your resource section, I hope the university might consider asking the office of postdoctoral affairs and the graduate programs to do something similar for the tables.

Best,

Scott

Hi Erin,

This is a great idea. Is streamlining the process for writing Data Management Plans part of the vision of your proposal? If so, there might be potential for collaboration with the UCSF Library through use/adoption of the DMPTool (Data Management Plan Tool). I just returned from a workshop on the DMPTool (https://dmp.cdlib.org/), hosted by the California Digital Library (CDL), and there was a lot of discussion about customizing the tool to reflect institution-specific language and examples of Data Management Plans for specific types of grants. If the RDO has standard language or templates it is using for Data Management Plans, it seems like a great opportunity to connect your work with the DMPTool effort so that the tool can be customized to reflect your recommended language, or include any UCSF-specific language that has already been successfully implemented by the RDO (or other offices here at UCSF that facilitate grant writing and submission).

Hi Megan,

The RDO has also had conversations with the CDL and has learned what a great tool the DMPTool is for creating customized language for DMPs. Our database will contain more generalized language, which investigators can hopefully use as a foundation before turning to the DMPTool. We will make sure to link to the site on our "Resources" page on our new website. Thanks for your comment!

This is a great idea, Erin!  I will certainly use this resource when it becomes available.  I would also urge you to let the RSAs know that such a resource exists.

Thanks, Wendy. RMS has been one of the biggest supporters of this endeavor, so we look forward to working with them on its development!

We've already discussed that this is a great idea and I'll say that again! Two thoughts:

- I'd include a broader range of docs such as budget justifications, budget planning templates, etc.

- My guess is that a lot of home-grown templates are already being used, so it would seem you'd want to systematically ask for examples from across the campus.

Kathryn

Kathy, thank you for your comments. They are both great ideas that are well within our scope. We would certainly rely on RMS regarding budget documents. In addition to the language that is generated within the RDO, we would certainly welcome submissions of language by faculty and staff on campus who have been working with their own templates, if they are willing to share them. 

Hi, I've been fortunate to have great help from the RDO on several large proposals recently.  And I know that finding the correct text for these required grant components is critical.  My concern is that the IT-based library of documents is only part of the answer.  The key is to assign the correct person in each unit the responsibility of writing and then updating these documents, and making sure that person is available to help with updates specifically tailored to new grant proposals.  So my suggestion would be to add more human resources to your proposal.  And to make sure that pre-award personnel are fully aware of the process, resource itself, and key people to contact.

Dear Barbara,

Your point is very well-taken: certainly RDO personnel will be actively involved in writing and maintaining these documents. We will also continue to offer "personalized" language as one of the services we provide during the development of large program project grant proposals. We will be sure to emphasize these points in the final proposal. To address your last point, as I've tried to indicate in other comments, our relationship with RMS under the Office of Research necessitates that these two offices, along with C&G, work closely together on developing this database. You are right that it is necessary to streamline the process and maintain open communication with other pre-award personnel.

This could be a very useful resource to applicants seeking intramural funds through RAP or other programs, especially if they are junior faculty or fellows/post docs.

Emy Volpe

Erin,

This is a great idea and would be a great resource for my work and for other faculty.

Thanks for putting this forth.

Bonnie

This would be tremendously useful for investigators at all levels. Having spent countless hours writing and re-writing basic grant section language, it makes great sense to have such a library to avoid re-inventing the wheel.

Testing new Web-based software for increasing the speed of knowledge creation from translational and inter-disciplinary projects.

NB: This Pilot proposal deals with several completely new ideas and related terminology. To obtain explanations of these new ideas and terminology, please use the inserted Links to jump to WebSites for descriptive information and examples.

OVERVIEW: Creation of useful knowledge from scientific investigations requires the collaboration of scientists usually separated by space, or by both space and time. Communications between scientists under these conditions could be enhanced by the use of the Web. Unfortunately, the minimum time required for an individual scientist to learn and utilize presently-available Web-tools is so great that communication-enhancement cannot take precedence over the other necessities for success in research: doing the work, publishing the results, and obtaining new grant monies. Furthermore, the societal institutions of paper-based communications are hindering a paradigm-shift in scientific communication (e.g. the OpenAccess debates). The project described below concentrates on reducing the minimum time needed by an individual scientist to create a WebSite that reviews the scientist's research area in a way that combines generality with specificity, thus accommodating readers with a range of different backgrounds and interests. The project also provides an automated method of creating new inter-WebSite links that readers can use to more easily traverse the Web-based network that will be the basis of knowledge-creation and knowledge-storage in the coming era. The active-archives created with this project's Web-tools will, unlike current passive-archives, gain value over time. The project is designed so that all participants (Moderator-Scientists, Experts, Readers, and Archivists) act in their own self-interest when collaborating, thus providing independent motivation for use of the project-tools, which is a necessity for wide adoption of these enhancements to scientific communications. Also, as an aid to comparing the proposal with present methods, a "Summary Table Overview" is available in the Attachments at the end of the narrative. It is best to look at this Table when first reading this proposal.

RATIONALE: Under a grant from the National Library of Medicine, we have developed new Web-based OpenSource Software that will increase the speed of knowledge creation (CreateKnowledge-Link) from research information generated by translational and inter-disciplinary projects. The Pilot project proposed herein will test how well this software is accepted by a group of intended users at CTSI, and provide a path to making the software available to all other CTSI sites across the country.

PLAN:  Within the UCSF CTSI, this project will beta-test two software programs (described below): 1) Creating WebCompendia WebSites, and 2) Using the ForwardLink-Protocol.  Volunteer users will be recruited from within CTSI, from PostDocs and K scholars, by means of email, WebSites (including CTSI's WebSite and CTSI Blogs), lectures about the benefits of using the programs, and personal contact with CTSI mentors and research advisors.  The two software programs will be available on CTSI servers, and, at present, be limited to beta-test volunteers.  Volunteers who participate will be asked to inform us of any difficulties in using the programs and also be invited to suggest improvements or additional functionality that they could imagine would be useful.  All comments can be either signed or anonymous.  Under this Pilot Project, the OpenSource Software will be user-debugged and new functionalities may be added by the programmers who have worked on this project.  

   The volunteers and their mentors will also be requested to fill out an anonymous questionnaire about the factors that led them to volunteer. This information will help identify the most effective means of informing scientists and academic clinicians about the capabilities and personal benefits of the new system. If the pilot study is successful, and since the methodology is clearly scalable to any number of servers, the OpenSource Programs would then be made available to all other CTSIs across the country, using these "most-effective means" to inform users at the other CTSIs about the benefits.

   The WebCompendia WebSite program will be used to create concise, yet comprehensive, OpenAccess, CreativeCommons WebSites that are basically stylized, highly-moderated blogs where, unlike a Wiki, no material can be posted without the explicit approval of the (scientist) Moderator.  Each WebCompendium will center on a topic of interest to the Moderator, and will, by the means of its structure (ExpandingOutline-Link), organize, out of the multi-dimensional research information that is accumulating, a narrow slice of information, together with an objective evaluation of the knowledge that can be derived from that slice of information (CreateKnowledge-Link).  WebCompendia will make it easy to address research questions using Strong-Inference, a method that "sharpens the cutting-edge" of scientific enquiry (ExpandingOutline-Link).  Each WebCompendium will be peer-reviewed by experts in the field, and each will automatically create an online-community of like-minded scholars interested in the same slice-of-available-knowledge, out of which new collaborations and/or jobs could arise (SelfInterest-Link).

   These WebCompendia (which will initially be found by Google or Yahoo searches) will also be inter-linked by means of the ForwardLink-Protocol, which codifies new non-semantic Web linkages (unlike Google or Yahoo searches). The links will be created automatically by the WebCompendia program (ForwardLink-Link). To scholars, these Links will be much more informative and much easier to evaluate than present Web-links and Web-searches (see the Sortable Table section in ForwardLink-Link), thus increasing the usage of WebCompendia for communication of scientific ideas and information.

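The ForwardLink-Protocol itself is specified at the sites linked above rather than in this proposal, so the following is a purely hypothetical sketch of what an automatically created link record and its sortable presentation might look like; every field name here is an assumption introduced for illustration.

```python
# Hypothetical sketch of a ForwardLink record and a sortable view of links.
# None of these fields come from the actual ForwardLink-Protocol.
from dataclasses import dataclass
from datetime import date

@dataclass
class ForwardLink:
    source_url: str  # WebCompendium where the link was created
    target_url: str  # WebCompendium being pointed to
    topic: str       # the shared slice of knowledge connecting the two
    created: date    # newer links can surface newer work

def sortable_table(links: list[ForwardLink], key: str = "created"):
    """Order links on any field, mimicking the 'Sortable Table' a reader
    would use to evaluate links more easily than raw Web searches."""
    return sorted(links, key=lambda link: getattr(link, key), reverse=True)

# Example usage with invented URLs:
links = [
    ForwardLink("https://a.webcompendia.org", "https://b.webcompendia.org",
                "status epilepticus", date(2013, 2, 1)),
    ForwardLink("https://c.webcompendia.org", "https://b.webcompendia.org",
                "neurotrauma", date(2013, 3, 1)),
]
newest_first = sortable_table(links)
```
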
   The programs to create WebCompendia and their inter-linkages will use online pop-up instructions and easy-to-intuit clicks for ease-of-use without training.  Each user of these programs (whether as Moderator, Contributor-Expert, or Reader) will be motivated to participate based on their own self-interest (SelfInterest-Link), without need for external inducements.  We expect the use of WebCompendia to expand initially based on the needs and interests of students: 1) PostDocs, 2) Senior Residents in medical or surgical training, 3) PreDocs starting thesis work (SelfInterest-Link).

   The two programs are contained within a free OpenSource Content-Management System called TikiWiki CMS Groupware (Wikipedia; TikiWiki), which is maintained and updated by a team of volunteers, after functional OpenSource programs are made available to the team (as will occur after a successful beta-test).  The TikiWiki CMS provides a means of tailoring the System to the specific requirements of WebCompendia.

CRITERIA FOR SUCCESS: 1) As reported by the volunteers, the debugged programs are functional and usable without training; 2) From the anonymous questionnaires and feedback from seminar attendees, effective means are identified that can be used to inform and motivate scientists and academic clinicians to consider using the WebCompendia and ForwardLink programs that are made available on their local CTSI servers across the country. 3) Problems of installation of the TikiWiki CMS Groupware, WebCompendia, and ForwardLink-Protocol programs at other CTSIs are identified during the installation on CTSI servers, and predicted by CTSI IT personnel.  4)  A practical program is developed for promoting the usage of these programs at other CTSI sites, based on knowledge of previous interactions among CTSIs at the national level, and the information from volunteers on the best communication-means.

COSTS & JUSTIFICATION: Costs are requested for the two programmers who created the Software. They will correct errors and add new functionality described by the volunteers. These programmers are in Indiana and Texas, and are experienced in adapting TikiWiki to new uses. The PHP programmer earns $50/hr, and the Tiki WebSite designer earns $35/hr. Budget = $40,000 for 12 months; the costs for each will depend on the debugging and types of additions suggested by the volunteers. The PI will not charge any salary to this grant. For PI travel to East Coast CTSIs during promotion of the programs, Budget = $6,000. For use of a WebLink facility to give seminars at other CTSIs during promotion, Budget = $4,000. Total Budget = $50,000.

COLLABORATORS:  As part of this Pilot project, assistance is sought from several CTSI programs and experts.  We propose collaborations with the following: 1) IT personnel on ease or difficulty in setting up the software on the CTSI server. 2) Faculty and mentors within CTSI with regard to the usefulness of the programs to the volunteers. 3) Planning of the promotional campaign necessary to effectively spread the use of these programs both within and outside CTSIs.  4)  Critiques of the questionnaires to be filled in by the volunteers. 5) Statistics of program usages on the CTSI server.  6) Advice on how best to approach other CTSI programs for them to try out the program locally, with special reference to overcoming faculty and/or IT inertia at these other sites. 7) Collaboration with John Daigre's efforts to use established social media for scientific communications and collaborations. 

Comments

The proposal by Dr. Jewett represents a significant opportunity to advance the way clinicians and scientists (and especially their trainees) communicate with one another via the web. The implications for translational research are profound. The software has the potential to revolutionize the way data are shared, analyses are conducted, and results are evaluated and interpreted. Following the links that have been inserted into the proposal, one can see that the platform is robust and very user-friendly. The costs seem reasonable and well-justified. The plan is straightforward and the outcome measures seem appropriate. Overall, the proposal will enable the CTSI at UCSF to remain at the forefront of web-based communication, knowledge creation, and interdisciplinary collaborations. I encourage you to support this proposal.

Thank you for the comments.  

You emphasize the importance of the proposal to the UCSF CTSI, and this again shows that the design is a win-win. In this case the UCSF CTSI can lead the other CTSIs, which in turn can lead other scientists AND clinicians. The country-wide CTSI effort is uniquely positioned to enable this connection.

I'm having some trouble understanding the project. Are you trying to determine how best to convince biomedical researchers to write about their research using a new style of wiki software? Thanks in advance for breaking it down a little bit more.

I do thank you for this comment. I've been TOO CLOSE to the development, so that I can't see the Forest!! I'm slow in responding because your comment has led me to develop a TABLE that shows how WebCompendia differ from present publishing methods. I will get this table up on the online WebSites by this weekend. Each morning I awake with a NEW addition to the table.

I will be very interested in your reaction to the table, and whether it clarifies many obscure points.

One thing: WebCompendia are highly moderated BLOGS, NOT a WIKI. A Wiki allows changes by anyone. A moderated blog requires the Moderator's explicit permission for adding material.

The observation that the proposal is unclear is very important.  Writing is always improved by such honest comments.  Besides the SummaryTable described above (and in Attachments), I have now included an "Overview" section that does a better job of giving the context and goals of the project.  Only time will tell (as well as new readers) whether this is sufficient.

Please check the Summary Table Overview attachment, and let me know if this makes the project clearer.  Your opinion is very valuable.  Thanks.

Dr. Jewett is on to something important here. As a scientist and educator, I believe we are going to see a paradigm shift in the way scientific knowledge is exchanged, integrated, and disseminated, and that Dr. Jewett's software platform and vision for its potential uses represents a significant advance toward this paradigm shift. The proposal will enable CTSI fellows and mentors to engage in useful scholarship in areas of critical importance to their own respective fields, while simultaneously providing valuable feedback about the strengths and limitations of the software to facilitate its further refinement. My only concern is how to ensure that busy post-docs and mentors can be enticed to spend time doing the writing and scholarship in this project, as opposed to their other scholarly tasks and obligations. I suppose that the more the use of the software can be aligned with other academic or scientific goals of the mentee/mentor, the greater the chance of a "win-win" scenario in which study participation helps to move some other scholarly task forward, such as writing a book chapter, paper, grant, lesson plan or course curriculum. I am optimistic about the potential success of this project, but amplification of the plan to ensure sufficient study recruitment would be helpful.

Thanks for your summary-description of the project and what it may accomplish.  

With regard to participants "finding time" to use the tools, early on in the planning I was told of how good ideas failed if participants were needed primarily for the project's goals. This led me to objectively plan how the participants would gain from the project, even though it took some of their time. First, we made it an overall goal that the software would save time as compared with present methods. Second, we examined how to self-motivate each participant (Moderator-Scientists, Experts, Readers, and Archivists). This is described here: (http://selfinterest.webcompendia.org). Of course, the CTSI project will test whether these descriptions are correct, and will provide a means to change them if the motivations aren't adequate.

This pilot proposal should be seriously considered by CTSI. It’s important to try new, online methods such as WebCompendia to disseminate research information. WebCompendia provides a new way for researchers to communicate their findings. Researchers are starting to see the value of micro-publications that can be published quickly to a wide audience. WebCompendia will give users immediate access to research and allow users to continue the conversation with new information. Since WebCompendia is based on an existing Open Source project (TikiWiki), it will be easier to expand the functionality of the application for the pilot rather than having to build an application from scratch.

We are in the midst of a revolution in the means by which we exchange ideas. Most current tools lack focus, often resulting in a free-for-all exchange. The WebCompendia tools that use the ForwardLink-Protocol have great potential to help us harness the power of communication the internet has brought us and provide a means of creating focused research communities. This proposed project would provide important feedback on how to fine tune the usability of the WebCompendia tools.

I share Dr. Mathalon’s concern that researchers may be hesitant to put in the time to use these tools, but this current proposal is an excellent way to address that very concern. At the very least, this project would point to changes that could increase the usability of these tools.

This is an important project, and funding it would help keep UCSF at the forefront of this new wave of researcher collaboration.

Added 3/14/13: I taught at a California State University campus where the teaching load is very high, leaving almost no time for research. The same is true for Community College faculty. Hosting a WebCompendium would be an excellent way for such faculty to remain engaged in their field, even though they do not have the resources to maintain a full research project. This would keep them at the forefront of their field, thus having a very positive impact on the quality of science education.

Thanks for the review, and mentioning a new group of faculty (State-University and Community College) that could contribute to collaboration and benefit from using WebCompendia.

See my reply to Dr. Mathalon regarding motivating researchers to use these tools.

This Pilot project builds and promotes a collaboration platform where interdisciplinary data and information are shared by researchers and practitioners from different fields. One of the impressive features of WebCompendia is the scrutiny of the data to be published: incorrect or misleading data won’t be published on WebCompendia without the explicit approval of the Moderator. This data-filtering function can significantly reduce search time while enhancing the quality of the search results. The implementation of this project is feasible since the supporting software technologies have already been developed.

In addition, this knowledge-sharing platform can potentially become a test bed for validation of data mining algorithms and social networking analysis, especially for document retrieval and text mining. Data and roles in this project are heterogeneous and the data will continuously grow with use of the system. This provides an excellent data source in the research of information retrieval, data mining and analysis of big data.

Last but not least, I agree with the comment by Dr. Robert Plantz that this project will assist researchers in teaching-oriented schools by facilitating collaboration and resources online. I look forward to using WebCompendia in the near future.

I am pleased to see all the comments expressing interest in this project. We definitely need new methods of communicating and collaborating about research. What strikes me about this idea is that it has the potential to make the process of collaborative knowledge creation transparent and visible in a way that the traditionally published article is not. Why limit ourselves to the equivalent of a hand-scribed manuscript when we have the technology available to us today to create dynamic content that can be multi-dimensional and drive research in new directions?

I thank you for the comment, and especially for emphasizing that the vehicle for communication on the Web is basically different from paper-based methods.  This is clear with respect to Facebook and Twitter.  It is also true of WebCompendia that use the ExpandingOutline Format so as to organize and present information in a new way.  Then, the enhancements due to easy emails and replies are a further bonus over paper and snail-mail.

Application Management System

Rationale: There are several programs on campus that manage the dissemination, submission, review, and selection of applications for student positions, grant awardees, or other candidate selectees. The Research Allocation Program (RAP) is one such program, where this process is perhaps the most complex due to the number of intramural funding mechanisms, the number of review committees, and the nature of the selection process. Currently, RAP applications are submitted online, but the review and decision processes are done manually, which limits further efficiency improvement and makes program reporting harder. To remedy this situation for RAP and other related program processes on campus, we would like to develop a single electronic solution that will accommodate the processes for dissemination, submission, review, selection, and reporting. The solution may leverage parts of existing software such as NUCATS Assist (Northwestern University) or CTSI-ART, or be built as a fully custom system with Drupal interfaces.

Plan: The overall goal is to develop a centralized application management system that would coordinate opportunity dissemination, application submission, application review, application selection, and follow-up processes for multiple programs across multiple agencies. Configurable application components and flexible workflows would accommodate changing business requirements and support process improvements. Such a solution would result in significant longer-term savings for all collaborating programs.

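As one illustration of what "configurable application components and flexible workflows" could mean in practice, here is a minimal sketch in which each program's review pipeline is defined as data rather than code. The states, transitions, and names are assumptions for this example, not a design commitment.

```python
# Illustrative sketch: a review workflow configured per program as data.
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """A configurable pipeline: each program defines its own states."""
    states: list[str]
    transitions: dict[str, list[str]]  # state -> allowed next states

@dataclass
class Application:
    applicant: str
    program: str
    state: str = "submitted"
    history: list[str] = field(default_factory=list)

    def advance(self, workflow: Workflow, new_state: str) -> None:
        """Move the application forward, enforcing the program's workflow."""
        if new_state not in workflow.transitions.get(self.state, []):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.history.append(self.state)
        self.state = new_state

# A hypothetical RAP-like pipeline, configured rather than hard-coded:
rap_flow = Workflow(
    states=["submitted", "under_review", "scored", "decided"],
    transitions={
        "submitted": ["under_review"],
        "under_review": ["scored"],
        "scored": ["decided"],
    },
)
app = Application(applicant="Jane Doe", program="RAP Pilot")
app.advance(rap_flow, "under_review")
```

Because the pipeline lives in configuration, a fellowship program could reuse the same engine with different states and forms, which is the crux of supporting multiple programs across multiple agencies.
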
Criteria and metrics for success: An automated application management system can decrease the amount of time and resources administrative staff spends on application processes and follow-ups, as well as on compiling and summarizing application information for program reporting. Designated administrators would have direct access into the system to customize forms and run reports. Communication with reviewers and applicants can improve through automated deadline reminders and standardized decision notices. Broader and deeper analysis of application and review processes would be possible through reporting measures that have not been available before. The system will be a central repository for applications and progress reports and will offer greater ability to measure programs and funding opportunity impact.

Advantages for all users

Applicants can find funding information, apply via a website that is open to the public, and upload a single proposal PDF. The application can be saved until submission, allowing it to be built in stages. Potentially, applicants could also use the system to enter and submit progress reports online.

Reviewers can find all materials in one place (proposals, review forms, previous reviews). They can receive automated notifications and reminders to complete tasks before deadlines. They can write their reviews in stages and assign initial scores. During the review meeting, they can score proposals “live” as they are discussed.

Administrators can replace uploaded PDF files, delete applications, and submit on an applicant's behalf. They will take advantage of automated messages/notifications/reminders to communicate with applicants and awardees. They will benefit from a system that tracks many variables, including applicant attributes (e.g., name, department, gender, award status, etc.) and reviewers' attendance and productivity. Summarized financial award information will be available in the system. Administrators can use the system to manage progress reports, starting with automated requests for project/progress report updates sent through the system at intervals designated by each funding agency.

Potential Collaborators: Development team: Depending upon whether a system is developed from scratch or leverages an existing one, system development responsibility will lie with Edwin Martin and ITS, our colleagues at CTSI Virtual Home or the SF Coordinating Center, or potentially an outside contractor. Requirement owners: RDO [Emy Volpe (RAP), Gail Fisher (LSP)], Suya Colorado-Caldwell (EVC&P Committees), and many other campus programs that utilize application review and selection processes.

Budget: This project can be developed modularly. With the $50,000, we could plan the development of all modules and develop several of them. Our intention would be that matching funds be identified to enable the full development of the modular plan. Leveraging another existing system may enable the full system build for $50,000.


Comments

Hi Emy,

As you know, I'm the product manager for CTSI's Application Review and Tracking (ART) system, a combination of custom software and an off-the-shelf form builder that does some of what you reference in your proposal.

Our experience with building ART was that it cost significantly more than $50K to get where we are today. And that didn't include some of the major features in your proposal, like flexible workflows. Nor did it include the off-the-shelf form builder ART uses. In developing your proposal, did you get input from software developers to arrive at the estimate?

Also, drawing on our learning with ART, would it be possible to enhance RAP, as an alternative to an entirely new system? We initially built the application piece, then added online review and tracking. It seems like there are good things about RAP. Maybe adding or enhancing some components would be a more practical way to meet the needs of your programs.

Lastly, I wonder if one application management solution truly can satisfy all programs. Some of the programs that use ART do so because of the "single proposal PDF" (which means the same application form for all programs, right?) requirement in RAP. Some of our programs require flexible forms with branched logic based on the choices about sub-programs the applicant chooses.

I'm happy to explain or provide more detail to any of my questions as needed. Thanks for putting in your idea!

Brian

Hi there -

Brian, as I read this, it looks to me like the proposal is indeed an extension or upgrade to the current Application, Review & Tracking (ART) system that we are currently managing, rather than a start-from-scratch kind of thing. Emy, is that correct?

We've definitely had some discussion about this and as some may recall, when we initially scoped the work for ART, we deliberately excluded RAP mainly because of the complexity of the review process.

If it's the right time to revisit this and the time, resources and funding are there, we'd love to discuss. I really appreciate the vote of confidence that the CTSI Virtual Home team has the expertise to develop such a system, but need to think a bit as to whether we have the "resources" at this time ;)


Look forward to others' thoughts and to discussions on feasibility and resourcing.

Oh yes, finally, other potential collaborators, in addition to the Virtual Home team at CTSI, would be folks from the SF Coordinating Center. These guys were the brains and brawn behind the ART system :)


thanks!

Leslie

Hi Leslie and Brian,

Thanks for your feedback. I will address both your comments at once.

We envision this more as an expansion or enhancement of ART. We are aware the cost will likely be more than $50k, but were thinking that we might be able to design a phased or modular approach. For example, we currently have an electronic submission system (built by a contractor) in Java with an associated SQL database. We could maintain this for a time and have it "dump" into something new: a phase I Review Committees management system that would respect the move of our website to a Drupal structure.

We can imagine that the planning will be part of this grant upfront, and the planning could include how to operationalize a phased approach.

With respect to the universal applicability of a RAP management solution, while it's true that a RAP mgt. system would not directly apply to a fellowship application process, a system built to deal with the complexity of RAP could be more easily modified and used by other programs.

Thanks much,

Emy Volpe

Emy,

You present a compelling rationale for developing a solution to support the RAP process.

As you know the Catalyst Program uses the RAP mechanism to solicit and accept applications, but performs all reviews and tracking independently. We have been working with Brian Turner over the last year to implement the management and tracking of submitted projects through ART. In its current form, ART is certainly more efficient than our former 'manual' approach, but there are some critical challenges as well. Our assessment regarding modifying ART as opposed to identifying a new solution altogether is ongoing, and we'd be happy to share our experiences with you.

Ruben.

Thanks Ruben,

Good to know you are also working with the ART team to find ways to improve it so that it could be used by other programs.

It's good to have a common agenda! We'll be in touch.

Emy

Something like this would be great for the Limited Submission Program (LSP), as well! The LSP oversees distinct review committees for every limited submission opportunity (LSO), and with close to 100 LSOs annually, reducing the manual management of particular aspects of application submission and review would be a big help.

After talking to Gretchen, there might be other options out there to investigate, whether for ideas on features or for the software itself. Late last year, there was a presentation via the CTSA Consortium from Northwestern on their system called NUCATS Assist. I wasn't able to attend this presentation, though I think some other folks at UCSF did. In any event, the info on this system, along with a link to a video recording of the presentation itself, is online here: https://ctsacentral.org/tools/tool_shop#NUCATS

Perhaps it will be of help/use.

Thanks Leslie!

I will check that out. We are scheduled to receive a NUCATS demo by Dr. Warren Kibbe at the end of the month, and Nina Jameson and Ed Martin (from ITS) will be assisting us in determining if that system is a good fit for us.

Cheers,

Emy

This is a timely proposal that strengthens the foundation of the research enterprise and mirrors the UCSF Committees Service and Management Portal (CSMP) proposal submitted for the 2012 IT Innovation Contest, showing that it could be adapted beyond the research sphere and be of service to the University enterprise-wide. For reference, the CSMP proposal description follows:

Project Description: This proposal outlines a project for developing, deploying, and supporting a UCSF Committee Service and Management Portal built on the UCSF Salesforce platform and leveraging Chatter capabilities.

Faculty and staff provide support to UCSF’s academic mission by serving on a number of research, clinical, and administrative committees, and their active participation is critical to achieving the institution’s goal of advancing health worldwide.  Currently, there is no single, quick way to determine the number, composition, context, collaborative relationships, and charge of the many committees currently convened at UCSF or what actual impact they have on the success of the enterprise. For example, if one wants to identify existing resources or forums to help move along a specific goal or facilitate a specific decision, there is no existing repository that helps identify that.  UCSF needs a tool that provides quick and easy access to enterprise-wide committee information to senior leadership, committee chairs and members and staff.  The following are key components that such a tool must address:

  • Have committees completed their charge, and if so, should they be disbanded?
  • If committees are currently active, are they still relevant?
  • Do standing committees make sense, or is there efficiency and effectiveness in convening short-lived, task/decision-oriented task forces and/or work groups?
  • How can a committee’s or committee member’s completion of service and/or work product be appropriately and efficiently acknowledged?
  • How can committee staff work be coordinated more centrally and efficiently?
  • How can ongoing, centralized collaboration be promoted and facilitated?

 The UCSF Salesforce platform offers key features to facilitate committee management, such as standardized contact records for all UCSF staff and faculty (fed by EDS), single sign on through MyAccess, Chatter group and activity streams, support for file versioning, and the ability to leverage evolving Salesforce functionality (less need for costly customization).  Building a committee management tool on the UCSF Salesforce platform will allow users to access all relevant data related to committee management in order to efficiently support the organization, maintenance, and decision making processes for Committee Chairs and Managers (staff support) as well as key Executives needing insight about committee efficacy and outcomes at UCSF. 

 Deliverables:

This project will deliver a web-based application using the Salesforce platform that will (a generic data-model sketch follows this list):

  1. Track relevance of committee charge, purpose, and oversight
  2. View, organize, update, and track tasks and data related to committee management (e.g. agendas, notes, attendance, completed action items, etc.)
  3. Identify who is currently serving on what committee(s) and in what capacity
  4. Determine how many committees individual members are serving on and from what departments members are being pulled
  5. Track terms of service and manage committee member rotation
  6. Record and report on “credit” for committee service
  7. Manage and archive committee communication
  8. Produce committee analytics
  9. Communicate data with Advance and the Academic Senate’s Service Portal

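To make the deliverables above concrete, here is a generic sketch (plain Python, not Salesforce code) of the committee data model they imply; the object and field names are illustrative assumptions only.

```python
# Generic sketch of a committee-tracking data model; not Salesforce code.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Membership:
    member: str     # a contact record, e.g. fed from EDS
    role: str       # chair, member, or staff support
    term_start: date
    term_end: date  # enables rotation tracking and service "credit"

@dataclass
class Committee:
    name: str
    charge: str
    active: bool = True  # supports "should it be disbanded?" reviews
    memberships: list[Membership] = field(default_factory=list)

def committees_per_member(committees: list[Committee]) -> dict[str, int]:
    """Analytics example: how many active committees each person serves on."""
    counts: dict[str, int] = {}
    for committee in committees:
        if not committee.active:
            continue
        for m in committee.memberships:
            counts[m.member] = counts.get(m.member, 0) + 1
    return counts
```
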
 Impact and Rationale: 

A centralized repository and service/management portal will have universal impact and:

  1. Minimize administrative burden on all members of our community
  2. Provide up-to-date information and documentation
  3. Allow committee staff to centrally facilitate and manage administrative functions simultaneously for multiple committees
  4. Assist leadership in evaluating the existing committee model as well as individual committees and make data-informed decisions regarding the need for further investment and/or action
  5. Classify and convene appropriate groups depending on input needed, decision to be reached, or work to be accomplished
  6. Eliminate duplicate committees
  7. Reduce in-person meetings
  8. Create a foundation for collaboration across departments

 Measures of Success:

 The following will measure the success of the project:

  1. Stakeholder support and use of the tool
  2. Enterprise-wide acceptance and use of the tool (estimated at 3000+ users)
  3. Reduction in the use of listservs associated with committees
  4. Ability to report on membership and activities across committees 

Terrific comments, thank you Suya.  Not to mention that RAP only deals with 10 standing review committees whose membership is updated annually.  It would be tremendous to have a tool that does all of that!

Emy Volpe

Emy and Suya have laid out a compelling case for a committee/review management system that can be used across a wide variety of functions, be it campus-wide committee management, evaluating students, the RAP program, scientific review committees in departments for human subjects or animal research, and any other group that needs to manage an evaluation process. 

MicroPub, A Publication Platform for Short Replication Studies

1. Rationale: Modern basic science rewards large papers in highly cited journals. However, it is difficult for translational and clinical researchers to assess the quality of a basic science paper. A proxy is the number of citations, but that is a very inaccurate measure; some papers that cannot be replicated have hundreds of citations. Given the lack of safeguards ensuring publication quality, the intense competition to produce high-profile publications incentivizes publication bias (i.e. the tendency of journals to publish experiments that confirm the original hypothesis, or “positive” results). An obvious way to affirm or dispute the quality of a paper is through replication. Unfortunately, at present replication studies are not valued by the basic science community, and they typically go unpublished. A large amount of translational and clinical research likely yields negative results because it was based on invalid basic science premises. Our solution is MicroPub, a platform for soliciting and indexing “micro publications” that replicate data from published basic science articles.

2. Plan

We will create a website that publishes and indexes short replication studies. Each “MicroPub” will contain an abstract, one figure, detailed methodology, and a short discussion. Editorial moderation, rather than peer review, will be used to evaluate the soundness of the research. The short format will hasten the publication process. Validation or refutation by independent investigators will serve as a high-quality measure of a finding’s reproducibility and, by extension, its validity. This will be a critical resource for determining whether a basic science finding merits investigation at the translational/clinical level. All papers will be Open Access.

 

Initially, we will solicit MicroPubs from UCSF researchers.  We will utilize social media and traditional media outlets to popularize MicroPub.  Ultimately, we envision MicroPub being linked to the original article listings on PubMed, as well as related follow-up translational/clinical papers.   

 

2013-03-12 Addendum to Proposal:

We are discussing our goals with the UCSF Library, the Open Science Framework and the Reproducibility Initiative, and we have expanded our proposal as follows:


Many Small-Scale, Technical Basic Biological Experiments Can Only be Replicated by Academic Labs

Although core facilities and outside vendors are able to perform numerous specialized techniques at the same level as, or in some cases better than, most academic labs, many high impact publications use novel methodologies that are technically challenging to replicate. These “artisanal” laboratories rely on apprentices (in the form of graduate students and postdocs), who learn from experts within the laboratory and then develop their own lines of research. Thus, a paper can often only be replicated by members of the same academic lab or a competing academic lab.  There is currently no outlet for publication of these replication studies, so they are very rarely published, and the scientific community misses out on this valuable information. MicroPub fills this niche.

 

MicroPub Re-aligns Cultural Incentives and Promotes Transparency

There are numerous reasons for the widespread lack of reproducibility in basic biomedical sciences, but one reason is preeminent: you are not allowed to be wrong. This is institutional (NIH funding) and cultural. Given the current state of NIH funding, publishing a replication study that failed – admitting you are wrong – risks jeopardizing one's career.  And if many replication studies are never revealed to the wider scientific community, why do them?

 

MicroPub aims to shift this culture.  It provides a venue that recognizes that irreproducibility often does not reflect scientific fraud or sloppiness; instead, it is often due to subtle differences in experimental conditions or analyses, or to publication bias.  Currently there is no published venue for discussion of these issues.  Although open access publications that utilize post-publication peer review offer an avenue for online discussions, we feel that commenting per se does not carry the same strength as first-hand experimental data.  MicroPub provides a way to quickly publish first-hand data, and integrates it with open access and post-publication discussions.

 

Moreover, MicroPub provides a venue for scientists other than the initial authors to publish replication studies.  Right now, the peer-review barrier for doing so, particularly when a replication study contradicts the original study, is enormously high.  MicroPub is specifically devoted to studies of this type, so the barriers will be surmountable and the studies can be published and disseminated.

 

Of course, replication studies by initial authors will also be welcome.  It is often members of the same lab who replicate (or cannot replicate) each other’s results.  Again there are enormous disincentives (job security, NIH funding, reputation) against publishing such studies. MicroPub becomes a way for labs to publicly acknowledge and explore reasons for irreproducibility, and ultimately, to establish a reputation for honing long-term, course-correcting, scientifically valid results.  It is our hope that as cultural barriers against admitting error shift, so too will the institutional (NIH funding) barriers. 

 

2013-03-14 Addendum to Proposal:


MicroPub Keeps Track of Methodology Requests and Serves as a Detailed Methods Repository

One of the major barriers to replicating biological experiments is the unavailability of clear step-by-step methodologies and help with reagents and equipment. Due to journal space limitations, authors are unable to explain exactly how they did their experiments, and instead rely on citing previous publications that describe similar methods. However, methods are usually modified, and these changes are often the key to getting the published results. As there is currently no accountability, requesting protocols and reagents can be a slow, tedious process. MicroPub aims to solve this issue by providing a custom contact and tracking service that emails the first and corresponding authors, explaining that a replicator wishes to replicate a particular figure in their publication. MicroPub will then announce this request on the website and keep track of how long it has been since the original authors were notified, and whether they have responded. These protocols are published directly on MicroPub and become available not only to the replicator, but also to the wider community of scientists who may also wish to repeat the experiment. To provide incentive, both the original authors and the replicator are credited for the methodology section. There are also cultural incentives, as collaborations are often predicated upon shared protocols and reagents. This act of sharing is one of the many benefits of attending scientific conferences, and MicroPub can provide a platform for these kinds of productive social interactions online.
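A minimal sketch of the request-tracking record this service would maintain (all names and fields are assumptions for illustration):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class MethodsRequest:
        """Tracks one request to original authors for a step-by-step protocol."""
        pmid: str                  # original publication being replicated
        figure: str                # figure the replicator wishes to replicate
        replicator_email: str
        notified_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        responded_at: Optional[datetime] = None

        def days_outstanding(self) -> float:
            """How long the request has been open, for public display on the site."""
            end = self.responded_at or datetime.now(timezone.utc)
            return (end - self.notified_at).total_seconds() / 86400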

 

3.    Criteria and metrics for success

MicroPub fits squarely within the mission of CTSI: MicroPub “nurtures communication, encourages collaboration, fosters innovation, and catalyzes the successful conduct of research”.  Most importantly, MicroPub has the potential to revolutionize translational and clinical research because it will help ensure that researchers only pursue investigations of basic science findings that have been independently validated many times.  The success of the initiative will be measured by the total number of MicroPubs and by the number of MicroPubs used as justification for translational/clinical follow-up research.

 

4.    Approximate cost and very brief justification ($50K max)

The primary costs will be the development of a standardized publication format and of the website. Approximate initial cost: $35K

 

5.    Collaborators

We are a team of two, S.Y. Christin Chong, Ph.D., postdoctoral fellow, and Jonathan Russell, M.D.-Ph.D. student.  Both of us have extensive experience in scientific publication, science writing, and social media.  We will hire a developer experienced in creating web-based publishing platforms and social web services. 

Comments

Fascinating idea! How would MicroPubs compare to "traditional" research blogging?

It might be useful to find a partner at the UCSF Library, or the California Digital Library, as they also spend a good deal of time thinking about the future of research publishing.

Thank you for your comment! We see the future of MicroPub as complementary to traditional research blogging, and there are three major factors that we think will differentiate MicroPub and ensure its success:

1. Generation of Primary Data
Traditional blogging material mostly consists of meta-research or opinion pieces, but MicroPubs are based on primary research conducted by their authors. "Actions speak louder than words," as the saying goes, and there is more implicit trust when original scientific articles are supported by independent replication data.

2. Direct Incentives
MicroPub aims to be considered a form of legitimate peer-reviewed publication. Although there have been recent discussions about including other forms of communication, such as blogging, when considering the scientific output of an individual, the reality is that peer-reviewed publications remain the gold standard when evaluating academic and industry job candidates. MicroPub will complement traditional peer-reviewed publications by showcasing solid scientific contributions that might not otherwise have resulted in publication.

3. Appropriate Audience
We feel that replicated data only makes sense if it is associated with the original article, because the intended audience is those who wish to know whether the data presented in the original article are indeed reproducible. One of the goals of MicroPub (which differs from traditional research blogging) is to link replicated data with the original article on PubMed, which is the primary way most researchers find scientific articles.

Thank you also for your suggestion on developing additional collaborative sources--we will contact the UCSF Library and the California Digital Library and tell them about our proposal with CTSI!

The Library is very interested in helping promote and support alternative publication models (alternative to the traditional subscription journal).  We are supportive of open access as a model to help disseminate and make use of research results. There are some tools available through the California Digital Library that might be helpful for this publication. We'd love to talk with you more about this proposal.

Dear Anneliese,

 

Thank you for your comment! We are very excited to collaborate with the Library and I hope to continue our discussion through email.

This is certainly an interesting idea, but you're 5 years too late: I needed you in grad school when I was futilely trying to follow protocols from published papers!

 

My only concern is that with limited bandwidth and resources, researchers may not be too keen to spend the time and money on replicating experiments. Apart from a small set of scientists who may do one or two experiments "out of the goodness of their hearts," how do you envision a sustainable way to keep researchers motivated to conduct these experiments, knowing they're not being compensated in the "usual" ways (publications, etc.)?

Hi Erin,

Thank you for your comment--we definitely share your frustrations and want to prevent future budding scientists from experiencing the same! 

To address your concern:

As research is conducted from "the shoulders of giants," researchers often have to replicate studies as a prerequisite for exploring their own hypotheses based on those findings. However, these results just languish in lab notebooks because there is no avenue for publishing them. If the experiments are being done anyway, why not have a quick and easy way to publish them?

 

Specifically, MicroPub can be a great way for graduate students to formally publish their first work. Oftentimes graduate work is based on previously published data, and MicroPub can be a stepping stone toward learning how to put data together and present it to the scientific community.

 

Furthermore, as more postdocs leave for industry jobs, they may never get around to finishing their big paper! MicroPub can be used to demonstrate scientific productivity and proficiency in specific techniques sought out by biotech firms.


In addition, while we recognize that currently this type of publication would not be as "prestigious" as a normal paper, the goal is ultimately to elevate the worth of replication studies, since reproducibility separates science from mere anecdote, and too many papers nowadays are, alas, not reproducible. 

The idealism of this proposal is readily apparent.  I regret to present the realism.  Let's say, to start an area of investigation, the PreDoc tries to replicate a published result, and DOESN'T.  What are the alternative possibilities (see http://www.pubmedcentral.gov/articlerender.fcgi?artid=2048741)?  One is that the experimenter didn't know ALL of the necessary parameters, as compared with the experimenters that published the original article.  Now, what should the PreDoc do?  Should they publish this, so that doubt is created about the original article?  And if they do, and the original authors point out the mistake in the methodology, will the PreDoc ever work in this field?  THIS PROBLEM is one for which there has never been an adequate solution, and, unfortunately, the publication barrier is NOT the problem.  And "lowering the bar" isn't good, because this means that the error in the methodology might not be evident in the MicroArticle.

The other problem is that NIH will NOT fund replications, so a future grant application that lists the MicroArticle may come under suspicion.

The way that errors are detected is that the CONSEQUENCES of the original finding are NOT realized.  This isn't as good as a replication, but it is better than nothing.  Alas, many findings don't lead on to other work.  So ..... well, I better stop now.

 

Full credit for the idealism!

Dear Dr. Jewett,

 

Thank you for your thoughtful comments.  With regard to experimental parameters and methodology, you have pointed out some issues with our initial proposal, which we have modified (see above).  We concede that the proposal is idealistic.  We also realize that change has to begin at the grassroots academic level. MicroPub provides a basic infrastructure that can be presented to policy makers in order to restructure the way biomedical science research is practiced. We believe that the goal of NIH is to fund reproducible research, and ultimately policies will have to respond to the fact that the implicit trust in published results has been eroded by grant shortages.

 

Also, although NIH does not directly fund replications, replication is often conducted when pursuing novel hypotheses in the form of control experiments. However, these experimental replications languish in lab notebooks and internal meetings. The availability of this additional data will help scientists and the public evaluate the soundness of the research.

 

All reforms are idealistic, but like open access publishing, some of them stick.  It doesn't hurt to try.

Commenting is closed.

Tools and Infrastructure to Facilitate Compliance with ClinicalTrials.Gov Results Reporting

Proposal Status: 

Rationale:  US law requires researchers to register clinical trials within 21 days of enrolling the first participant (since 27Sep2007) and to report clinical trial results in the ClinicalTrials.Gov (CT.gov) online system within 12 months of the last participant visit.  Reporting results does not mean publishing in a peer reviewed journal, but rather entering information in CT.Gov’s standardized system.  Noncompliance can result in the following for institutions and individual PIs: fines of up to $10,000 per day, withholding of future federal research funding, prevention of study publication, and/or public notice of noncompliance.  Despite legal requirements and potential penalties, researchers still lag in mandatory reporting, with only 22% compliance nationally (e.g. Prayle et al. BMJ 2012) and 73 UCSF trials campuswide currently overdue in reporting results.  In 2012, the US Office of Management and Budget increased its investigator burden estimate to 41 hours for results reporting in CT.Gov.  In short, compliance is hampered at least partly because CT.Gov is not an intuitive, user-friendly system.

Plan:  The UCSF Dental Data Coordinating Center (DDCC) has already partnered with the CTSI Regulatory Knowledge and Support (RKS) Program and the UCSF Office of Ethics and Compliance (OEC) to develop a set of 5 SAS macros to summarize trial results data in a format suitable for CT.Gov Simple Results tables (http://hub.ucsf.edu/basic-results-sas-macros).  This funding would allow expanding those macros and developing other tools to meet the needs of UCSF investigators in complying with federal law.  The CT.Gov system is frequently updated and expanded, so this support would allow the tools to stay current with both required and optional reporting elements.  For example, one current macro allows 3 types of outcome measures: number (frequency), mean, or median; others that CT.Gov allows, such as geometric mean, will be added.  By developing and refining training materials and consultation, staff supported under this funding mechanism will provide technical assistance to UCSF investigators to help them use these free tools to meet compliance requirements.  Staff will also help troubleshoot the problems that UCSF CT.Gov Administrators identify.  The UCSF DDCC will set up a recharge for these technical assistance and training services.
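The actual tools are SAS macros (linked above); purely as an illustrative Python sketch of the kind of per-arm outcome summary a Simple Results table requires (the column names and data below are made up):

    import pandas as pd

    # Hypothetical per-participant outcome data, one row per participant.
    df = pd.DataFrame({
        "arm": ["A", "A", "A", "B", "B", "B"],
        "outcome": [4.2, 5.1, 3.8, 6.0, 5.5, 6.3],
    })

    # CT.Gov Simple Results tables report an outcome measure per arm,
    # e.g. mean with a dispersion measure and the number of participants analyzed.
    summary = df.groupby("arm")["outcome"].agg(
        participants="count", mean="mean", sd="std"
    ).round(2)
    print(summary)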

Criteria and metrics for success:
•    Develop and disseminate additional training materials and tools to be posted on the ClinicalTrials.Gov webpage of the UCSF HUB website (http://hub.ucsf.edu/clinicaltrialsgov).
•    Reduce the noncompliant UCSF trials from 73 to 19 (i.e. slightly more than 1 per week over 52 weeks).
•    Establish a recharge mechanism to provide technical assistance and training to use results reporting tools.

Approximate cost and very brief justification ($50K max):
Most of the proposed budget of up to $50,000 (equivalent to just 5 days of possible federal fines) would be used to create a $48,000 fund from which the 73 current UCSF trials as well as future trials upon completion could draw to pay for technical assistance and advanced training in CT.Gov results reporting provided by the DDCC at rates similar to the CTSI biostatistics consultants.  (With an assistance fund this size, 54 UCSF trials could receive an average of 8 hours of technical assistance to attain compliance.  A smaller award would create a smaller assistance fund.) To make this project “shovel ready” prior to setting up a formal recharge mechanism, percent effort of staff will be allocated to the fund proportional to hours worked.  The remaining $2000 will be used to facilitate setting up the recharge mechanism.  Following this proposed CTSI grant, this service would remain as a sustainable service with a recharge mechanism and could serve other UC health science campuses (which are also struggling to comply with results reporting) and other institutions with CTSAs.

Collaborators:
This proposed work is a collaboration between personnel affiliated with the UCSF DDCC and the CTSI RKS Program, UCSF OEC.  Steven Gregorich, PhD statistician, is Director of the DDCC which coordinates 4 large prevention trials for three U54 cooperative agreements (UCSF, Boston University, and University of Colorado—Denver).  Stuart Gansky, DrPH biostatistician, formerly directed the DDCC and now directs the UCSF U54 Center to Address Disparities in Children’s Oral Health (known as CAN DO).  Marlene Berro, MS, RAC of the OEC, as the UCSF administrator for ClinicalTrials.Gov, maintains the list of records with problems.  Nancy Cheng with master’s degrees in computer science and biostatistics has written SAS macros to produce Simple Results tables for ClinicalTrials.Gov.  Elaine Cooperstein, MS, CCRP and Sarit Helman, MPH are clinical trials specialists with experience with SAS, OnCore and ClinicalTrials.Gov.  Terri Sonoda is the budget analyst for CAN DO and the DDCC.

Comments

Great idea! Can you give more details on how you plan to do outreach to PIs of future trials, so as to lower noncompliance by about 75%?

Yes.  Sorry, due to the 1-page length we couldn't put everything in there.

We plan to work with the CTSI Regulatory Knowledge Service, the UCSF Office of Ethics and Compliance, and the UCSF ClinicalTrials.Gov Administrators to contact UCSF "Responsible Parties" (usually PIs) whose ClinicalTrials.Gov submissions (registrations of new trials and reporting of results for completed trials) have errors.  We will also post information about this service on the UCSF HUB website, and we will work together to identify how best to make this service known.  In addition, we will work with UCSF representatives to UCOP policy groups on this issue to let other campuses know about the service.

Great idea -- I had little idea I remained in non-compliance. The cancer center should consider individual emails to alert PIs, as none of my colleagues seem to realize this.

Thanks.  I think the situation was exacerbated by the fact that previously many sponsors (e.g. NIH Institutes and Centers) were the "Responsible Party" and registered the trials.  Then, when they realized how much effort is involved, they transferred responsibility to the PIs, and that communication may not have included details on what is entailed.

We all want to be compliant but don't always know how, or even that we are not in compliance.  Just this fall I had no idea I was (briefly) in non-compliance for not logging in to actively demonstrate that my ongoing registered trial still had the same status of recruiting participants - I didn't have to change anything in the registration, I just had to confirm there were no changes to become compliant.

If this works at UCSF, might we be able to disseminate it to the national CTSI consortium?

We can certainly disseminate the training materials and SAS macros.  In fact, we have already made the first batch of SAS macros available on the UCSF HUB website: http://hub.ucsf.edu/basic-results-sas-macros

Other aspects - most notably the 1-on-1 technical assistance - will not be very scalable, but once UCSF and UC system needs are met, we could offer services for recharge to non-UC investigators.

Commenting is closed.

Decreasing Time for CHR Approval of Full Committee Review Applications

Proposal Status: 

Rationale: There is currently an unacceptably long delay in the time required to obtain study approval by the UCSF Committee on Human Research (CHR). The CHR must review and approve all studies involving human subjects performed by UCSF faculty, staff, or students. Studies with more than minimal risk to participants, such as trials of diagnostic tests or treatments, require in-depth review by a faculty-led committee, a process referred to as Full Committee Review. Currently, the average time from submission to approval of these studies is 84 days. This long wait time can be a major obstacle to implementing a new study by compromising grant funding, industry contracts, research staff support, and the general progress of science by UCSF Investigators.

The major source of delay in approval of Full Committee Review applications is the process of “returns”, wherein the CHR requests that the investigator make a change or modification to the proposal. Among initial CHR submissions, 75% are returned for modification by a CHR analyst because they are unacceptable for review by the Full Committee. On average, the investigator response to this initial return takes 16 days. When applications are returned to the investigator after a Full Committee Review, the average time for the investigator to respond is 21 days. Many applications have multiple returns that collectively increase the time required for CHR approval.

The aim of this project is to significantly decrease the duration of CHR approval time for Full Committee Review studies by reducing the number of applications returned to the investigator for revision. UCSF approval time for Full Committee Review is significantly slower than the national average; the national target is <42 days for the duration of the approval process. An intervention to minimize or eliminate returns could meet this national goal for CHR excellence.

Plan: We will work in collaboration with John Heldens, Director of the CHR, and CHR staff to develop effective strategies that decrease the number of Full Committee Review application returns. In 2011, the CHR conducted a review of 700 applications and identified several broad categories of common investigator errors. Using these data, we will focus on improving 2 areas that require frequent returns:

1. Administrative Errors: 50% of initial applications are either incomplete or missing required attachments. These returned applications result in significant delays because an incomplete application cannot be submitted for Full Committee Review.

Improvement Plan: We will review 100 randomly selected applications returned for missing or incomplete components to identify the most common errors (e.g. missing consent form or incomplete investigator descriptions). We will then work closely with CHR staff and the iMedRIS administrator to develop interventions that minimize these administrative errors, such as programmed iMedRIS alerts that prompt the investigator to complete required sections and attach required documents. We will then use logistic regression models to compare the proportion of returned applications among 100 applications that use the new interventions with the proportion among 100 applications submitted in the current iMedRIS format.
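As a minimal sketch of that comparison (Python/statsmodels; the data and return rates below are simulated placeholders, not study data):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    # Hypothetical data: 100 applications per group; returned = 1 if sent back.
    intervention = np.repeat([0, 1], 100)
    returned = np.concatenate([
        rng.binomial(1, 0.50, 100),  # current iMedRIS format
        rng.binomial(1, 0.30, 100),  # with programmed alerts/prompts
    ])

    X = sm.add_constant(intervention)
    fit = sm.Logit(returned, X).fit()
    print(fit.summary())
    print("odds ratio for return under the intervention:", np.exp(fit.params[1]))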

2. Content Errors: The second most common cause of returns is significant deficits in the content of the application such as an inadequate description of study procedures or the data safety and monitoring plan, or an incomplete explanation of the study population or recruitment procedures.

Improvement Plan: In collaboration with CHR staff, we will create a “Content Score”, a summary score that reflects the overall quality of the application in terms of key content areas that, when insufficiently addressed, put the application at high risk for return. To validate the use of the Content Score, we will use linear regression models to determine whether a poor Content Score is associated with slower time to CHR approval among 100 randomly selected applications. We will track these applications for the number of returns, the length of time for investigators to resubmit after each return, and the total number of days before CHR approval.
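A hedged sketch of that validation step (again statsmodels; the score scale and the simulated relationship are assumptions):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    # Hypothetical data for 100 randomly selected applications.
    content_score = rng.integers(1, 11, 100)  # 1 (poor) to 10 (excellent), assumed scale
    days_to_approval = 100 - 5 * content_score + rng.normal(0, 10, 100)

    X = sm.add_constant(content_score)
    fit = sm.OLS(days_to_approval, X).fit()
    print(fit.summary())  # a significant negative slope would support the score as predictive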

If the Content Score proves to be predictive of time to CHR approval, we will use the Content Score to develop targeted interventions to minimize returns. Possible strategies might include notifying investigators with poor Content Scores that there is a 90% likelihood of a return and recommending further review of specific sections prior to Full Committee review, or incentivizing investigators to provide high quality applications by allowing those with favorable Content Scores to be prioritized for review.

The Content Score will also highlight commonly misunderstood sections or questions of the CHR application. This will facilitate the development of targeted changes in the iMedRIS application to improve investigator responses and subsequently reduce return rates. Possible iMedRIS changes include re-phrasing or re-formatting questions, showing quick links to examples of high quality responses, or including links to other sources of information and help.

Criteria and Metrics for Success: We will evaluate the success of our project based on significant changes in the time for CHR approval of Full Committee Review applications. In the first 3 months after completing the interventions developed in this project, we will calculate the proportion of applications with administrative errors, the mean number of returns per application, and the total time for CHR approval of all Full Committee Review applications. We will then compare these outcomes to all Full Committee Review applications submitted in the 3 months prior to initiating the new interventions. Metrics of success will be:

  1. A decrease in the proportion of applications with administrative errors to <20%
  2. A 20% decrease in the proportion of applications returned for content errors
  3. A decrease in the total time for CHR approval to <43 days

Budget: We request $50,000 to complete this project. The project involves significant data collection and analysis as well as iMedRIS programming and testing. Funds will be used to support a programmer/analyst to store, clean, manage and analyze data. Additional funds will support 5% effort for the clinical investigators, 2 CHR analysts, the iMedRIS administrator, and the CHR Director. These key personnel will work together to develop and test effective interventions to decrease CHR approval time.

Collaborators: Vanessa Jacoby, MD, MAS will be the principal investigator for this project. Dr. Jacoby is an Assistant Professor in the Department of Obstetrics, Gynecology, and Reproductive Sciences with a clinical research program focused on surgical treatments of common gynecologic conditions, such as uterine fibroids. She has advanced training in clinical research methods and has conducted multiple studies requiring Full Committee Review. Amy Gelfand, MD is a Clinical Instructor in the Department of Pediatric Neurology with clinical and research expertise in the care of children with chronic headaches. Dr. Gelfand is currently leading a project to simplify the CHR application process for low risk chart review studies, and she will apply her expertise and experience from that project to assist Dr. Jacoby in completing the current proposal. John Heldens is the Director of the CHR with a focus on decreasing the number of returns among CHR Full Committee applications. Dr. Jacoby and Dr. Gelfand have collaborated with Mr. Heldens on previous projects and will work closely with him on the proposed project as well.

Comments

CHR approval can be a daunting hurdle requiring significant activation energy when initiating new studies. Efforts to improve and streamline the process will help decrease lost productivity due to the types of friction described in this proposal. This proposal would have broad and welcomed impact.

Consent forms that are too complex or too technical and need to be edited for plainer language are also a source of delays in CHR approval.  

We agree that incorrect content in the consent form may result in a return of the application for edits and thus a delay in the approval time. Item #2 under our "Plan" should address this concern by incorporating the content and quality of the consent form into our proposed "Content Score".

I think this is a really great idea - I like the methodology you are proposing to first analyze a randomly selected set of applications to identify the most common administrative errors.  Just disseminating the list would be of benefit to investigators.  And the "Content Score" concept is very interesting.  In addition to determining whether or not the Content Score is correlated with time to approval, it would also be interesting to see whether a CHR staff intervention based on low Content Score was associated with a decreased time to approval compared to those without an intervention.

"50% of initial applications are either incomplete or missing required attachments."  Wow.  That's crazy.  It  was my understanding when IMedRIS was first rolled out that initial changes and upgrades would be to the CHR side, but then there would be a focus on end users.  I like the idea of reviewing a random sample of applications, but I'm wondering if some sort of survey of regular users asking their biggest frustrations might also yield interesting information.  I'm so glad you're including an IMedRIS programmer as part of the team, because I'm guessing simplifying the process on the user end (for example making consent forms easier to attach) could go a long way in improving approval time.

Thank you for this nice feedback on our project. We agree that CHR user input could improve our understanding of the high rate of returns. In developing this project, we did discuss including a survey of CHR users to identify commonly misunderstood parts of the application. However, we ultimately felt that this would increase the scope of the project somewhat beyond the budget and time limitations of this pilot proposal. In lieu of a user survey, we have described how we believe the Content Score will identify commonly misunderstood sections or questions of the CHR application, allowing us to develop targeted improvements in these areas.

I think another aspect that should be looked at is the workload on current CHR analysts. I have seen the same analysts assigned several studies at the same time, which delays the review process of submitted applications and consequently study approvals.

That is another very good point. As part of a separate project to analyze our business processes, we will be reviewing how work is distributed among HRPP analysts. However, the success of this pilot proposal would mean HRPP staff would spend far less time on each application.

This is a great project idea. It might be helpful to perform a quick assessment of CHR's 'capacity' and current resource utilization.  The problem could reside in having higher demand (number of studies) than can be handled by the current CHR structure.  Alternatively, this assessment could also reveal bottlenecks in the process as mentioned in the previous comments (i.e. high number of studies per analyst, iMedRIS limitations).  Overall, the success of this project can have a great impact on the protocol approval process, and ultimately on research as a whole across UCSF and affiliated institutions.

Thank you for your feedback and support. We decided to focus on reducing administrative and content errors because those have proven to be very difficult problems for us (HRPP) to solve on our own, and improvement in these areas would have a very positive and tangible impact on reducing effort for researchers, HRPP staff and IRB members. HRPP is currently undergoing a separate analysis of our business process, and we continue to look for additional ways to reduce the number of submissions.

This application addresses a very important issue, and in general the approach appears very sound.  However, there is a lack of detail concerning the specific methods used to determine outcomes, such as "comparing the initial return rate among 100 applications that use the new interventions compared with 100 applications that are submitted" and:

  1. A decrease in the proportion of applications with administrative errors to <20%
  2. A 20% decrease in the proportion of applications returned for content errors
  3. A decrease in the total time for CHR approval to <43 days

I suggest that the applicant include a statistician in the application in order to:

1) develop a statistical approach to determining outcomes, and

2) explore the use of a data mining or machine learning approach to identifying features in applications that take longer to review.

We appreciate this feedback about the lack of detail in our statistical plan. To address this critique, we have edited the proposal to include further detail on the multivariable models we will use for analysis, as well as the approach to measuring the "metrics of success" that this reviewer outlines as #1-3.

The PI of this study has advanced training in biostatistics and a master's degree in clinical research. She has completed many studies using statistical approaches similar to those we plan to utilize in this proposal, without the support of a biostatistician. Therefore, we believe we will be able to complete the analysis without additional staff support from a biostatistician.

This is a great idea, and it would be beneficial for improving turn-around times for Expedited applications as well. 

 

You might consider using an A/B testing tool such as Google Website Optimizer or Optimizely to assess any changes to the iMedRIS interface. These tools make it very simple to run randomized, controlled web experiments. For example, to test an intervention to address the completion/attachment problem, you could set up an experiment where one third of users (or applications) use the current 'sign-off' page, one third see relevant warning messages before signing off, and one third use a generic, required checklist. The tools would randomize people or applications to the different versions and output a list of which users/applications received which version. You could then use this list to analyze the various outcomes of interest across the versions. If you have enough volume, you can create multivariate experiments as well.
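A minimal sketch of the three-arm assignment such tools perform (pure illustration; the version names and IDs are made up):

    import hashlib

    versions = ["current_signoff", "warning_messages", "required_checklist"]

    def assign_version(application_id: str) -> str:
        """Deterministically bucket an application into one of three versions,
        so the same application always sees the same sign-off page."""
        digest = int(hashlib.sha256(application_id.encode()).hexdigest(), 16)
        return versions[digest % len(versions)]

    for app_id in ["app-001", "app-002", "app-003"]:
        print(app_id, "->", assign_version(app_id))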

 

Can you give more detail about the Content Score? Would this be some sort of automated keyword algorithm screening before the application reaches the analysts? If the Content Score were not automated, I'm not sure how it would save time. Analysts already return applications that are not sufficiently detailed. If an application passes analyst screening, the requests for more detail come from the full committee. That said, having links to high quality responses would be enormously helpful. 

 

Commenting is closed.

Search Engine to Directly Access EHRs without Data Warehouses or Loss of Unstructured Data

Proposal Status: 

Rationale
Research hospitals, including UCSF, spend millions of dollars a year moving data from EHRs (electronic health records) to alternate data warehouses for analytics. Consultants are hired to map data to and from warehouses. As a result of this time- and labor-intensive process, a search query can take up to two months to complete. A query does not even produce a complete set of patients matching the search criteria, because data stored in hospitals’ legacy systems, or entered as physician notes in EHRs, lack clinical context and/or code association. Consequently, cohort identification and statistical correlations are severely limited in the conduct of clinical and translational research. As the adoption of EHRs grows, healthcare organizations need analytic tools that can aggregate data from disparate sources so that they have a more complete and comprehensive view of individual patients and patient populations.
Plan
Massive Minable Medical Data (M3D) is being developed as a search engine for clinical researchers at UCSF to search and analyze information directly from EHRs, without the laborious, time-consuming, and expensive mapping of data to new data warehouses. It offers fast and accurate data retrieval from natural language queries posed by the researcher. Natural language processing allows M3D users to ask intelligent questions to discover correlations, e.g. "how many patients with heart disease were taking Vioxx?" Direct integration with EHRs will save UCSF data moving expenses, potentially reducing costs by an order of magnitude. The specific difficulty with large EHR vendors, such as Epic, is handling a system of many thousands of database tables. To address this, M3D has developed an algorithm that uses machine learning to identify which tables contain relevant data, even when the researcher does not know which tables those are.
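The proposal does not disclose the algorithm itself; purely as an illustrative stand-in for the machine-learned model, table relevance could be scored by matching query terms against table and column metadata:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical EHR table metadata: table name plus column descriptions.
    tables = {
        "MED_ORDERS": "medication order drug name dose start date end date",
        "DX_CODES": "diagnosis icd code description onset date",
        "CLIN_NOTES": "physician note free text author date unstructured",
    }

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(tables.values())

    def rank_tables(query: str):
        """Rank tables by similarity of their metadata to a natural language query."""
        scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
        return sorted(zip(tables, scores), key=lambda pair: -pair[1])

    print(rank_tables("how many patients with heart disease were taking Vioxx"))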
Metrics for Success
SUMMARY/MILESTONES
1. Identify high priority query types
2. Secure access to representative datasets for validation
3. Pilot study/ adoption by clinical researchers
4. Expansion of capacities for physicians
5. Pilot study/ adoption by physicians

Success is the widespread adoption of M3D for accessing the wealth of information contained in EHRs at UCSF. We are currently seeking feedback from the medical community, specifically clinical researchers, regarding the highest priority types of data/queries. Validation of the search engine on these query types, and access to representative datasets for development and validation, are prerequisites to a pilot study with active researchers. Once the design has been refined to the satisfaction of clinical researchers, the process will be repeated with a broader set of physicians at UCSF.
Cost
Initial costs will be focused on securing the proper legal standing to work with patient data on Epic systems, through either professional legal counsel or Epic certification. Important but technically simple aspects of the search engine, such as the user interface, will be contracted out as needed.
Collaboration
MEMBERS
Michael Sachs- UCSF BMS graduate student

Jingwei Zhang- UCSF/UCB BioEng graduate student

Sabrina Atienza- UCB CompSci 2013

George Ramonov- UCB CompSci 2013

ADVISOR
Anil Sethi- Founded Sequoia Software, currently CEO of Gliimpse, 20+ years of experience in healthcare IT.

Commenting is closed.

Developing a Policy Roadmap for Research Outputs

Proposal Status: 

Rationale:

The information age has brought about incredible shifts in the dissemination of knowledge, and academic research is no exception. Research stakeholders, including funding agencies, journals, researchers, and research institutions, have all recognized the potential of Open Access to accelerate the scientific enterprise. Recent stakeholder policy revisions related to the demand for Open Access have become a source of uncertainty for the research community, in terms of both traditional (academic articles) and non-traditional (data, software, etc.) research outputs. The creation of a policy roadmap that compiles and clarifies funder, publisher, and institutional policies regarding access to research outputs will help researchers navigate the Open Knowledge landscape, supporting compliance and encouraging openness in research.

 

Plan:

1)   Identify any available resources that document research stakeholder policies regarding traditional and non-traditional research outputs. (For example, the UK JISC JoRD Project is developing a central service on journal research data policies.)

2)   Compile these resources in a web-based tool that allows researchers to reconcile the complement of policies relevant to their work (an illustrative record sketch follows this list).

3)   Enhance the web resource with additional general information about licensing & copyright as they relate to research outputs, as well as international considerations.
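As an illustrative sketch of the kind of record the web-based tool might store for each policy (all field names are assumptions):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class OutputPolicy:
        """One stakeholder policy governing access to a research output."""
        stakeholder: str         # e.g. a funder, publisher, or institution
        stakeholder_type: str    # "funder" | "publisher" | "institution"
        output_type: str         # "article", "data", "software", ...
        policy_url: str          # verified link to the policy
        full_text: str           # archived full text of the policy
        license_notes: str = ""  # licensing/copyright considerations
        countries: List[str] = field(default_factory=list)  # international scope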

 

Deliverables:

-       A database of verified resources linking directly to stakeholder policies regarding research outputs

-       Full text documentation of these policies

-       Webpage providing access to the database & associated policy text

 

Success Metrics:

Success will be determined by visits to the website. Web tracking metrics will include the number of visitors (return and unique) and the length of time spent on the site. Further indications of success will include any contacts initiated via the website, indicating community engagement.

 

Approximate Cost:

The proposed work will cost approximately $25K and will consist of 6 weeks of effort for an Analyst to compile the policy information, 3 weeks of effort for a web developer to setup a basic webpage to display the information, and some modest funds for promotional efforts.

 

Collaborators:

Input for the proposed resource will be sought from personnel in the areas of research and research support, as well as policy. Counsel from relevant organizations, such as Creative Commons, will also be pursued.

 

Comments

This is a great idea, and very necessary!

 

A few ideas:

  • Given that we have easy programmatic access to researchers' publications (thanks to Profiles APIs) and potential access to databases of journal-specific policies, are there ways to offer custom advice to people?
  • If the primary success metric is how many people look at this, it would be useful to think about how to make this resource discoverable by end-users. For example, maybe work with the Profiles team to see if there are ways to send custom emails to targeted researchers, instead of waiting for them to access a static website.
  • Should someone from the library be a collaborator?

Thanks for the comment, to your points:

  • That is a very interesting idea! We would need to determine the point in the research lifecycle in which this information would be most useful / most likely to affect behavior (e.g. is it critical to know these policies prior to manuscript publication?)
  • Yes! Once the resource is established, outreach efforts should be pursued from all angles
  • Good point. The Library may even have the most appropriate analyst to do the policy compilation. We should consult with a variety of research support groups on campus in order to identify potential collaborators on this.

This is a very interesting idea, and something that would be very useful for researchers at UCSF and beyond. The Library is familiar with many of the institutional and governmental open access policy initiatives, and the copyright issues they bring up with regard to publishers. Many of these policies are also expanding to data management. This is definitely an area of interest to the Library.

This proposal identifies a "kink" in the otherwise smooth flow {joke} of developments in Open Access.  Clearly, an individual author/scientist finds it easier to "stay with the known" rather than dive into the "legal waters" of Open Access.  The compilation would be of great use, though my first impression is that the Creative Commons will be most important.  If other areas are discovered/compiled, all the better.

 

Here is a problem I see, as a potential user:  "Full text" will be almost impossible to decipher with respect to the "effects" of a given action.  I would imagine that I would want to have "reviews/critiques" that use your database to show the consequences of a given action.  It might even be a FAQ: "What effect will it have to publish in Open Access, as compared with a journal that accepts the NIH publication policy?"  Answers to such questions, referencing the database, would put the answers on a secure footing.

 

Another thing, that I didn't see mentioned is the issue of the regulations in DIFFERENT COUNTRIES.  While the majority of scientific publications come out of the U.S., many Journals are published in other countries, where the laws/rules may be different.  A complete compilation is too much for this proposal's resources, but perhaps a mention of extending the database at a future time to include more data (e.g., from other countries or non-English Journals) would show that your effort can be the start of something even more extensive, and valuable. 

Commenting is closed.

Completion of SID - A Database for Tracking of Spine Surgery Patient Interactions and Outcomes

Proposal Status: 

Rationale

Over the past five years, Dr. Shane Burch, a UCSF spine surgeon, has developed a relational FileMaker Pro database solution (SID) for tracking all patients treated at the UCSF Spine Center.  The system tracks patient reported outcomes at clinical visits for both surgical and non-surgical patients over time, using hundreds of variables to enable both prospective and retrospective research projects. In 2011, over 25,000 patient reported outcome surveys were collected, and SID is currently being used in an ongoing double blind prospective randomized FDA trial being run at the Spine Center. To our knowledge, this represents one of the best data collection tools, with the highest collection rates, on campus. The database solution also allows administrative data (cost) to be linked to patient reported outcomes, enabling economic analyses such as cost minimization. This tool has greatly reduced our outcomes error rate and has helped us increase our outcomes collection rate to ~98% of eligible patients.

 

While the database is already proving to be very useful, there is still a significant amount of development work required to make the database integrate efficiently into a clinic setting with minimal research support.  We propose to hire a FileMaker Pro specialist to help us complete the development of this database, which will allow for additional functionality and the expansion of the spine database to other UC campuses.

 

While the structure we've developed is specifically for the collection of spine surgery outcomes, this model could easily be adapted to other disciplines.  The true value of this model is in the ability to tie together diagnoses, procedures, and outcomes and produce very simple queries that are easily understandable to lay personnel.   

 

Plan

Identify and hire a FileMaker Pro consultant/developer to complete the following functions:

  1. Develop and improve screen layouts for the use of SID on both stationary and mobile devices to expand the collection of patient reported outcomes.
  2. Develop an import function to merge outcome data collected via REDCap (a minimal sketch of the export step follows this list).
  3. Develop additional reporting and export functions to meet departmental needs such as cost analysis reporting, quality assurance reporting and accurate readmission rate reporting.
  4. Integrate data from a less versatile legacy database so that data from as far back as 2002 can be utilized through SID.
  5. Improve the tracking of patients for prospective studies by expanding the “prospective study” feature:
    1. Patient preferences for completing outcomes (e.g., email, iPad, paper)
    2. Notifications to research staff triggering patient follow-up (collecting outcomes on patients who don’t return to clinic) to reduce the ‘lost to follow up’ error rate 
    3. Improve the reporting of complications to meet FDA or other societal requirements
  6. Roll out to other UC spine groups.
  7. Roll out to other UC non-spine groups.
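REDCap provides a standard record-export API; the sketch below shows one plausible shape for the export half of item 2 (the URL and token are placeholders, and the staging step is an assumption):

    import requests

    REDCAP_URL = "https://redcap.example.ucsf.edu/api/"  # placeholder endpoint
    API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"             # placeholder project token

    # Export all records as flat JSON via REDCap's record-export API.
    payload = {
        "token": API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
    }
    response = requests.post(REDCAP_URL, data=payload, timeout=30)
    response.raise_for_status()
    records = response.json()

    # The records could then be written to a staging file that a
    # FileMaker Pro import script reads into SID.
    print(f"Exported {len(records)} outcome records for import into SID")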

 

Criteria and Metrics for Success

  1. Completion of all plans by 12/31/13 to include a single database containing all surgical and outcomes data from prior databases.
  2. An effortless “push of a button” function to import data from REDCap, and queries to produce data reports for: outcomes by diagnosis, outcomes by procedure, cost, complication, and follow up reporting.
  3. Distribution of SID as a turnkey solution for other spine clinics throughout the UC system.

 

Approximate Cost and Very Brief Justification

Total Budget: $49,500

We estimate that this project can be finished by a developer in 300 hours at a rate of $165/hour.

 

Collaborators

Dr. Shane Burch – Assistant Professor in Residence, UCSF Department of Orthopaedic Surgery, is an orthopedic surgeon who specializes in the treatment of spinal deformity, degenerative spine conditions and cancer involving the spine.

Linda Racine – Staff Research Associate II, UCSF Department of Orthopaedic Surgery, oversees the collection of outcomes and the management of multiple departmental databases.

Comments

Though interesting, this proposal is focused on advancing a particular area of research. To be eligible, projects need to show generalizability and scalability in advancing the conduct of translational research. You're welcome to amend the proposal to address these issues, or to decide whether it's a match for this funding mechanism.

The beauty of this type of database is the ease with which it can be queried and its potential portability to other disciplines.  I've now stressed those points in my proposal.

This is an outstanding proposal. One possibility to consider is transitioning to a cloud-based database, rather than continuing with FileMaker Pro.  A major advantage is that if your database is located in the cloud, it would be much more accessible to other investigators at UCSF and elsewhere.  UCSF has been using cloud-based databases, particularly Salesforce-based databases. Using a cloud-based, HIPAA compliant database would be useful when it comes time to distribute to other clinics in the UC system. Have you explored the possibility of transitioning to a Salesforce-based system rather than continuing with a FileMaker Pro database?

Commenting is closed.

Partnering with Patients: Novel Data Sources for Comparative Effectiveness Research

Proposal Status: 

Rationale

The Randomized Controlled Trial (RCT) has long been the gold standard for proving efficacy.  However, RCTs suffer from lack of generalizability to the broad population, not merely because of strict patient inclusion/exclusion criteria, but because patients in these trials generally receive closer care than patients in the “real world.”  Observational and Phase IV studies were created to capture additional data in “real world” populations; however, they were not designed to elucidate the comparative effectiveness of different treatments.  As a result, there have been limited tools to conduct medium to long-term comparative effectiveness research (CER) in real populations. However, with the concurrent evolution of electronic health tools and the concept of Patient Reported Outcomes (PRO) espoused by such organizations as the NIH and the Patient Centered Outcomes Research Institute, unprecedented opportunities in CER have emerged.  Orthopaedic surgery, including spine surgery and arthroplasty, represents a compelling arena for implementing pilot research modalities for several reasons.  First, a significant care gap continues to exist following elective orthopaedic interventions.  For example, in spine surgery, laminectomy and spinal fusion is the procedure with the highest costs associated with potentially preventable readmission, and among the highest rates of readmission.1 The Institute of Medicine, in a 2009 report, targeted spine surgery as a top priority for CER.2,3  Readmission for arthroplasty is also highly variable between and within centers, and an evidence-based approach to reducing readmissions would have significant value in improving quality of care for elective joint replacement procedures. Despite this spotlight, there remain no widely accepted modalities for developing or implementing PRO measures in CER.4  The purpose of this proposal is to implement a mechanism for structured follow-up of patients after discharge to capture data and detect potential complications early, at a stage that may permit treatment without readmission.

 

UCSF’s Department of Orthopedic Surgery has made significant strides in advancing the quality of patient care and safety following spinal surgery and arthroplasty, receiving the 2012 designations as a Center of Excellence for both services.  However, we continue to seek opportunities to innovate in this area and to become a national model of excellence.  The department has identified an innovative technology that can be scaled and generalized to all surgical specialties, that can open up a new arena in clinical research, and that is “shovel ready,” already in clinical use in several Bay Area orthopedic practices.  The tool captures rich data on signs/symptoms, health outcomes, and quality of life by engaging patients post-discharge, and allowing them to participate in their own outcome reporting. The specific aim of this proposal is to implement a pilot project that extends care beyond discharge by engaging patients in self-reporting of their health status at regular intervals during their own recovery.  Information from the resulting dataset may identify early signs of complication that will enable treatment in an outpatient setting rather than as a readmission.  The infrastructure for patient follow-up will begin with spine and arthroplasty services, and extend to surgical procedures throughout the UCSF campus.

 

Plan

We propose a pilot study using a novel HIPAA compliant SaaS platform called HealthLoop that is presently being piloted for clinical purposes by a large health plan.  Through a library of configurable electronic follow-up protocols, orthopaedic surgery patients will be sent recurring automated email check-ins from their physicians after discharge, with structured queries pertaining to their surgery.  A research database and a clinical dashboard will capture structured signs/symptoms, health outcomes, functional status, and impending complications using validated PRO tools. These will enable researchers to analyze a continuum of data throughout recovery and clinicians to intervene at any sign of impending complication.
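HealthLoop's protocol format is proprietary and not described in this proposal; purely as an illustration of what a configurable post-discharge check-in protocol could look like (all content below is invented):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class CheckIn:
        """One automated post-discharge email check-in (illustrative only)."""
        day_after_discharge: int
        questions: List[str]

    # Invented schedule for a spine surgery patient.
    spine_protocol = [
        CheckIn(3,  ["Rate your pain from 0 to 10", "Any fever or wound drainage?"]),
        CheckIn(14, ["Rate your pain from 0 to 10", "Can you walk one block without stopping?"]),
        CheckIn(42, ["Validated PRO instrument items would be scheduled here"]),
    ]

    for checkin in spine_protocol:
        print(f"Day {checkin.day_after_discharge}: {len(checkin.questions)} structured queries")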

 

Criteria and Metrics for Success

Metrics for success will include: Identification of practice patterns and/or follow up regimens that lead to superior outcomes; Reduction in complication rates; Reduction in ED visit and readmission rates; Reduction in total costs; Improved functional outcomes.

 

Approximate Cost and Justification

$50,000 is required for UC customization of clinical templates, provider training, implementation of the HealthLoop platform, execution of the pilot, as well as data analysis and reporting.

 

Collaborators

University of California Spine Surgery Consortium

Clinical and research staff at HealthLoop (Mountain View, CA) 

References

  1. Qasim M, Andrews RM. 2009. HCUP Statistical Brief #142. Sept 2012. AHRQ, Rockville, MD. http://www.hcup-us.ahrq.gov/reports/statbriefs/sb142.pdf.
  2. IOM (Institute of Medicine). Initial National Priorities for Comparative Effectiveness Research. Washington, DC: The National Academies Press. 2009.
  3. Numerof, Rita. Comparative Effectiveness Research in Spine Care in Defining the Value of Spine Care, Jeffrey A. Rihn, M.D., Alexander R. Vaccaro M.D., Ph.D., Todd J. Albert, M.D. and David B. Nash, M.D., editors, 2012.
  4. Basch et al, JCO December 1, 2012 vol. 30 no. 34 4249-4255.

Comments

Though interesting, this proposal is focused on advancing a particular area of research. To be eligible projects need to show generalizability and scalability in advancing the conduct of translational research. You're welcome to amend the proposal to address how the solution is generalizable or decide its a mismatch for the funding mechanism.

We have amended the proposal for review.

Commenting is closed.

Assessment of Participant Health Literacy by Clinical Research Coordinators

Proposal Status: 

Rationale. Successful recruitment and retention of ethnically diverse research participants in clinical studies depends heavily on the comprehension, needs, and preferences of potential participants.  A critical influence on these outcomes is health literacy, the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.[i]  Since 30% of US adults have only basic health literacy,[ii] and Hispanic and African Americans are disproportionately represented among those with low health literacy,[iii] representation of these populations in clinical studies depends in part on the extent to which clinic and research staff can recognize patients of low or moderate literacy and communicate complex concepts to them in accessible and relevant terms. Tools that measure health literacy (commonly written questions that assess reading ability) have been criticized when used in clinical settings as alienating and stigmatizing to low-literacy patients.[iv]  A recent review of health literacy research by the US Agency for Healthcare Research and Quality concluded that development of measures for spoken health literacy is a high priority.[v]

The proposed pilot study builds on a recently completed NCI R01, Increasing Participation in Cancer Clinical Trials (2007-2012), that was a collaboration between the Kaiser Permanente (KP) Division of Research (DOR, C. Somkin, PI) and UCSF (R. Pasick, Co-I).  From in-depth analyses of 38 recorded nurse-patient clinical trial conversations, we elucidated 6 dimensions of patient utterances that we call “clinical trial health literacy” (CTHL), conversational indicators of a patient’s ability to understand basic concepts of trial participation: grammar, use of medical terms, knowledge of diagnosis and treatment, logic, initial clinical trial understanding, and information seeking.  Our data show that these are closely inter-related and together serve as a strong indicator of health literacy.  Our long-term research plan includes development of a protocol for real-time assessment of these dimensions by Clinical Research Coordinators (CRCs) and for corresponding messages tailored to high, medium, and low CTHL. This will be followed by a large-scale KP- and UCSF-based mixed-methods study of protocol impact for participants (comprehension, satisfaction, needs fulfillment, decisional conflict), for CRCs (satisfaction, perceived efficacy), and on rates of clinical research study participation.

Plan. The purpose of this RAP grant is to conduct a feasibility test of CRC assessment of CTHL dimensions in the course of clinical research study recruitment conversations, and to further refine the dimensions for subsequent validation analyses.  Our specific aims are to: 1. establish a partnership between UCSF CTSI Community Engagement Program faculty (Pasick), DOR (Somkin), and the UCSF CTSI Participant Recruitment Services (PRS, Nasser); 2. refine and pilot-test a CTHL screening tool for (a) ease of use by Clinical Research Coordinators and (b) reliability in recruitment attempts with 25 UCSF patients; 3. develop and pre-test a training protocol on CTHL assessment as one component of a new CRC curriculum on health literacy with 10-15 CRCs enrolled in the CTSI-initiated campus-wide CRC training program. Two focus groups will be conducted with CRCs at the outset to inform the design of the feasibility test.  CRCs conducting the health literacy assessments will be interviewed following each encounter, and patients will be queried briefly afterward as well.

Criteria & Metrics for Success.  Long-term measures of success will include sustained collaboration among the above partners leading to incorporation of pilot study products in ongoing UCSF and KP CRC training, acquisition of NIH funding, and increased rates of participation in studies among those of low to moderate CTHL.  Intermediate measures are demonstration of the feasibility of CRC assessments as indicated by completion of 25 audio-recorded patient encounters that produce high ratings of satisfaction and comprehension by patients and of satisfaction by CRCs; a high degree of inter-rater reliability on encounter recordings rated by multiple CRCs; and training protocol pre-test results indicating that CRCs find the information and strategies useful, new for them, and easy to adopt.  
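Because one intermediate success measure is inter-rater reliability on recordings rated by multiple CRCs, a chance-corrected agreement statistic such as Cohen's kappa is the natural choice for the two-rater case. The sketch below, using invented ratings purely for illustration, shows the computation with scikit-learn; for more than two raters, Fleiss' kappa would be the analogous statistic.

    # Minimal sketch: agreement between two CRCs assigning high/medium/low CTHL
    # ratings to the same recorded encounters. The ratings are invented examples.
    from sklearn.metrics import cohen_kappa_score

    crc_a = ["high", "low", "medium", "low", "high", "medium", "low", "high"]
    crc_b = ["high", "low", "medium", "medium", "high", "medium", "low", "high"]

    kappa = cohen_kappa_score(crc_a, crc_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance level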

Approximate Cost/Brief Justification.  The project will be led by Dr. Pasick (5%) and Ms. Nasser (contributed, 5%) with project coordination by Ms. Allen (recipient of NCI Diversity Supplement to conduct the above NCI-funded Kaiser-based research, now a UCSF Research Analyst, 30%).  Dr. Somkin will represent KP needs and interests as a consultant ($5,000).  Patient incentives for study participation will be provided at $35 each ($875). PRS staff will devote 340 hours ($13,600) for focus groups, interviews, audio tape ratings, and training. Total cost: $46,336.  

Collaborators.  This study is a collaboration among Dr. Rena Pasick (Professor, Faculty of CTSI Community Engagement Program, with 25 years of NIH-funded research on cancer disparities and communication across cultures, including studies that involved training of service staff on health literacy); Ms. Nasser (Senior Director, CTSI Clinical Research Services); and Dr. Somkin (Research Scientist, DOR).


[i] Institute of Medicine Committee on Health Literacy. (2004). Health Literacy: A Prescription to End Confusion. L. Nielsen-Bohlman, A.M. Panzer, & D.A. Kindig (Eds). Washington, DC: The National Academies Press.

[ii] Paasche-Orlow M, Parker, RM, Gazmararian, JA, Nielsen-Bohlman, LT, & Rudd, RR. (2005). The prevalence of limited health literacy. JGIM, 20, 175-184.

[iii] National Center for Education Statistics. National Assessment of Adult Literacy. Key Findings.  http://nces.ed.gov/NAAL/kf_dem_race.asp.  Accessed February 25, 2013.

[iv] IOM (Institute of Medicine). 2009. Measures of Health Literacy: Workshop Summary. Washington, DC: The National Academies Press.

[v] Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Viera A, Crotty K, Holland A, Brasure M, Lohr KN, Harden E, Tant E, Wallace I, Viswanathan M. Health Literacy Interventions and Outcomes: An Updated Systematic Review. Evidence Report/Technology Assessment No. 199. (Prepared by RTI International–University of North Carolina Evidence-based Practice Center under Contract No. 290-2007-10056-I.) AHRQ Publication No. 11-E006. Rockville, MD: Agency for Healthcare Research and Quality; March 2011.

Comments

Given the known worse health outcomes among those with limited health literacy, it is critical to increase participation of limited health literacy populations in clinical trials. This formative work is a necessary step in that direction.

This is an important area of recruitment that is too often neglected. Staff training in ethics and literacy related to recruitment and understanding of research is critical. This proposal could fit well into the planned campus-wide CRC training program and could further impact recruitment rates through application of principles.

Commenting is closed.

Governance of Extant UC Biorepositories

Proposal Status: 

Rationale:  Each UC biomedical campus contains tens to hundreds of biorepositories. These operations collect human biological samples (tissues and fluids) and associated data for use in research. UC biobanks traditionally have established their own governance structures, which include rules for accessing, storing, and sharing samples/data, including informed consent practices.  Governance, a complicated process, has myriad ethical implications, including risk to individuals, identifiability, and data sharing. At UC and elsewhere, archived samples were not collected under current/emerging standards for informed consent. Decisions need to be made about the appropriate future use of these samples and data, especially in light of recent concerns about the identifiability of “anonymous” genomic samples. Currently, there is little information about community perspectives on appropriate governance of extant biorepositories within UC. [Collections include samples that were initially collected as part of clinical care as well as collections made under research protocols.]

 

Plan

 

We will leverage an existing NIH-funded study, EngageUC; this study will develop an ethical, efficient, and sustainable system for biorepository research across the UC system. EngageUC includes a robust community engagement (CE) component in which community members who represent the diversity of California will be brought together to be educated about biobanking and to provide informed viewpoints on optimal consenting techniques. This input will then be used to inform a clinical trial of consenting mechanisms to develop an evidence base for prospective collection of biorepository samples. However, EngageUC is focused specifically on establishing policies for informed consent and governance for NEW collections moving forward, and does not examine how samples and data in existing biorepositories – including samples with varying forms of informed consent – should be governed.

 

This proposal leverages the EngageUC CE activities to develop and refine approaches for managing governance of extant biorepositories in the UC system. This project will: (a) extend planned CE activities by including an array of additional stakeholder groups who bring valuable insights on extant biorepositories; (b) work with community stakeholders, biorepository researchers, and UC institutional officials to develop consistent and comprehensive approaches for managing governance of UC’s extant biobanks; and (c) translate these approaches into policies that are feasible and acceptable for governance of existing biorepositories.

 

Criteria and metrics for success

 

Short-term criteria for success will include conversations with diverse stakeholder groups to elicit perspectives about governance of extant biobanks, and the recommendation of alternative governance approaches for adoption by the UC BRAID consortium, including how specimens will be stored, labeled, accessed, and shared, and whether and how current perspectives on consent will be incorporated.  In the long term, we anticipate translation of these approaches into UC policy.

 

Approximate cost and justification

 

The primary costs of the study will be salary support and administrative costs.  We require 15% effort for the program manager and 10% effort for one faculty member as well as RA support.  Administrative costs will be those associated with setting up meetings between community groups, researchers, and institutional officials.  We are requesting $50,000 total.

 

Collaborators

 

EngageUC team leaders bring expertise in biorepository research and management, informed consent research, community engagement, and ethical dimensions of informed consent. The team also has experience in mixed-methods research and the translation of research into policy. Jen Hult, MPH, is Senior Program Manager of EngageUC. Elizabeth Boyd, PhD, is Associate Vice Chancellor, Ethics and Compliance at UCSF and Program Director of the CTSI’s Regulatory Knowledge and Support program. Daniel Dohan, PhD, is Associate Professor of Health Policy and Social Medicine at the UCSF Institute for Health Policy Studies (IHPS), where he also serves as Associate Director for Training and Development. Sarah Dry, MD, is Associate Professor and Associate Chair for Research Services in the UCLA Department of Pathology. Arleen Brown, MD, PhD, is Associate Professor of Internal Medicine and Health Services Research and Leader of the UCLA CTSI Community Engagement and Research Program. Barbara Koenig, PhD, is Professor of Social and Behavioral Sciences at UCSF. An internationally renowned bioethicist, Dr. Koenig brings extensive experience in research on biobanking and community engagement and will be the primary faculty member involved in this project.

Comments

Hi Jennifer - I have also submitted a proposal that requires info gathering on the large number of biobanks at UCSF.  Let's see if there are some synergies for our two proposals.

Hi Julie - yes, we should talk!  I would also be interested in seeing the results of the Tissue Task Force and the tissue bank audit if it's possible to share those.

Commenting is closed.

Information Interface for Patients, Clinicians and Researchers

Proposal Status: 

Rationale: Taking findings from basic research to practical applications that enhance human health and well-being is the hallmark of translational research. Improvement in human health and well-being, however, does not necessarily involve the cure of a disease. Sometimes mere information transfer might help patients substantially in dealing with their medical condition. This is particularly true for patients with a family history of conditions that remain undiagnosed or carry contradictory diagnoses – in other words, for patients with rare hereditary diseases. This point was emphasized very recently by Erika C. Hayden in Nature [1]. Providing information is the foundation of molecular genetic diagnostics and genetic counseling, which should readily be available for any inherited disease, even if no cure will be available in the foreseeable future. The provided information can answer basic questions of the patient, such as: What do I have? Why my family? Is there a risk for my children? Is there anything I can do about it? Answering these questions will assist patients in understanding their medical condition.

COL4A1 (OMIM: *120130) and COL4A2 (OMIM: *120090) encode extracellular matrix proteins that constitute basement membranes. Mutations in COL4A1 and COL4A2 cause multi-system disorders including porencephaly, cerebral small vessel disease with hemorrhage, Axenfeld-Rieger anomaly with glaucoma, and variable muscular dystrophy including muscle-eye-brain disease. To date, more than 50 different pathogenic sequence variants have been described in highly penetrant multi-system disorders. For intracerebral hemorrhage (ICH) – the only disease so far that has been systematically screened for COL4A1 and COL4A2 sequence variants – we estimate that mutations in COL4A1 and COL4A2 may cause up to 10% of all cases of spontaneous ICH. The Gould lab identified the first COL4A1 mutations in mice and humans and recently published a comprehensive review on the topic. For these and other reasons, the Gould lab receives inquiries from researchers, clinicians, and patients throughout the world interested in collaborations, clinical advice, or participation in research. We recognize that many other labs at UCSF and around the world face these same types of requests. We propose that developing a centralized information management system (IMS) to promote and streamline interactions and communication between these three stakeholders will be an efficient and effective paradigm to enhance translational research.

Plan:

Based upon the platform developed for the Leiden Open Variation Database (LOVD), we will generate a centralized information management system (IMS) that streamlines the interchange of information between patients, clinicians, and researchers. The IMS will be tripartite in order to provide the relevant information to each target audience. We will develop the platform using COL4A1 and COL4A2 as a scalable model for any number of other groups; a minimal sketch of the kind of variant record such a system would curate follows the list below.

  1. Patients will find a comprehensive overview of the different disorders associated with COL4A1 and COL4A2, as well as a platform to connect with medical doctors, researchers, and genetic counselors especially trained in these medical conditions.
  2. Medical doctors and genetic counselors will find a locus-specific database that collects all sequence variants in COL4A1 and COL4A2, with links to the reference for their initial description, the latest literature on COL4A1 and COL4A2, and information on ongoing patient studies.
  3. Researchers will maintain the IMS and thereby gain a useful research dataset for molecular diagnosis, large-scale mutation statistics, and the determination of genotype-phenotype correlations. Research will also benefit from a collaboration platform, which promotes discussion in the field. Researchers will also identify patients willing to donate biological material for further investigations of multi-system disorders.
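As a minimal sketch of the kind of locus-specific record the IMS would curate, consider the Python data structure below. The field names loosely follow the shape of an LOVD entry but are our assumptions rather than the actual LOVD schema, and the example values are invented, not curated variants.

    # Hypothetical locus-specific variant record, loosely modeled on the fields
    # an LOVD entry carries. Field names and example values are illustrative
    # assumptions only, not the actual LOVD schema.
    from dataclasses import dataclass

    @dataclass
    class VariantRecord:
        gene: str            # "COL4A1" or "COL4A2"
        dna_change: str      # HGVS cDNA description
        protein_change: str  # HGVS protein description
        phenotype: str       # associated disorder(s)
        reference: str       # citation for the initial description
        classification: str  # curator's pathogenicity call

    example = VariantRecord(
        gene="COL4A1",
        dna_change="c.2263G>A",      # invented example, not a curated variant
        protein_change="p.Gly755Arg",
        phenotype="porencephaly",
        reference="PMID:XXXXXXXX",   # placeholder citation
        classification="pathogenic",
    )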

 

Criteria and Metrics for Success:

Immediate goals:

  1. Set up a locus-specific database for COL4A1 and COL4A2 based on the format used for the Leiden Open Variation Database (June 1, 2013).
  2. Set up a literature database on COL4A1 and COL4A2 based on the Mendeley open-source reference manager (June 1, 2013).
  3. Write a comprehensive overview of the different disorders associated with COL4A1 and COL4A2 and update all corresponding NCBI datasets (July 1, 2013).
  4. Collect information on MDs and genetic counselors who are especially trained in medical conditions associated with COL4A1 and COL4A2, based on information provided by experts and databases such as GeneTests (July 1, 2013).
  5. Create an interactive web interface that links all the different resources (July 1, 2013).
  6. Go live by August 1, 2013.

 

Mid-term goals:

  1. The IMS will increase clinicians’ awareness of COL4A1 and COL4A2 as a possible cause of multi-system disorders, and clinicians will find the resources and information necessary to help make decisions for their patients.
  2. Interested patients with multi-system disorders can arrange to donate biological samples, such as DNA or skin biopsies, to researchers listed on the IMS in order to promote research on multi-system disorders.
  3. The IMS can be a model for other inherited diseases and may lay the groundwork for an extended IMS that encompasses other genes and diseases.

 

Long-term goals:

  1. Once the platform is established, we will use it as the model and continue to build this resource for other inherited diseases of the extracellular matrix.
  2. Establish this platform as a template for other UCSF researchers and clinicians to interface with each other and with patients to promote information transfer and patient recruitment.
  3. In summary, this proposal will improve the conduct of research on devastating diseases here at UCSF and worldwide.

 

Total Budget: $22,000

Cost estimations:

Summer student: $4,000/mo for 3 months = $12,000

Web Developer: $50/hr for 1 month (160 hours) = $8,000

Postdoctoral Fellow to oversee project: 15% effort for 3 months = $2,000

 

References:

1. Hayden EC. Data barriers limit genetic diagnosis. Nature 2013;494(7436):156-157.

Comments

Hi,

I am a librarian at UCSF and wondered if we could provide any help to you in developing a search methodology to ensure that the latest material about your topic is discovered and collected for use in your IMS.

Let me know if we can be of any assistance.

Whit

Whit, we greatly appreciate any input. At present, several members of the lab use appropriate search terms with services such as PubCrawler or Journal Lab to identify newly available information. However, we are always open to other search methodologies that may be useful. Aims of our proposal also include the implementation of a shared publications database using Mendeley and the optimization of the “online presence” of the relevant diseases.

This is a very interesting idea that addresses a vitally important need, especially given how close we are to interrogating patients' genomes as a routine part of clinical care.  Could you explain a bit more about how you see patients using this platform?  Will MDs and genetic counselors volunteer their time to answer questions raised by patients and family members?  Will patients have the opportunity to submit samples for genomic analysis?  And if they already have their exome or genome in hand, will they be able to get help interpreting it?

 Thank you Dan,

We essentially aim to curate, streamline, and optimize information on the relevant diseases that is either missing or already present in disparate sources (including OMIM, Wikipedia, GeneReviews, GeneCards, and GeneTests), and to provide patients, physicians, and researchers a central gateway to this information. As a result, search engines will rank the relevant web pages higher and patients can access the relevant information more easily. Patients can link to GeneTests, where they may locate centers that already perform genetic testing and provide genetic counseling (and these would be updated as others come on board). Patients interested in submitting samples for research purposes would have the opportunity to link out to our lab page (or others) that accept samples for research purposes. Presently it is impractical to offer interpretation of individuals’ genome or exome information in such an interface, but one may imagine this being a place to find such information in the future.

Interesting proposal.  This is one, though, where it will be key to understand what the landscape looks like.  For example, some kind of service such as the one you propose is part of 23andMe's offerings - it needs to be, since they are in the business of providing people with genomic information.  They keep their customers coming back by providing all kinds of nuanced information, new relevant publications, expert clinical and scientific opinion, etc.  They are of course a private company, and I couldn't find any indication that they provide this generally to people.  I agree that a more public service would be valuable.

 

But the other piece that needs to be addressed is just how this effort scales. At this time it reads very effort-heavy and is hard to imagine it scaling. It is a pilot, so in this first phase may be fine, but it would be best if you could share how you're thinking a scaled version might be implemented.  So thinking through answers to Dan's questions will help.  In addition, will be useful to review (just maybe list) some of the existing services providing at least components of the initiative you propose.  Then you can describe how you'd stitch together already existing resources and add the components that are missing.  For example, there are resources such as this one  http://www.ncbi.nlm.nih.gov/books/NBK7046/  - could you re-use content? perhaps a partnership strategy (don't have to implement now, but would plan for it for the scaled version). Other groups to think about include 'patients like me' (I don't know if they do anything in the disease risk space), or even large foundations.   

Thank you for the comment. We would hope that this becomes a kind of portal for all to use, and not only those who pay for a commercial service. You raised very good points about scalability. We propose that our effort will serve as an example. As part of the scalability, we will provide a summary to CTSI and other members of the UCSF campus about our experience, along with a step-by-step protocol on how to improve the “digital presence” of disease-based research using similar methods. This information would include a checklist of suggested hosted databases to update and link. If there is interest, we can also share our knowledge of the LOVD, and as a long-term goal, if many other labs want to put their data into this repository for genetic variants, one could spearhead a common LOVD database for UCSF.

This is an interesting idea.  I agree that it may be difficult to scale.  My main concern is that much of this information is redundant; it is simply scattered across the internet right now and needs to be centralized and augmented with contact info for the relevant researchers/clinicians.  Can algorithms be devised to eliminate much of the effort for you?  If so, how would one possibly do it all, disease by disease and mutation by mutation?  OMIM has kind of done this, although it is written by scientists for scientists, with no possibility for interaction.  Will this be directed toward patients?  If so, can it be built on top of something like OMIM?  And how would you exclude people whose personal genomes/exomes carry polymorphisms that are not causative for a given gene from clogging up the system?

One of the points of doing this is precisely to develop and maintain a portal for information that is currently scattered across the internet and to reduce redundancy by tying it all together. The goal is to improve knowledge and information transfer by making it easier to access. This will be accomplished in part by maintaining an updated portal. An important part of this, of course, is also maintaining links to other sources of user-generated content (e.g. OMIM, WikiGenes). Part of this effort will also be to update content on other sites as the need is identified. If there are algorithms that can automate this, we are happy to employ them - although we presume the initial setup and curation would fall to the portal hosts. (Please see the next response regarding your point about scalability.)

A centralized information management service (IMS) would be a useful service for patients and physicians.  It would be useful to know the frequency of symptomatic mutations in this gene to get an idea of the clinical relevance of identifying allelic variants.  If the frequency of allelic variants in an asymptomatic population is low, and clinical presentation is common for any one manifestation, then screening larger populations may be useful, and informing patients with allelic variants may have value.  It is difficult to estimate the value of an IMS in the absence of more information on allelic variant frequency and penetrance.

Thank you for the comment, Sigurd; we agree. Presently, this information is available but not in a centrally located and curated location. For example, we recently sequenced 96 patients with intracerebral hemorrhages for both COL4A1 and COL4A2, but these data are found in the supplemental data of the manuscript, which even collaborators overlook. This type of information can all be made easily accessible in one web interface using the format of the Leiden Open Variation Database (LOVD) as our platform.

Commenting is closed.

Establishment of a centralized freezer surveillance system to increase protection of human biospecimens

Proposal Status: 

Rationale: There are currently about 1,000 mechanical −80 degree Celsius freezers in use at UCSF. The majority of these freezers are used to store biomedical specimens for basic research, translational research, clinical trials, and prospective biobanking efforts.

To protect clinical research material stored in these freezers, freezer performance has to be monitored 24/7, and alarm information systems need to be in place to inform owners about freezer malfunctions. Currently a central freezer monitoring service is offered through the Alarm Management program of the UCSF police department; the system is based on connecting freezers via a phone line to the police department. This monitoring model has certain disadvantages: it relies on the temperature gauge of the freezer itself (instead of an independent measurement), and it only triggers an alarm, without generating ongoing temperature/performance reports.

Protection of stored research biospecimens and human samples derived from clinical trials will be increased if a UCSF-wide monitoring system is made available to laboratories and investigators that provides alarm surveillance along with continuous temperature and energy reporting.  Such a system will also guide preventative maintenance decisions by recording changes in freezer energy consumption over time, further enhancing the protection of stored material by reducing instances of unexpected freezer malfunction and extending the life of the freezer asset.

 

Plan: Pilot implementation and testing of a remote freezer monitoring/surveillance system enabling investigators/laboratories to connect their freezers, regardless of campus location, to an alarm/monitoring system that provides instant emergency alarm messages and continuous reports on temperature and energy consumption.
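To make the intended behavior concrete, here is a minimal sketch of the monitoring loop such a system implements, assuming one independent temperature probe per freezer polled over WiFi. read_probe() and send_alert() are hypothetical stand-ins rather than the vendor's actual interfaces, and the threshold and polling interval are illustrative.

    # Minimal sketch of the monitoring logic: poll an independent probe per
    # freezer, log every reading for energy/maintenance trending, and alert on
    # threshold breaches. read_probe() and send_alert() are hypothetical
    # stand-ins, not the TRAXX API; threshold and interval are illustrative.
    import time

    ALARM_THRESHOLD_C = -70.0  # alert if a -80 C freezer warms past this point
    POLL_INTERVAL_S = 300      # check every 5 minutes

    def monitor(freezer_ids, read_probe, send_alert, log):
        while True:
            for fid in freezer_ids:
                temp_c, watts = read_probe(fid)  # independent measurement, not
                                                 # the freezer's own gauge
                log(fid, time.time(), temp_c, watts)
                if temp_c > ALARM_THRESHOLD_C:
                    send_alert(fid, f"Freezer {fid} at {temp_c:.1f} C "
                                    f"(threshold {ALARM_THRESHOLD_C} C)")
            time.sleep(POLL_INTERVAL_S)

It is the continuous log, rather than a bare alarm, that enables the energy-consumption trending and preventative-maintenance decisions described above.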

Criteria and metrics for success:

  • Usability at all campus locations with WiFi access
  • Reliability of the system in recognizing alarm situations and generating alarm messages
  • Reliability in generating data reports (energy consumption) as a guide for required preventative maintenance (further potential for energy-saving maintenance and repairs)

Approximate costs: A system providing such services (Klatu Networks' TRAXX) is currently being tested by the Facilities Management Group at UC San Diego. This application intends to leverage the ongoing UC San Diego evaluation to test whether the system is suitable for UCSF and also to analyze whether this system could be recommended as a standard for use by all UCs. The estimated cost of establishing a pilot study at UCSF, including surveillance and monitoring of 50 freezers, is $50,000.

Additionally, implementing a technology that reports on freezer energy consumption and that triggers maintenance and repairs could have a huge financial impact on UCSF’s bottom line (each low-temperature freezer uses about as much energy in a year as a typical house), and part of these costs may be reimbursable in the long run if the system is approved by the California Public Utilities Commission as a viable energy efficiency measure under the UC/CSU/IOU Energy Efficiency Partnership program. The technology is currently under review by the SDG&E Emerging Technologies group, using data from UC San Diego as well as several other SDG&E customers.

 

Collaborators: Gail Lee (UCSF Facilities Services), Munn Maric (UCSF Facilities Services), Anna Levitt (UC San Diego Facilities Services), Jim Sobczyk (UCSF Campus Life Services “Freezer Farm”), Julie Auger (UCSF Research Resource Program), Britt-Marie Ljung (UCSF Pathology and Cancer Center Tissue Core), Hubert Stoppler (UCSF Cancer Center Tissue Core)

Comments

Implementing a system to ensure the integrity of the -80 freezers across the ever-growing UCSF campus recognizes the diverse interests of the various banks across the city of SF. This would be an important tool that many investigators will ultimately benefit from.  Thanks for applying for this resource.

This is definitely needed for the proper monitoring of biospecimens.  A wireless system would be a vast improvement over the current system.

 

This would be a good improvement on the current monitoring system.  Costing (start-up and long-term) would need to be carefully worked out.

Great idea!

Great plan! We are excited to hear how the pilot study goes.

Hubert, do you foresee any training needs to be built in for SRA or lab tech staff with this system? I ask because many techs manage freezer alarms now, and I am curious about how we will communicate a possible change effectively to our -80 owners and staff.

Courtney, I believe that training time for individual labs/biorepositories to use the system will be minimal and would not need to be calculated as an additional expense. I agree that if something like this is established, it will be very important to make every potential user aware that they can join an existing service offering.

UCSF's financial, personnel, and resource investment in the collection of patient biospecimens for research has been substantial and has allowed for myriad discoveries in many fields. However, these resources are vulnerable to complete destruction if not stored and monitored properly.  The proposal put forth by Dr. Stoppler has the potential to significantly strengthen our ability to monitor and protect these resources during storage.  Therefore I strongly support this proposal.

I agree this could be a useful addition to and/or replacement for existing freezer monitoring systems, and the possibility of standardizing this across the multitude of UCSF tissue repositories could improve the security of these precious research resources. This pilot study will offer the opportunity to evaluate the reliability and scalability of this approach to distributed freezer monitoring via wifi. As a point of comparison, the adult hematologic malignancies tissue bank is currently using a wireless freezer alarm system from Accsense. The Accsense system has proven to be somewhat frustrating for a variety of reasons, and it is quite costly. The per-freezer cost for using the Accsense system is ~$1000-2500 depending on several factors, including the proximity of the freezers to one another, and to a required internet-attached gateway, so this proposal by Hubert is definitely price-competitive.

This would also significantly improve our ability to demonstrate reliable temporary storage capacity for clinical trial specimens for industry-sponsored trials.  The lack of such monitoring and surveillance on our freezers has been a limitation in some of our site qualification assessments by potential industry sponsors. 

This is a great idea at a number of levels (sample protection, energy use, cost)...

This freezer monitoring system would be a huge asset to anyone who needs to ensure specimen integrity at UCSF.

This idea is fantastic; in fact, it seems downright bizarre that it does not already exist! Our lab stores numerous human samples and we will definitely appreciate the implementation of this freezer system.

Indeed, it seems like this should have been in place years ago. Great idea!

This sounds mission-critical for the long-term success of our clinical/translational research programs. Creating such a centralized system sounds like it could be much more cost-effective than the dept-by-dept or site-by-site ad hoc systems currently in place.  It would be a great opportunity to implement common best practices or standards as well.  As departments and programs continue to be geographically spread out, having a centralized group and system that assists with freezer monitoring regardless of site seems all the more important.  Great idea.

Great idea! Much needed.

It would be great to have a thorough, reliable, and cost-effective system to monitor our freezers from any computer. Currently, establishing a small system in isolation or connecting to the UCPD-based system is slow and expensive. It is essential that we protect the biomedical specimens that study participants have provided.

Commenting is closed.

Evaluation of Methods for Bioinformatics Tools Training

Proposal Status: 

Rationale – Background and training in bioinformatics tools are required for the research community at UCSF to remain at the forefront of biomedical research and successfully compete for funding.  Bioinformatics tools help translate our collective molecular understanding of disease into actionable insights and life-improving patient care innovations.  Application of bioinformatics knowledge can minimize persistent barriers to progress in the translational research workflow by providing a bridge between the domain expertise of the experimentalist and the world of computationally driven, information-based research methods (see Appendix Fig 1).

Currently at UCSF, training on bioinformatics tools is decentralized.  Services, databases, and courses are hosted by Core Labs, the UCSF Library, and other departments.  Centers of data generation are not always integrated with the tools needed to gain insight from the data or to place data in a disease context. Navigating lists of bioinformatics tools can be a daunting task for researchers with no background in bioinformatics (see Appendix for current lists).

This arrangement carries numerous risks:

  • Insufficient information about the best tools and platforms to address a particular research question.
  • Inability to accurately gauge the true investment needed in both time and money to gain full value from an experiment.
  • Poor return on investment from expensive experiments, and missed opportunities due to lack of awareness about tools available to translate raw data into findings and testable hypotheses. 

In keeping with the UCSF Library’s commitment to support the continuing educational needs of information resource users and to extend services to a wider audience of researchers, we propose a pilot project to evaluate web-based methods for bioinformatics training.  The UCSF Library will provide access to a centralized, web-based resource for training on well-adopted, authoritative bioinformatics resources by hosting the OpenHelix collection of bioinformatics tutorials on its website.  This new service would cross-reference with related services (Core Labs, Cores Search, MyCORES, etc.) to provide centralized and coordinated access to bioinformatics tools training at UCSF.

We are aware that there are a number of different groups at UCSF with expertise and interest in bioinformatics training and our aim with this proposal is to work with these groups to identify needs and develop solutions for translational medicine researchers.

Successful implementation of this new bioinformatics training service can:

  • Provide researchers with the skills needed to locate, learn about, and apply the information housed in bioinformatics databases.
  • Enable them to identify the right tool for their particular research question.
  • Improve collaborations between experimentalists and bioinformatics experts at UCSF.
  • Reduce time between data generation and insight by making bioinformatics training and tools available to those who generate data at the time it is generated.
  • Reduce the current burden on Core Labs that may not have time for basic bioinformatics training requests from researchers.
  • Establish an engaged user base through which the UCSF Library and Research Resource Program can continually evaluate services, identify unmet needs for training and resources, and design new services.

Plan

  • Collaborate with the UCSF Institute for Computational Health Sciences, leveraging their expertise in bioinformatics to identify researchers' training needs and develop solutions to meet those needs.
  • Form a Bioinformatics Tools User Group (user base for this pilot)
  • Survey user group to identify bioinformatics tools and training needs
  • Evaluate OpenHelix as an inexpensive, out of the box solution for bioinformatics training
  • Make Go/No Go decision on value of OpenHelix
  • Design and implement library-based home for resources such as OpenHelix:  Bioinformatics @UCSF Library
  • Collaborate with CTSI to create Marketing and Outreach plan for successful adoption of new service
  • Ongoing evaluation of web-based bioinformatics training service and development of a business model for continued support of the service.

Criteria and Metrics for Success

  • Pilot user base includes bioinformatics subject matter experts as well as end users.
  • Good response rates on User-Needs and OpenHelix evaluation surveys
  • Steady increase in hits to library-based bioinformatics website 6 months after launch
  • Steady increase in utilization of OpenHelix tutorials 6 months after launch
  • Good response rate on User Satisfaction survey 1 year after launch of new service.  Metrics identifying most/least utilized bioinformatics tools.
  • Increase in User Satisfaction rating 2 years after launch of new service

Collaborators:

Julie Auger, Executive Director of UCSF Research Resource Program (RRP)

Budget includes funding for:

  • 2-year license to OpenHelix
  • Programming resources to integrate with existing web infrastructure (UCSF Library, Core Labs and Cores Search)
  • Minimal in-kind CTSI support for survey development, marketing and outreach plan
  • External trainers for on-site training on most highly valued bioinformatics tools.

 

APPENDIX 

1. Hyperlinks to current lists of bioinformatics tools

http://gettinggeneticsdone.blogspot.com/2011/06/resources-for-pathway-analysis.html

http://grouthbio.com/Genome_Software_Service.php

http://www.oxfordjournals.org/nar/database/a/

http://www.hsls.pitt.edu/obrc/         

2. Examples of Medical Libraries that host or are evaluating OpenHelix as part of their library-based bioinformatics services

  • Mt. Sinai School of Medicine Levy Library – successful implementation with 5-year renewal and fully booked on-site training classes
  • Becker Medical Library – Wash U
  • University of Illinois
  • Emory – Woodruff Health Sciences
  • University of Pittsburgh
  • Weill Cornell Medical Library

3. Figure 1.  Awareness and training in bioinformatics tools can eliminate persistent roadblocks in the translational research workflow. 

 

 

Comments

This is a great idea!  UCSF and Gladstone are top-notch research institutions, yet in an era when Big Data permeates the research culture, we lack a cohesive plan for building stronger bioinformatics infrastructure and training opportunities.  There is considerable bioinformatics and computational biology talent on campus, and this proposal could help bring together that talent to develop and implement such a plan.  Count me in!

As executive director of the National Resource for Network Biology (NRNB) and member of UCSF Mission Bay campus, I strongly support this proposal. The current state of training for bioinformatics tools and methods is disorganized and would greatly benefit from the proven leadership of UCSF Library's CKM. We have a growing collection of tutorial materials for network biology tools, such as Cytoscape and WikiPathways, as well as local expertise and presenters, which we would contribute to a centralized training program. Let us know *when* this worthy endeavour is funded and launched!

Thanks Alisha and Alex for your comments.  We see the OpenHelix resource as a great way to jump-start this effort and get a critical mass of tutorials available to the UCSF research community quickly, and in a pretty cost-effective way.  The Library-based home for this training resource could then be optimized with links to advanced tutorials from in-house experts, announcements about upcoming training & seminars, etc., which would complement the basic tutorials OpenHelix has for resources such as Cytoscape and GenMAPP.

This is an excellent idea, but for CTSI funding, we have to make sure that the program covers the CTSI National Consortium Education and Career Development Key Function Committee required competencies for bioinformatics. In addition, this effort should be coordinated with the CTSI Biomedical Informatics Program training activities.

Thanks for your comment, Deborah.

Our interest in OpenHelix as a possible source for training material comes from the desire to quickly address an immediate need that probably falls under the general Biomedical Informatics core competency theme - that is, to help researchers understand the role of bioinformatics in study design and analysis of genomic, proteomic, and metabolic data, and also get hands-on experience with the most commonly used computational tools, bioinformatics databases, and data repositories to help them formulate research questions.  Designing these tutorials from scratch would be outside the scope of our existing resources.

Yes, we definitely would want to coordinate this with CTSI Biomedical Informatics Program training activities. A library-based portal for bioinformatics training would be a great place to disseminate any educational materials (recorded webinars, tutorials, hands on exercises) created by the CTSI Biomedical Informatics Program, and to promote bioinformatics training events and courses that occur throughout UCSF.

This resource could also be a way to increase awareness of and provide training on bioinformatics tools created through CTSA funding such as openSESAME, as well as tools (and experts) needed for biostatistical analysis of data such as SAS and SPSS - software that is available through the Research Software Licensing arm of the library.

Thanks for your interest in the proposal.  I'll also contact you via e-mail for advice on resources for getting up to speed on CTSI Key Function Committees.

Overall, I think this is a great initiative, but it needs to be coupled with hands-on courses offered either through the library or facilitated by the library.  One of the major challenges we have in offering training on Cytoscape, Chimera, or other bioinformatics tools is management of the logistics.  In the past, we have worked with the library to offer a summer series on Chimera (see http://www.cgl.ucsf.edu/Outreach/Workshops/ at the bottom of the page) that was well attended (most sessions were completely subscribed).  OpenHelix might be a way to gain interest, but to serve the deep needs of the UCSF community it needs to be coupled with hands-on courses.

 

Scooter Morris

Executive Director, NIH Resource for Biocomputing, Visualization, and Informatics

Director, Sequence Analysis and Consulting Service

 

 

Hi Scooter.  Thanks for taking the time to read through the proposal and for your comment.  I completely agree.  Hands-on courses and online training are very complementary in nature and this program could facilitate both.  The self-serve training model provided by online tutorials and exercises can fill an on-demand need for education and training, at the time that researchers need it.  But the level of training researchers require in order to understand how to apply these tools and databases to their specific research question requires hands-on training and examples from their peers and experts in the field. 

We have some really good momentum going right now with the Pathway Analysis workshops offered through the UCSF Library (oversubscribed at the moment), but the focus of those workshops has been on a single tool for analysis of 'omics data (network analysis, functional analysis, pathway enrichment).  Now would be a great time to build on that momentum and line up some additional training/courses on tools and databases that are already in demand here at UCSF.  I think that is an effort we would like to work with you and other groups on (e.g. the Gladstone Bioinformatics group) regardless of whether this proposal is funded. 

Adding to the bioinformatics training toolkit with a library of on-demand/self-serve tutorials can fill in the gaps when researchers have that immediate need.  And any materials (recorded webinars, slides, etc.) from hands-on courses from UCSF experts would find a home there as well.

Thanks again for your suggestion.  I'll contact you off-line about courses and training you would recommend in the short term.  The library is very well suited to facilitate those courses.

Commenting is closed.

Biobank Inventory Software Evaluation

Proposal Status: 

Biobanking Inventory Software Evaluation

Rationale:  Access to high-quality human biospecimens and associated clinical data is essential to translational and clinical research programs.  Effective and efficient use of human biospecimens is an important tenet of our role as community-entrusted stewards of these valuable resources.   These points reflect the conclusions of the CTSI-funded (2008-9) Tissue Task Force, which engaged Huron Consulting, as well as a recent 2011 audit of UCSF tissue banks by UCSF Audit Services.

It is estimated that UCSF currently maintains 50-200 biobanks – some program-based (e.g., the Cancer Center Tissue Core) and others PI-specific.  However, many researchers remain unaware of the breadth and depth of biospecimens available on campus. Ready access to available inventories would improve utilization and enhance our stated role as community-entrusted stewards of these valuable resources. The UC BRAID Biobanking Work Group, composed of representatives from all 5 UC academic medical centers, has also identified inventory access as critical for building a UC Regional Research Network across all 5 UC centers; this Network would improve Californians’ access to clinical trials and support critical biomedical research on a broad array of health issues.  The largest access challenge is the variety of inventory software systems employed by the various banks; the extent of this diversity, and the barriers to sharing information, are not currently clear.  This proposal requests funds to: 1) Conduct a UC-wide assessment of the various inventories in use; 2) Identify the functional needs of each participating bank; 3) Evaluate the feasibility of a common interface that would allow biobanks to choose to share information while retaining their existing inventory software system (low-cost), or determine whether broadly viewable inventories are only possible with a common electronic tool (high-cost) – one illustrative form such an interface record might take is sketched after this paragraph; and 4) Understand biobankers’ perceived benefits of and barriers to increasing researchers’ access to their biobank inventories.  An additional benefit would be a comprehensive list of UCSF biobanks available for researchers to place deposits.
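To illustrate what the low-cost option in point 3 might look like, the sketch below shows one hypothetical shape for a "common interface" record that each bank's existing software could export. The field names are our assumptions for illustration, not a proposed standard.

    # Hypothetical minimal record that diverse inventory systems could each
    # export, letting researchers browse holdings without banks replacing
    # their existing software. Field names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class SpecimenSummary:
        biobank: str          # e.g., "Cancer Center Tissue Core"
        specimen_type: str    # e.g., "FFPE tissue", "serum", "DNA"
        disease_context: str  # diagnosis or study context, where shareable
        count: int            # number of aliquots available
        access_policy: str    # "open", "collaboration required", or "closed"
        contact: str          # bank manager who routes requests

    def shared_catalog(per_bank_exports):
        """Concatenate every bank's summaries, omitting closed collections."""
        return [rec for bank in per_bank_exports for rec in bank
                if rec.access_policy != "closed"]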

Plan:  A Program Manager, hired through the UCSF PMO, will be employed to facilitate this assessment.  He/she will develop an assessment plan and interview UCSF biobank managers to establish current practices and inventory systems, as well as to understand the perceived benefits of and barriers to increasing access to biorepositories.   This will inform the process of engaging biobankers at the other UC campuses through UC BRAID.  Additionally, information about the perceived benefits/barriers will help us to identify areas of concern that must be addressed, and educational programs that may be developed, in order to create a system that supports greater sharing of biosamples at UCSF; this information can be used for similar purposes at other campuses as well.  In addition, the survey will provide a definitive list of biobanks for deposits and withdrawals, and will facilitate integration of biobank inventories with clinical data from the Electronic Medical Record in the Enterprise Data Warehouse, currently under development.

Criteria and metrics for success:  Deliverables from this project would include a list and description of all biobanks at UCSF, including the availability of the specimens to the research community and identification of the relevant information stored for each specimen; a technical assessment of each of the inventory software systems in use; and a compilation of identified benefits and barriers to opening access to inventories.   These deliverables would facilitate the specification of a common interface for the inventories, enabling better mining of the stored data.

Project Total cost:  $53,150.  Request to CTSI:  $50,000.  The remainder of the project costs would come from Research Resource Program (RRP) funds. 

Program Manager to set up the program, identify needed information, conduct Biobanker interviews and write up the assessment:  2.5 days/week for 6 months at $175/hour:  $21,000

Program Staffer (Analyst II or III) to conduct Biobanker interviews and enter data into project database:  50% effort for 6 months $24,150 including benefits (base salary $69,000, median Analyst II salary)

Technical Assessment Staff:  2 weeks of technical assessment by programmer to determine potential inventory interfaces and provide recommendations:  $100/hr x 80 hours = $8,000. 

 

Collaborators:  Cancer Center Translational Informatics, directed by Sorena Nadaf, has significant experience with software assessment for use on an enterprise level. 

UC BRAID Biobanking Committee, chaired by Sarah Dry, UCLA, will provide insight into appropriate interview questions and UC-wide biobanking needs.

Comments

Hi Julie,

 

I really like this idea of providing a systematic way for researcher to identify biospecimens. I'm thinking in terms of writing these resources into grant proposals--it certainly would be beneficial for an investigator to be able to leverage current resources, and to do so, they obviously need to be able to find if these resources exist. Along those same lines: could the system you describe also be used by researchers looking for a bank in which to deposit their own biospecimens? It seems that is another lacking resource, and one that could potentially be within the scope of what you describe.

Hi Erin -

While the focus of the project is to assess the various software tools used in UCSF biobanks, the project would certainly provide a better inventory of UCSF biobanks than we currently have.  Having that information available centrally would have a real positive impact on our research infrastructure and allow better banking of precious human samples.  Adding this point to the current application will certainly strengthen it.  Thank you.

Julie:

 

This is a great idea.  I think that instead of 're-inventing the wheel' there should be a way for investigators to see, via a web-based program, what specimens (animal, human) are available for research at UCSF and other UC campuses.  This would be essential for young faculty who may have limited funds to collect and store their own specimens.

One thing to consider is that some biobanks do not store the clinical data associated with their specimens.  The PI of a particular project will have the data associated with stored specimens. Thus a way to connect the specimens and their data would be of value.

Evaluating what software is currently being used by biobanks would provide more information as to how many biobanks exist at UCSF and would help determine the software needs of our biobanks.

Good point, Yvonne.  It is true that the associated data will differ for each collection.  Thus, finding an interface to link existing inventories, rather than forcing them all into the same "box", would be very beneficial.  If the proposal is funded, this will definitely be part of the scope.  Thanks for your input.

The Biobanking needs assessment is a great idea; will there be an opportunity for the UCLA CTSI to participate in the survey or to use the survey tool and methods to conduct the needs assessment at our CTSA? This might be an opportunity for all the UC CTSA evaluation offices to conduct a comparable needs assessment and the initiative fits very nicely as part of the larger BRAID Biobanking Workgroup efforts. How can we participate or replicate the study at our UCLA CTSA? Nice Work!  

 

Hi Pamela - Thank you for the comments.  I agree that it would be a big plus if we could extend participation in this survey to the UC BRAID institutions.  Adding that to the application text will make it a stronger application.  The initial funding may not be sufficient to include UC BRAID participants in the first phase, but there will be strong impetus to do so in a later phase.

 

This project will be very helpful in eventually making existing biospecimen resources more readily visible and available. The project focuses on evaluation of biorepository inventory software, and I believe the assessment could also include what kind of LIMS programs biobanks are using to record specimen processing and analysis within the core, both in collaboration with other cores and with investigators.

Hi Hubert - that is a great point.  Including gathering this information during the survey would make a lot of sense.  Thank you.

This seems a great effort to establish; my recommendation would be to include an evaluation of how the biospecimen data are, or can be, connected with clinical warehouse information coming from the EMR data warehouse. These should be made easily compatible.

Thanks, Laura.  I'll include this point in the proposal.

Commenting is closed.

Advocacy Impact on Research Agendas

Proposal Status: 

RATIONALE: Realizing that the benefits of the current revolution in biology and oncology would be enhanced by vigorous public support, for the past twenty years the UCSF Breast Oncology Program (BOP) has implemented comprehensive strategies focused on leveraging advocacy engagements. To help transform the conduct of clinical and translational research, advocates support a wide range of NCI-sponsored Specialized Programs of Research Excellence (SPORE) and Physical Sciences-Oncology Center (PS-OC) Cancer Center projects, as well as multi-site grants sponsored by the Department of Defense (DOD), Komen, the California Breast Cancer Research Program (CBCRP), Stand Up To Cancer (SU2C), and the Translational Breast Cancer Research Consortium (TBCRC). Applying core principles that forge synergy with the NCI Advocacy Research Working Group recommendations – strategic innovation, collaborative execution, evidence-based decision-making, and ethical codes of conduct – researchers and advocates interact across a broad spectrum of partnership modes at various levels of intensity. Participating in four areas – 1) research and programmatic support, 2) education and outreach, 3) policy and strategy, and 4) representation and advisory roles – UCSF breast cancer advocates are at the forefront of efforts to revamp translational research processes. For example, advocates involved in grants meet on an ad hoc basis with the study investigator and team. Infusing the lay patient perspective into discussions, trained research advocates vet hypotheses, define strategic priorities, address research challenges, and incorporate innovative science and study design into research projects/clinical trials. Importantly, promoting cross-sector and trans-network collaboration, advocates catalyze change in research practices by identifying systematic barriers to research efficacy, effectiveness, and expediency.

CHALLENGE: As champions of precision medicine, evidence based practices, Bayesian statistics, adaptive trial design, biospecimen standardization, biomarker validation, improved test result reproducibility, and informed consent reform, advocates are driving change in research practices as well as in FDA initiatives leading to new regulatory and interpretative mechanisms. Moreover, forging credibility, advocates now have a dramatic presence as authors in scientific papers, presenters at scientific conferences, voting members in scientific advisories, cooperative groups, data safety monitoring boards, planning committees, protocol/peer review committees, and informed consent working groups. Yet, despite the growing momentum of advocacy in the support and reform of clinical and translational research, collateral challenges remain. Because advocacy efforts often aim for outcomes that are hard to measure, capture, and operationalize, unique quantitative and qualitative approaches are needed to systematically assess the impact of advocacy messages and activities. Lessons learned from successes and failures will allow us to continuously optimize efforts and promote best practices to other UCSF advocacy groups.

PLAN & METRICS FOR SUCCESS: Three innovative methods are being developed specifically to respond to advocacy’s unique measurement challenges:

1.  A static table will be used to create an interactive model for measuring progress in real time.

2.  Stakeholder surveys that gather advocacy stakeholder perspectives and feedback will be used to assess the extent to which researchers support advocacy involvements and whether the support is changing over time.

3.  Logic models and systems mapping approaches will be developed by a committee of researchers and advocates to monitor performance, address challenges, and identify “best practices” and key entities to guide infrastructure improvements.

Arming stakeholders with the tools to make advocacy activities more transparent, recommendations more targeted, and outcomes more impactful, participants will formally assess the extent to which advocacy processes are improving the conduct of clinical and translational research.

JUSTIFICATION: Collaborative team science provides a starting point for comprehensive change and advocacy activities are being looked at as a shining example of how research advocacy will help spur medical innovation, democratize science, and expedite the incredible potential of future investments in bioscience. To sharpen and shape the vision for capacity building and bi-directional mentoring and transnetworking opportunities, advocates and other stakeholders will participate in two workshops specializing in evaluation and strategy development for advocacy and policy change efforts.

COST: A total budget of $50K is requested: project coordinator, $24K; advocacy training (Project LEAD), $5K; workshops (two UCSF workshops on advocacy impact metrics), $12K; infrastructure (training materials and modules for UCSF outreach), $9K.

COLLABORATORS:  Susan Samson, Linda Vincent, Susie Brain, BOP science advocates, Hope Rugo (clinical lead PS-OC), Sarah Goins (coordinator BOP), Laura van ’t Veer (Leader BOP), Susanne Hildebrand-Zanki (Associate Vice Chancellor Research), Elizabeth Boyd (Associate Vice Chancellor Ethics and Compliance)

Comments

It's exciting to see this process taking shape.  I believe the proposal would be strengthened by a clear goal statement up front, such as: to elucidate what is gained and lost through advocate involvement in the development and conduct of research, and in the grant application review process. 

 

Development of a logic model would be appropriate as a first aim under that goal.  This process can be informed by open-ended interviews with researchers, policy-makers, and advocates (conducted by an evaluator or researcher with no vested interest).  A subsequent web-based survey might be developed based on the concepts identified in the interviews for measurement on a large scale.  It might also be important to aim to identify optimal characteristics of advocates associated with each process (e.g., training, personal experience, skills, etc.).

 

It's not clear what the static table is.  Suggest either providing more detail or deleting.    

As indicated in the proposal, we need to acknowledge that advocacy programs and processes are hard to measure and operationalize. Once we have done so we need to understand that a comprehensive approach is required to elucidate what is gained and lost through advocate involvements. Without question, as suggested, the proposal will be strengthened by the progress and ongoing activities related to specific goals and aims. The major goal and associated aims are summarized below:

 

GOAL: Incorporate innovative evaluation methods for assessing advocacy involvements and policy change efforts in the development and conduct of research, in the grant review process, and in programmatic activities.

 

Aim 1: Assemble a taskforce with researchers and advocates to develop a logic model as a valuable tool for program planning, development, and evaluation. The logic model connects the dots between resources, activities, outputs, and impacts.

Aim 2: Develop and implement stakeholder surveys or interviews that gather both advocacy and researcher perspectives and feedback.

Aim 3: Obtain case studies offering detailed descriptions, often qualitative, of individual advocacy training, prior experience, skills, strategies, efforts, and results.

Aim 4: Develop and implement a subsequent web based survey based on the concepts gleaned from the interviews. Advocates rate researchers and researchers rate advocates on scales that assess support for, and influence on, the issue.

Aim 5: Ensure that stakeholders have adequate training and mentoring in evaluation design.

 

We wish to clarify that the "static table" is a grid that highlights specific activities and outputs: i.e., meetings and workshops attended, coalitions formed, presentations made, etc. This does not address quality. The quality of outputs is assessed in Aims 1-5.

 

Importantly, we wish to emphasize that our program for integrating advocates in research is based on the premise that scientific progress requires multidisciplinary expertise and a new team science collaborative paradigm. All of us, researchers, clinicians, and advocates are responsible for promoting bi-directional science advocacy exchange. Without this shift, an understanding of how advocates catalyze change in research practices is improbable.

See the comment by Susan Samson, one of the Bay Area Breast Cancer Science Advocates and co-writer of the proposal, for the reply.

Developing a system for evaluating and better understanding the important impact of advocates in translational science is of high importance. I suggest incorporating into this project, as an additional long-term aim/benefit, that the lessons learned will enhance the ability to recruit new advocates for research projects.

Patient voices are becoming increasingly important in driving the research agenda. In particular, in the era of precision medicine, where patient-tailored screening and treatment management is advocated, only the consumers can indicate the priorities. To ensure a broad representation of opinions, one of the goals of this project is to engage more science-interested advocates.

I agree that it is important to develop quantifiable measures of having advocates involved as basic science transitions into individualized therapy. The proposal is a good beginning at developing ways of evaluating the impact of advocates. To develop such a model, it is essential to understand the role of the advocates. It is also important to recognize that the information must be bidirectional: the advocates must be educated to understand the nature of science and its limitations, for example, what statistics really mean. The proposal needs more specifics on how feedback from such evaluations will be used to make the integration of advocates into the research process more effective.

 

As the proposal stands it has several strengths in that it proposes innovative methods. Building this new type of collaborative team science has the potential to create more effective treatments as well as greater patient participation in the development of the model.

 

It also has some weaknesses. Overall, it is not clear over what time frame this evaluation will take place. It should take place over several years, so that improvements can be phased in. It needs some plans for how to involve the advocacy community even more effectively.

 

The language describing the proposed evaluation methods is not very accessible to non-experts. For example, in the first goal, I don’t understand what a static table is and how it contributes to the evaluation of progress. Similarly, in the third goal, what are “logic models and systems mapping approaches”? Will there be training of the advocates and scientists/clinicians involved in the evaluation methodology?

Hi Zena,

Thank you for acknowledging the diversity of needs, concerns, and values of advocates who participate in the BOP. And thank you for recognizing how the impact of advocacy on research agendas has been difficult to capture and measure.

Challenges notwithstanding, we argue for the importance of developing systematic approaches for gathering qualitative and quantitative information that can be used to consider the contributions and implications of advocacy in basic science and translational breast cancer research settings. Innovative methods are being developed specifically for assessing advocacy influences and policy change efforts in the support and reform of team science collaboration as well as research processes and methodologies.

Concretely, feedback from the evaluations will capture the ways advocates focus on consent practice improvements, clinical trial transparency, patient safety, biomarker validation, tissue acquisition, data reproducibility, data dissemination, and survivorship issues, among others. Evaluation approaches will determine how advocates, trained in the basics of science and statistics, are bringing about changes in epistemic practices as well as the design and conduct of research. For example, as powerful non-traditional allies who focus on the values of patient empowerment, their responses will indicate how they address hurdles and bottlenecks in research processes, help establish formal mechanisms for moving highly technical evidence-based agendas into clinical practice, and thereby advance their own strategic goals within science. And yes, feedback from the evaluations will also offer insights into some of the tensions and limitations in the process, e.g., gaps in bidirectional mentoring and capacity building efforts.

We also wish to clarify that:

1) Logic models and systems mapping approaches are evolving evaluation methods defining how data can be collected and used to identify advocacy goals, rationales, assumptions, resources, outputs/activities, and long- and short-term outcomes.  These methods are tools for assessing what strategies are working and what needs improvement.  For example, systems mapping is a qualitative approach that entails visually mapping a system to identify the parts and relationships that are expected to change and how the advocacy effort is trying to achieve change, and then identifying ways of capturing whether those changes have occurred.

2) In the proposed workshops, there will be training of advocates, scientists, and clinicians on relevant, timely, and efficient evaluation methods.

3) The "static table" is a grid that highlights specific activities and outputs: i.e., meetings and workshops attended, coalitions formed, presentations made, etc. Please refer to the Samson reply of 3/15/13 for further thoughts on this issue.

4) We agree. The evaluation must take place over several years to determine whether a strategy is making progress or achieving its intended results.

 

 

-Susan Samson

Breast Oncology Program Science Advocate

Hi Laura

This is an incredibly important piece of research, yet it needs to be implemented quite deliberately. Have you thought of creating a structure within your research plan to establish a "governed" feedback loop for community members' concerns/needs to be heard, addressed, and possibly implemented?

Hi Courtney,

 

Yes indeed, we thought about such a governed feedback system, and it will be included in our development phase. We have several patient advocates in the group who will work on this, all of whom are extensively trained on advocacy participation in research agendas. No one has really formalized feedback loops, and our intent is to specifically provide that in this project.

 

Thanks for bringing it up,

 

Laura

Definitely a project that needs to be done, not only to demonstrate the great strides the UCSF breast cancer advocacy community has already made, but to aid in moving forward. 

It sounds like you are building the evaluation tools from scratch.  There are a number of advocacy evaluation tools/toolkits available that you may want to consider using, at least as a foundation to get things going.  The Robert Wood Johnson Foundation and Mathematica Policy Research put together an Advocacy Evaluation Toolkit that was used to evaluate an advocacy effort related to public policies and health insurance coverage.  It’s available on the RWJ website.  The Washington D.C. based Aspen Institute has an advocacy evaluation program and online tools as does Innovation Network. 

 

Hi Michael,

Great suggestions. We have already looked into resources/toolkits provided by Innovation Network, the Aspen Institute, as well as a plethora of material developed by the NIEHS and NIH on methods in advocacy evaluation. This proposal draws insights from these programs and we are guided by the experience and mentorship of  these thought leaders.

-Susan Samson

Breast Oncology Program Science Advocate

 

Commenting is closed.

An Early Translational Researcher’s Framework for Health Economics Evaluation

Proposal Status: 

Rationale

Economic considerations are important in determining the development and use of a new technology in any scientific field. This is particularly true in healthcare, where overall spending in the U.S. is at an unsustainably high rate, thus making cost consciousness imperative in the development of new technologies. Researchers who desire to translate new understandings of disease mechanisms into commercially viable methods of diagnosis, therapy and prevention often neglect or struggle to incorporate health economics into the assessment of the potential value of their innovations. We propose to develop a framework for health economics evaluation, with insights, tools and relevant case studies, specifically targeted to support the research and development goals of early translational researchers. The primary objectives of the framework are:

  • to highlight some key economic factors that lead to the adoption or rejection of new technologies in clinical practice, and
  • to help researchers better frame the potential economic value of their innovations

Supporting an Unmet Need

In the Catalyst Award Program at UCSF CTSI, early translational researchers receive targeted advice on product development and commercialization strategies, in addition to seed funding. These key insights, provided early in the translation process, have often led researchers to significantly recalibrate their research and development goals and approach to maximize the potential clinical and commercial success of their innovations. Health economics and cost-effectiveness are consistently raised as critical issues in the potential success of new technologies reviewed in the program. A complete economic evaluation of any proposed technology is far beyond the goals and capabilities of the program. However, an accessible framework that provides key insights, tools and case studies to help researchers understand the factors contributing to the economic evaluation and adoption of new technologies would be invaluable.

 

An Initial Pilot Project Focused on Medical Devices

Since the proposed framework will be driven by a limited number of case studies, we will maximize its value by initially developing and validating our approach for medical devices. If successful, a similar approach will be used to develop frameworks for therapeutics, diagnostics and digital health technologies, with the expectation of shared and technology-specific elements within each framework.

We will leverage the knowledge of key experts in health economics and policy as well as  technology adoption at UCSF. Additional independent experts in health economics, cost-effectiveness, reimbursement, and new technology assessment will also be identified and consulted.

  • Pilot Evaluation: A panel consisting of UCSF and external experts will initially be convened to discuss how early translational research projects can be best evaluated using the limited resources available to researchers. The panel will recommend and evaluate up to 5 medical devices whose development and successful (or failed) clinical adoption can be used to highlight important aspects of health economics.
  • Draft Development: The output of these reviews will be translated into the first draft of a framework for economic evaluation and is expected to include case studies, methodologies, templates, and resources.
  • Rapid Testing: This draft framework will be pilot-tested with 5 UCSF researchers involved in medical device projects that have been reviewed by the Catalyst Program. Their experience with the framework and their resulting economic evaluations will be assessed and discussed by the expert panel.
  • Finalize: The results and assessment of the pilot study will be used to develop a final framework for broader distribution.
  • Distribution: The final framework will be distributed through the Catalyst Award Program, The Center for Healthcare Value, and other CTSI and UCSF channels, other CTSA network channels, and outreach to relevant publishers, etc.

Short and Long-Term Success Metrics

The short-term success of this approach and of the framework will be determined by the results of the pilot study and the content of future Catalyst Award applications. Long-term success will be measured by the ability of researchers to better understand and incorporate economic factors into their translational projects. Project tracking and feedback mechanisms within the Catalyst Award Program will be used to monitor and evaluate such measures.

 

Collaborations

This project will be seeking to include a multidisciplinary group of health economics stakeholders, including key researchers at the Institute for Health Policy Studies at UCSF, health economists, reimbursement and technology evaluation experts. Lisa Schoonerman and Ruben Rathnasingham, who will lead the project, have extensive healthcare product development and commercialization experience. This pilot project, focused on medical devices, will require a budget of $15,000 for consultants, meeting expenses, production and promotion.

Comments

Intriguing, but you may need to be more specific about what it is you want investigators to better understand as they develop their early translational discoveries.  For example, you talk about 'economic value' but I'm unclear what exactly you mean.  Do you mean that you will be providing tools for investigators to understand how their intervention will or will not provide value were it to be successful (benefit/cost)? 

Mini, thank you for your comment. You make a great point. We will work to better articulate our objectives over this open improvement phase.

Investigators generally appreciate the value and importance of a cost/benefit analysis. What is not as broadly understood are the many factors that contribute to ‘cost’ and ‘benefit’, and how these factors potentially affect the decision to adopt new technologies into clinical practice. Our goal is to develop a framework that provides key insights and tools to improve this understanding, and to empower early translational researchers to ask the right questions that will guide them in maximizing the clinical and commercial success of their innovations.

This looks very interesting and will be a new service for UCSF investigators.  Once you are ready to start providing the consultation, please consider partnering with Consultation Services.  We could assist in the promotion, provide on-line request forms, and track the scope of consultations over time.

Alice, thanks for your comment. Our hope is that this framework will inform researchers of some of the economic considerations that go into the decision to develop and commercialize a product. The framework may help them with a preliminary assessment of their projects, but most importantly, it will empower them to think through potential challenges and opportunities, and reach out to the many resources at UCSF, such as the new Center for Healthcare Value, for feedback. In that regard, we would love to get your support on any resulting consultations.

I agree that this would be a very useful tool for projects that are aiming to improve health care value.

It would help to provide a little more information about the conceptual framework for what you are proposing. For example, what are the dimensions of the key inputs and outputs?  Who are the potential experts at UCSF and elsewhere who could be tapped to provide input in these areas?  It will also be important to specify from which perspective you will be framing the economic analysis/model... e.g. the health care organization that adopts the device, the insurance company that pays for the device, society at large, or the company that will be formed to manufacture and sell the device?

Ralph

Thank you for your thoughtful comment and insightful questions.

As you mention, the idea of providing translational researchers with a resource to help them better understand the role of health economics in the development and commercialization of their innovations would be very valuable. We see this question arise in most of the projects we review in the Catalyst awards program.

However, we also understand the broad scope of this issue, and the proposed pilot framework will necessarily be limited in scope to:

  • a resource targeted to medical device researchers (one of the four categories in the program: devices, diagnostics, therapeutics, and digital health)

  • a pilot resource built on the needs of a small, but representative number of UCSF medical device research projects

  • a resource that provides sufficient information and insights for researchers to understand the perspectives of key stakeholders, and to help them begin adopting these perspectives into their research projects.

Potential key inputs: A set of inputs will be initially proposed and then prioritized by the expert panel. They will include clinical and market need, target indication and market size, product claims, estimated product development resources, impact on risk/benefit profile, existing solutions, reimbursement profile,  delivery profile, and other factors that key stakeholders consider, and are accessible to researchers at UCSF.

Potential key outputs: key stakeholders, their objectives and dependencies. Top level description  of health economic assessment methods and tools used by key stakeholders. Case studies that highlight how these tools are used. The output may take the form of decision trees, flowcharts and interactive workbooks.

Potential UCSF experts include leaders at the Center for Healthcare Value and the Institute of Health Policy Studies, such as yourself, and others, who can not only help guide the creation of the resource but also advise on how it is delivered. We also hope to engage with key decision makers at the Medical Center. Outside UCSF, we will involve leaders from the medical device industry, and representatives of private and public payors. We have engaged with some potential panelists through our work at CTSI, and our first task will be to identify individuals who are available to help.

Most of the researchers we work with are not primarily focused on improving health care value. They are often driven by the effectiveness of their innovations and the potential to improve health. The impact of health care value on the development, adoption and delivery of the innovation is often misunderstood or ignored. Our goal is to engage these early researchers by positioning health care value as a key factor in determining the likelihood that their innovations will get the required support at every point in the research, development, commercialization and delivery value chain. We believe that this approach will require the identification and some consideration of the perspectives of all key stakeholders.


Commenting is closed.

Improving the consent process for complex studies: making the study understandable to subjects

Topics: 
Proposal Status: 

Rationale.  Consent forms for clinical trials have expanded and now often range from 25 to 30 pages, in spite of the fact that studies have documented the inability of most research participants to retain this amount of information.  In addition, many consent forms include technical, medical, and legal language that is difficult for a lay person to understand.  When these consent forms are reviewed by the Committee on Human Research (CHR), concerns about their length, readability and complexity are often raised.  Investigators may consequently spend a great deal of time editing consent forms to satisfy CHR requests.  However, much of this language is mandated by study sponsors, so investigators may struggle to balance sponsor requirements against revisions requested by the CHR to improve readability.  Although investigators or study staff review study procedures and consent forms with patients, these long, complex forms are unlikely to be conducive to true informed consent, particularly among patients from vulnerable populations.

 

Plan.  We propose to test the feasibility, acceptability, and usefulness of a simple 2-page summary sheet, written in plain language, to improve the consent process, enhance subjects' understanding of the research study and its risks, and facilitate the CHR review process. 

This project has the following primary goals:

  • Develop a template and guide for investigators to use to create the research summary, and evaluate the ease of preparing the summary among a sample of investigators and their research staff.
  • Evaluate the readability and literacy level of summaries prepared by investigators and/or their staff.
  • Evaluate differences in subjects' understanding of the research protocol through use of the summary, using a short knowledge test after the standard consent procedure and then after reviewing the summary.
  • Assess research subjects’ evaluation of the summary in terms of comprehension of the research protocol and the study risks, and their satisfaction with the consent process.
  • Assess both investigators’ and CHR members’ satisfaction with the review process when the summary sheet is included.

A beneficial side effect of this process may be to heighten investigator awareness of the value of plain language in consent forms.

 

Criteria and metrics for success

  • A template for creation of a research summary is developed.
  • Using the template, creation of the research summary is judged by investigators to require little time.
  • Readability of research summaries is found to be at an 8th-grade level or lower (see the illustrative readability check following this list).
  • Research participants report that the summaries improve their understanding of the research study and its risks and benefits.
  • Investigators and study staff report that the summaries improve the consent process and perceive that patients have a better understanding of the study.
  • Investigators report fewer difficulties with consent form review process.
  • Investigators report that sponsors did not object to the 2-page summary or delay the study as a result.
  • Investigators report that they would continue to use the summaries.
  • CHR members report that the summaries improve their understanding of research protocols and lessen their concerns about complex consent forms.
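
To make the 8th-grade criterion operational, a minimal readability check is sketched below. This is an illustration only: it uses the published Flesch-Kincaid grade formula, and the vowel-run syllable counter is a crude stand-in for the validated readability tools that would be used in practice.

    import re

    def count_syllables(word):
        # Crude heuristic (assumption): each run of consecutive vowels = one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        # Published formula: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    # A draft summary would meet the criterion if flesch_kincaid_grade(summary) <= 8.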

 

Approximate cost and justification

Total Budget: $48,854. Salary support for UCSF faculty and research assistant involved in developing and evaluating the materials.   Incentives for investigator, research staff, and research subject participation.

 

Collaborators.  Patricia Katz is Prof of Medicine, has been a member of the Laurel Heights CHR Committee for 10 years, and is currently Vice-Chair of that committee. John Heldens is the Director of UCSF’s Human Research Protection Program. Jennifer Barton is Asst Prof of Medicine whose primary research focus is health literacy.  We especially seek clinical trial investigators willing to test the new materials as collaborators.

Comments

Great idea. Since this isn't my field, I'm wondering if there is a prior literature on this approach to improving the informed consent process?

There is literature demonstrating that information from consent forms longer than about 4 pages is not retained.  I'm not aware of studies using this approach -- will have to search the literature a little more.

This is a critical issue to address as consent forms have become incredibly long and often more confusing than helpful to study participants. I like the idea of a brief study summary to help potential participants improve their understanding of the study procedures. Do the investigators have any concern that the 2 page summary might be used in lieu of the complete consent form? That is, if all of the information in the complete consent form is deemed necessary by the CHR for participants to review, will the 2 page summary provide an "easy out" for PIs and research coordinators to only discuss those key items listed in the summary, rather than reviewing the more complete and detailed information in the consent form?

That's a valid concern.  I think we would have to depend on the good faith and intent of investigators to conduct the consent process using the entire consent form.  We do that now, assuming that the complex forms are supplemented by investigator/staff explanations.  The summary would only be intended to facilitate patient understanding.

I love this idea, and agree that the consent form process has become a big issue. 

 

I'd also love it if the UCSF Subject Authorization document could be incorporated into the regular consent.  It's difficult for patients to understand why they need to sign multiple consents, and the current language of that document scares many potential study patients off.

Interesting suggestion.  The summary could easily add a "boilerplate" statement concerning the HIPAA form.

I highly agree with the above comments that this is a really important issue and a simple summary sheet may be helpful in providing research participants additional clarity about what they are signing.  One question I have is how you will be evaluating whether participants have "better understanding" with use of the summary sheets?  Is the plan to do a comparison within studies where some patients receive the summary sheet and some do not and then see who knows more about the study?  I'm curious because we're in the planning stages of a RCT of various forms of consent for biobanking and we're still trying to figure out the best way to evaluate understanding.

Ideally, I would follow your suggestion of doing a comparison within studies.  But given the time and budgetary constraints, and viewing this as a pilot project, I plan to use a short knowledge questionnaire about the study after the regular consent process, and repeat it after presentation of the summary, along with asking subjects to rate the helpfulness of the summary sheet.

 

For your RCT, a knowledge questionnaire might be helpful.  Rebecca Sudore has done this, so you might talk with her.

I've written many informed consent documents, and meeting the present requirements is quite difficult and time-consuming.  So the idea of making it "shorter" is appealing.  This is from the point of view of the scientist.  From the Subject's point of view, short is appealing, but undesirable if there are parts the Subject may be concerned about that are unclear or not even mentioned.

 

The conundrum is this:  How can you be both "simple" and "detailed" in the same document?  I've been working on this for years, and there IS a solution, which I call the ExpandingOutline.  This works especially well on computer presentations.  You can see the idea AND the presentation at: http://expandingoutline.webcompendia.org .  If this method is used on a computer, you can also monitor what parts the Subject has expanded, thus indicating interest.  You might even want to place some requirement that the Subject MUST have opened "Risks" or the Subject cannot participate.  Another use of a computer to monitor what is read would be to document the parts that the Subject did NOT look at, so that the recruiter could call attention to those parts as part of the consent process. This data would also be important for re-writes and changes in terminology, since it might identify what does or does not work in terms of alerting the Subject to the items that concern the CHR.  I suppose that if the software were available to summarize each Subject's path through the ExpandingOutline, and then to summarize across Subjects, this information would be of considerable interest to the CHR, so that they could offer critiques based upon data, rather than their own impressions from reading the Consent Form. (I've been on a CHR for a few years, myself.)
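
To make the monitoring idea in the preceding comment concrete, here is a minimal sketch with hypothetical names and logic (not the actual ExpandingOutline software) of logging which sections a subject expands and enforcing the suggested rule that "Risks" must be opened:

    class ConsentOutline:
        REQUIRED = {"Risks"}  # sections the subject must open before consenting

        def __init__(self, sections):
            self.sections = set(sections)
            self.opened = set()

        def expand(self, section):
            # Record each section the subject expands while reading.
            if section in self.sections:
                self.opened.add(section)

        def may_consent(self):
            return self.REQUIRED <= self.opened

        def unread(self):
            # Sections never expanded; the recruiter could review these verbally.
            return self.sections - self.opened

    outline = ConsentOutline(["Purpose", "Procedures", "Risks", "Benefits"])
    outline.expand("Purpose")
    print(outline.may_consent())  # False until "Risks" is expanded
    print(outline.unread())       # Procedures, Risks, and Benefits remain unread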

this is an important and interesting idea.  the literature is replete with examples of the readability of consent forms being too high for potential participants.  this is an even bigger issue with vulnerable populations and in health disparities research.

 

one approach is to add a simple short plain language summary for potential participants (not instead of the consent form mandated by sponsors or campus policies but in addition to). 

 

we took this approach with HIPAA Notices of Privacy Policies in US Dental Schools creating a simple plain language addendum to supplement the ones approved by institutions' lawyers (Ha & Gansky, 2007). 

 

what steps will you take to enhance sustainability after the pilot is over?  training materials including tips on how to write short, simple plain language summaries might be one way to increase the usefulness beyond the boilerplates developed during the project for future needs.

 

 

 

How to enhance uptake and sustainability is an important question.  As part of the pilot, we want to evaluate how well investigators are able to adhere to the simple plain language.  This should guide efforts for training after the pilot.

This is an important topic, as others have noted, especially for vulnerable populations. Also as noted, there has been considerable work in this arena, including this study in the Emergency Medicine literature (Dresden GM, Levitt MA. Acad Emerg Med. 2001 Mar;8(3):246-52). With its high volume and many patients who could benefit from a simplified consent process, the SFGH emergency department is a prime area to include in such a study. We have a robust research assistant program and would be happy to collaborate or assist in any manner. 

Thank you.  The SFGH ED would be a great place to test this.

I think this is a great idea. Do we happen to have a current evaluation of the ROI from implementing this improvement?  The time and/or $$ savings for study teams and the CHR could be significant. 

I don't think we can get an estimate of ROI until we do the pilot project.  That would give an idea of time savings, if any.  But, even without time savings, I think it would be worthwhile if it improved patient understanding and satisfaction with the consent process.

Great idea and definitely a tool that can be used campus-wide. Many participants get lost in all the pages of a consent form; simplifying the informed consent document will benefit participants' understanding of study involvement and expectations. Providing templates is a great tool for investigators and will hopefully speed up the review process on the CHR end.  I do agree with others' comments that this is a huge bonus for vulnerable populations and emergency room care, but I can see the challenge in evaluating what is "detailed enough" for the info sheet. It might be a good idea to test with a sampling of protocols with different levels of complexity, phases I-IV, and a variety of therapeutic areas to see if the template still holds.

Commenting is closed.

Mobilization of Clinical Research Services (CRS) – Pilot Project

Proposal Status: 

Rationale

This pilot stems from the 2012 awarded project ‘Improving CRS performance through application of Lean/6 sigma’, intended to maximize the efficiency and impact of CTSI’s Clinical Research Services (CRS) program across UCSF and other affiliated institutions.  Current CRS services are restricted by location and by an often higher-than-needed skillset to serve researchers across campus, resulting in low utilization of resources toward serving research (<50% research utilization for nursing services).  “Mobilization”, or training a number of resources to apply their skillset at any clinical location across campus, will not only maximize the utilization of CRS services, but will also maximize the number of researchers and studies served across UCSF.  This project will focus on building the training, legal, and technological infrastructure for mobilizing CRS resources, including nurses, nurse practitioners, and phlebotomists, across clinical settings at UCSF. 

Plan

* Survey of existing mobile outpatient team concepts to leverage best practices.

* Based on CTSI/CRS strategy and current needs for research studies, identify the appropriate department(s) across UCSF campus where the mobilized resources will be piloted. The preference is to identify two departments/groups, one of which is already affiliated with CRS. 

* Perform a gap-assessment on the requested skillsets by researchers and available skillsets under CRS to meet the research demand. 

* Select the appropriate number of CRS resources to enroll in the mobilization program, along with modifying their respective job descriptions and legal contracts (i.e. union).

* Identify the system for requesting, scheduling, and billing for the mobilized resources. 

* Select the appropriate technology (if needed) for the mobilized resources to record and complete needed documentation as requested by study teams.

* Identify the process for mobilizing medical supplies as needed based on their availability at the studies’ clinical site(s).

* Measure the utilization, impact, and ROI of the project throughout the pilot period to determine its success and potential expansion to remaining CRS resources or other programs (an illustrative calculation follows this list). 
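
The utilization and ROI measures named in the final step could be computed along these lines. This is a sketch with illustrative numbers; the definition of "value of services delivered" (e.g., recharge revenue or the cost of equivalent contract staffing) is an assumption, not part of the proposal.

    def research_utilization(research_hours, total_paid_hours):
        # Share of paid time spent serving research; the proposal cites <50% today.
        return research_hours / total_paid_hours

    def roi(value_of_services_delivered, program_cost):
        # Net benefit relative to cost over the pilot period.
        return (value_of_services_delivered - program_cost) / program_cost

    # Hypothetical example: a mobilized nurse logs 1,100 research hours of 1,600 paid.
    print(research_utilization(1100, 1600))  # 0.6875, up from <0.50 pre-mobilization
    print(roi(68000, 50000))                 # 0.36 against the $50K pilot budget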

Criteria and metrics for success

The anticipated success of this project is an increase in the research utilization of the mobilized resources by study teams compared to their pre-mobilization performance.  This success will also be coupled with an increase in satisfaction of serviced PIs/study teams and patient participants.  Below are a few metrics that will be measured to gauge the performance and success of the project throughout its lifecycle:

* Number of departments under UCSF serviced by CRS mobilized resources

* Number and type of studies serviced by CRS mobilized resources

* Number and title of PIs serviced by CRS mobilized resources

* Satisfaction of PIs, study teams, and patient participants

* Capacity and utilization of CRS mobilized resources

Approximate cost and very brief justification ($50K max)

The anticipated cost of this project is $50K to support a Project Lead at ~50% of her/his effort.

Collaborators

From PET: Fabrice Beretta, Adel Elsayed, and from CRS: Eunice Stephens (ops manager), Kathy Burkart (finance manager), Deanna Sheeley (Research Nursing Core Dir).

Comments

Nice idea that we have discussed for some time, but not with this level of detail.  Perhaps you could take two programs--one currently using CRS and one not--as the pilots.  I would not mix technology with this; it adds complexity to a project that is mostly focused on service delivery. 

Thanks for the comment Clay. I really like the idea behind choosing two-programs, it'll help pave the way to better understand the process/challenges of delivering these services beyond CRS for potential expansion. 

Commenting is closed.

The Brain Initiative Outreach Proposal

Proposal Status: 

Rationale

The overall long term goal of The Brain Initiative is to create a large internet-based registry of subjects interested in participating in neuroscience research. At launch, we will be recruiting individuals 55 years and older living in the San Francisco Bay Area. The Registry will serve as a base for future recruitment of subjects for studies on prevention of Alzheimer’s, Parkinson’s and other Neurodegenerative diseases. This project has been designed in order to address a major challenge in the conduct of clinical research, the limitations on getting enough subjects for large prevention trials of neurodegenerative diseases. By utilizing online tools, including the registry, the website and a large outreach effort we expect to create a large pool of subjects in the San Francisco Bay Area that will be willing to participate in Prevention Trials. The Registry is innovative because it will include a consent form, a set of questionnaires, several on-line neuropsychological tests, and will be designed for longitudinal followup of registered subjects. We have already obtained funds for the creation of the registry. However, in order to enroll and longitudinally follow a large number of participants the project will require very substantial marketing, advertising and community outreach efforts. This application is to request funds for the marketing, advertising and outreach efforts.
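
Purely as an illustration of the longitudinal registry design described above (field names are hypothetical, not the actual schema), a participant record could be paired with repeated assessment records:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Participant:
        participant_id: str
        birth_year: int         # eligibility at launch: 55 years and older
        consent_date: date      # on-line consent is part of registration
        zip_code: str           # used to target San Francisco Bay Area recruitment

    @dataclass
    class Assessment:
        participant_id: str     # links repeated visits to one participant
        visit_date: date
        questionnaire: dict     # responses to the registry questionnaires
        cognitive_scores: dict  # results of the on-line neuropsychological tests

    # Longitudinal follow-up = appending a new Assessment record per follow-up visit.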

The generalizability of The Brain Initiative applies in several areas:  First, this project is generalizable to all areas of neuroscience translational research, since the project will recruit subjects interested in participating in studies on all areas of brain research. Second, we will be working with CTSI to market and advertise The Brain Initiative to underserved minorities including African Americans, Hispanics, and Asians, as well as to the LGBT community. Third, this project will serve as a template for other registries and recruitment outreach programs in cancer, cardiovascular disease, and other disorders. The scalability of The Brain Initiative will be achieved through the capacity of internet-based registries to scale to large populations. Initially we will focus on recruitment in the Northern California area. However, we have already been in discussion with investigators in Los Angeles, New York, North Carolina and other areas who might be interested in using The Brain Initiative website/registry for recruitment of subjects in their regions. In addition, we are in discussions with Bruno Vellas (Toulouse, France), Simon Lovestone (London, England), Merce Boada (Barcelona, Spain) and Nikklas Mattsen (Gothenburg, Sweden) concerning the extension of The Brain Initiative to their countries, and more broadly internationally. 

Plan

We plan to complete The Brain Initiative’s website and registry by May 31st, 2013.  Once all technical elements are finalized and thoroughly tested, the advertising and outreach efforts will commence. The plan is to perform the following activities: 1) Digital outreach: a. Facebook, Twitter and other social networking sites. b. Online banners, Google AdWords and other online advertising. c. Print advertising including mass mailings. d. Press releases to various newspapers, radio and TV stations throughout the Bay Area. 2) Community outreach: a. Large community event (1). b. Small community event (1).

Criteria and metrics for success

With the effort and budget that we propose, we anticipate that we will be able to register the first 2,000 participants into The Brain Initiative. Here are the milestones for the period of 6/1/13 – 5/31/14: Milestone 1: By 6/30/13, 100 subjects recruited in The Brain Initiative. Milestone 2: By 12/31/13, 750 subjects recruited in The Brain Initiative. Milestone 3: By 5/31/14, 2,000 subjects recruited in The Brain Initiative.

Cost and justification

We estimate that the overall costs will be $50,000 for both the digital outreach and the community outreach. The costs are as follows: 1) Outreach coordination: $11,732 in personnel costs. 2) Digital outreach costs: a. Personnel costs: $8,978 for posting and engaging participants. b. Vendor costs: $13,000 for online advertising. 3) Community outreach: a. Personnel costs: $4,290 for coordination of events. b. Large community event: $10,000 in overall costs. c. Small community event: $2,000 in overall costs. (These items sum to the requested $50,000.)

Collaborators

Dr. Michael Weiner, Dr. Scott Mackin

Comments

Great project, but to be eligible for this funding mechanism projects need to show generalizability and scalability in advancing the conduct of translational research. Could you amend? Or perhaps this is not a good match for this particular funding mechanism.

We have modified the proposal to reflect generalizability and scalability, but overall here is what I think:

The generalizability of The Brain Initiative applies in several areas:  First, this project is generalizable to all areas of neuroscience translational research, since the project will recruit subjects interested in participating in studies on all areas of brain research. Second, we will be working with CTSI to market and advertise The Brain Initiative to underserved minorities including African Americans, Hispanics, and Asians, as well as to the LGBT community. Third, this project will serve as a template for other registries and recruitment outreach programs in cancer, cardiovascular disease, and other disorders. The scalability of The Brain Initiative will be achieved through the capacity of internet-based registries to scale to large populations. Initially we will focus on recruitment in the Northern California area. However, we have already been in discussion with investigators in Los Angeles, New York, North Carolina and other areas who might be interested in using The Brain Initiative website/registry for recruitment of subjects in their regions. In addition, we are in discussions with Bruno Vellas (Toulouse, France), Simon Lovestone (London, England), Merce Boada (Barcelona, Spain) and Nikklas Mattsen (Gothenburg, Sweden) concerning the extension of The Brain Initiative to their countries, and more broadly internationally.

Commenting is closed.

Integration of Clinical and Administrative Data to Measure the Value of Orthopaedic Care

Proposal Status: 

Rationale:

Disorders of the musculoskeletal system present a significant burden to our healthcare economy. Musculoskeletal disorders are the largest cause of disability in the United States, accounting for more than half of chronic disease in patients over age 50 in developed countries.[i] The sum of direct and indirect costs for patients with musculoskeletal disorders in the United States has been estimated at $849 billion, or 7.7% of the national gross domestic product.[ii]  In addition to the financial cost, orthopaedic disorders diminish health-related quality of life. The demonstration of quality and value of care is necessary for defining the role and effectiveness of orthopaedic surgery interventions in the health care economy.  The purpose of this proposal is to create an infrastructure to combine administrative data on cost and quality metrics with clinical data on patient-based health status to measure the value of orthopaedic interventions, and to guide changes in care pathways with the goal of optimizing the value of care.

 

Optimization of value is an important goal of healthcare.  Administrative datasets have been useful in measuring quality of care metrics including rates of complication and readmission and in detailing the cost of care.  However, administrative data offers little insight into patient preference for health states, change in health status, and the value of care. The approach of integrating clinical outcomes with cost data provides an opportunity for moving beyond the use of process measures alone for measuring quality to providing critical information about the true value of care. This model provides a rational approach to making critical health care decisions and will be an increasing priority as we move towards a value-based health care system.

 

Plan:

The Department of Orthopaedic Surgery has the experience and an established infrastructure to routinely collect and analyze health related quality of life outcomes through the Surgical Information Datasystem (SID).  We have collected patient-based health status information on over 90% of our elective patients.  We have developed methodologies to estimate utilities of health states with the EuroQol 5D, Oswestry Disability Index and the Neck Disability Index.  These health status measures permit an estimation of outcome measured in quality adjusted life years (QALYs).  Combining administrative data that permits comprehensive identification of cases and costs with patient-centered data will enable us to measure cost per QALY uniformly on orthopaedic cases. Our goal is to develop an infrastructure to combine the clinical outcomes data with hospital administrative data to measure the value of care. This will be accomplished through the creation of a data registry system that integrates clinical outcome and hospital administrative data. The registry will be established in collaboration with a medical informaticist with expertise in creating and implementing integrated data repository architecture that facilitates registry queries for the reporting of quality of care outcomes.
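
As a sketch of the intended cost-per-QALY calculation (the figures and the undiscounted 10-year horizon below are illustrative assumptions, not study data), patient-reported utilities from SID could be joined to administrative costs along these lines:

    import pandas as pd

    # Patient-reported utilities (e.g., EQ-5D index values) from the SID registry
    outcomes = pd.DataFrame({
        "case_id": [101, 102],
        "utility_pre": [0.55, 0.60],
        "utility_post": [0.80, 0.70],
        "horizon_years": [10, 10],  # assumed duration of benefit, no discounting
    })

    # Case-level costs from the hospital administrative system
    costs = pd.DataFrame({"case_id": [101, 102], "total_cost": [42000.0, 38000.0]})

    merged = outcomes.merge(costs, on="case_id")
    merged["qaly_gain"] = (merged["utility_post"] - merged["utility_pre"]) * merged["horizon_years"]
    merged["cost_per_qaly"] = merged["total_cost"] / merged["qaly_gain"]
    # Case 101: 2.5 QALYs gained -> $16,800 per QALY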

 

Criteria and Metrics for Success

Processes for reporting value of care metrics will be established at weekly departmental research meetings. Metrics for success will include ability to identify low value interventions and services and establish initiatives to improve the value of these interventions. Assessment of the value of interventions will be guided by demonstration of improvement in outcomes and/or reduction in costs.

 

Approximate Cost and Justification

$50,000 for the establishment of an integrated registry, staff training, data analysis and reporting.

 

Collaborators

UCSF Department of Orthopaedic Surgery- Divisions of Arthroplasty and Spine

Kevin Bozic,MD

Sigurd Berven, MD

Steven Takemoto, PhD

 



[i] The Burden of Musculoskeletal Diseases in the United States. http://www.boneandjointburden.org/ Accessed 7/21/11

[ii] The Burden of Musculoskeletal Diseases in the United States. Chapter 9: Healthcare Utilization and Economic Cost.  http://www.boneandjointburden.org/ Accessed 7/21/11

Comments

Though interesting, this proposal is focused on advancing a particular area of research. To be eligible projects need to show generalizability and scalability in advancing the conduct of translational research.

This proposal encompasses elective surgeries for the arthroplasty and spine services, the largest volume of inpatient care at UCSF.  After successful implementation of an integrated data repository for these services, the infrastructure may be extended to include all elective surgeries done at UCSF.

 

Comment by Sigurd Berven, MD

There is a machine-learning system in place that has been used in preop and transplant medicine departments; it acts to capture ongoing medical histories directly from patients and integrates the data in a repository with their medical record data. I would be happy to connect you with that team as potential collaborators.

Commenting is closed.

UCSF Social Media Boot Camp for Scientists

Topics: 
Proposal Status: 

Rationale:

The UCSF Social Media Boot Camp for Scientists (official name to be considered based on comments received) is designed to help researchers explore social media as a tool to achieve various goals, including wider exposure for their research, connecting with potential funding opportunities, cultivating collaborations, and increasing the impact of research, to name a few.

A NatureJobs blog post, Social Media Tips for Scientists, puts it well:

“For many scientists, the thought of spending time on social media sites is distinctly unappealing. To some it’s just a question of time: why add to that to-do list which is already long enough? For others it’s more to do with social media itself, finding the idea of sharing thoughts and ideas with the whole world pointless or self-indulgent.

“If that sounds like you, it might be time to reconsider your options – social media includes much more than the usual suspects like Facebook and Twitter, and there are even sites dedicated to academics. Indeed, a vast number of scientists are using social media for tremendous gains – whether that be forming new contacts and collaborations, sharing ideas, communicating science, inspiring others or just entertaining them. Why not join them?”

A series on Science Marketing featured in Nature Materials notes:

“Today, researchers have to make their publications stand out from the stack of nearly 800,000 science and engineering manuscripts that are published each year, recent PhD graduates and postdocs face historically low employment prospects in academia, and principal investigators compete over shrinking government funding.”

Beyond that, researchers note that the unique aspects of social media are changing the way science is talked about, and some scientists have found that social media tools help to clarify thinking, demystify the scientific process, and spark new ideas.

Goals:

The UCSF Social Media Boot Camp for Scientists will:

  1. Offer useful information about social media specifically targeted at scientific researchers and the academic community;
  2. Address the challenges, myths and potential misconceptions about social media within the scientific community;
  3. Support researchers in developing individual goals for social media, identifying tools that meet their needs, and establishing a social media presence;
  4. Support community building among those within the UCSF campus community who are actively engaged in social media;
  5. Create a “starter kit” and related resources for other academic institutions interested in a similar effort;
  6. Develop a publicly available online resource featuring all UCSF Social Media Boot Camp for Scientists sessions;
  7. Launch a social media consultation service as part of CTSI’s Consultation Services program, which already offers expert advice for researchers in 18 subject areas.

Plan:

The CTSI Communications team (Communications Director John Daigre, and Communications Manager Nooshin Latour) will lead the effort to establish a Planning & Implementation Team that includes UCSF communicators, researchers already active on social media, other universities, external Bay Area companies involved in social media, and others. UCSF University Relations has agreed to be a primary partner in this effort. Additionally, an important component of the project will be to enlist UCSF researchers to share their personal experiences, challenges, and successes involving social media.

All related events will be free and open to all at UCSF (ideally events will be available via webinars or similar for a broader audience).

The format of the Social Media Boot Camp for Scientists will be finalized with input from the Planning & Implementation Team. Events may be organized into one primary event (i.e. a one-day boot camp), or spread out over several months depending on feedback from organizers and partners.

Will consider piloting with other groups, such as K Scholars, as suggested in comments.

Examples of potential types of presentations and panel discussions:

  1. Marketing for Scientists: Thinking Beyond Self Promotion
  2. Understanding the Social Media Landscape
  3. Demystifying Social Media
  4. Social Media 101: Getting Started
  5. Twitter: It May Be More Than You Think
  6. Social Media Rules of Engagement: Risks and Rewards
  7. Social Media Advice for Physicians
  8. Making the Most of Videos and Podcasts (UCTV, iTunes, etc)
  9. Your Online Profile: LinkedIn, ResearchGate, UCSF Profiles and more
  10. Social Media Networking: What Tools Are Right for You?
  11. Academic Blogging
  12. Speaking with One Voice: Integrating Your Social Media Efforts
  13. The Future of Crowdfunding Science
  14. Altmetrics and Non-Traditional Research Impact Measures
  15. Using Social Media for Clinical Research Studies (suggested via commenting)
  16. Connecting with Other Researchers vs. The Public (suggested via commenting)
  17. Smart Social Media: Tools for Better, Faster Communication (suggested via commenting)

Impact/Value:

The UCSF Social Media Boot Camp for Scientists will:

  1. Support researchers’ efforts to achieve their individual goals (including career development, as noted in comments);
  2. Support community building on campus;
  3. Provide ongoing support (e.g., videos, the consultation service) available to researchers interested in social media;
  4. Support amplification of UCSF’s brand as a preeminent health sciences innovator;
  5. Provide tools for other institutions to use in the development of similar events;
  6. Strengthen UCSF ties with external partners (e.g., Bay Area companies).

Metrics for Success: (TBD based on finalized event format)

  1. Participation of faculty/staff (~250+)
  2. Collaboration with UCSF faculty active in social media (~10+)
  3. Collaboration with campus groups (~10+)
  4. Collaboration with other universities (~1+)
  5. Collaboration with Bay Area companies (~3+)
  6. Satisfaction survey (75%+ satisfied)
  7. Other metrics TBD

Proposed Budget:

A proposed budget of $19,500 includes costs for coordination, meeting space, recording events, printing, promotion, incentives, guest speakers, etc.

Potential Collaborators:

- UCSF University Relations (confirmed via comments)

- UCSF Library (confirmed via comments)

- UCSF School of Medicine

- UCSF researchers active on social media (Bradley Voytek confirmed via comments)

- CTSI Online Learning (as suggested in comments)

- Other campus groups via UCSF Communicators Network

- Bay Area companies working with Social Media

- Other academic institutions

Comments

Great idea! Would love to see the active involvement of researchers who use social media as a central part of the development of this initiative. Examples of possible partners include Judy Tan, Bradley Voytek, Aaron Neinstein, Matt Cooperberg, Ken Covinsky, Urmimala Sarkar, Paul Abramson, Bob Wachter.

 

The UCSF Library may also be a good partner on this; their "The Better Presenter" series is analogous in many ways.

Thanks, Anirvan. We would definitely involve researchers who are actively involved in social media, and your suggestions are great (and I've already been in touch with some of them).

Thanks for thinking of the Library, Anirvan. We are definitely interested in this project.

 

The recent Synapse article about the Journal Lab highlights some of the possibilities for social media to facilitate conversations about research. (http://synapse.ucsf.edu/articles/2013/02/20/startup-ucsf-journal-lab-cou...) And we see growing interest in tools like Mendeley etc. 

 

We are always interested in projects that propose new ways to interact with the published literature and that promote collaboration. 

 

And we also see this as an opportunity to further our own understanding of how researchers use and interact with the literature.

 

Sorry - I did not mean my comment to be anonymous. Gail Persily from UCSF Library wrote the comment above.

Thanks for your comments, Gail. You make a really good point about also using this as an opportunity to better understand how researchers use and interact with the literature. I'm glad to hear you're on board and look forward to collaborating.

I think there is real value in this proposal. I use Journal Lab and think these types of tools are great for researchers. A potential improvement to your proposal would be to stress how social media tools, such as Journal Lab, are not adding to the to-do list, but rather replacing other items. We all have to read the literature already; these tools just make it faster and more productive.

Thanks for bringing us up! The Journal Lab team would love to be involved here. This may be covered under existing topics, but I'd love to see a focus on social media as a meaningful source of information as well as a way to disseminate it. Used correctly, traditional social media and emerging tools like Journal Lab, Figshare and Impact Story can improve people's ability to find relevant papers, assess which methods to use, discover potential collaborators, etc. With the increasing deluge of information out there, setting up good information sources and filters is a vital skill.

Nice idea, John. I am curious to know about any expressed interest as of yet, and whether there are specific groups that may benefit from piloting these ideas (Fellows, K Scholars, etc.).

Thanks for your comment, Courtney. I'm continuing to connect with UCSF researchers active on social media and other communicators on campus to include them as collaborators in this effort. And I like your idea of possible pilot efforts with target groups such as K Scholars. Targeting various groups (e.g., those who have never used social media; those who use it only for personal interactions; those who may use only one social media tool; those interested in learning more about integrating various tools) will be a focus of the boot camp.

Love this idea, especially giving best practices and practical incentives for participation. It would be a terrific way to encourage engagement AND start people off with ready-made compliance with UCSF identity standards/social media guidelines. Seems like the most common questions are 'How do I find time?', 'What do I have to say?', 'How do I start?' and, most importantly, 'What is the benefit to me or UCSF?' – your outline above would address those beautifully.

Thanks, Karen. I'm looking forward to having you as a partner in this effort.

Karen's totally on point here. This is a great proposal and I really like the idea of a systemic, smart, rational approach to communicating the pros and cons of social media with UCSF staff, faculty, and researchers. A lot of folks are using these tools anyway, so having a conversation about what the future of this kind of communication means for science is very important.

Thanks, Bradley. I look forward to working with you and other researchers who are active on social media as we develop this project. John

Sounds like a great project - we are already seeing the early adopters jump online, but I hope the campus can come together to offer more counsel for others while balancing the individual and organizational benefits.

Hi John,

 

This is a really neat idea, and I'm excited to see others thinking about how scientists can reach out to the community. It would also be interesting to address the differences between how researchers connect with other researchers and how they connect with the lay public.

Excellent idea. Consider collaborating with the CTSI Online Learning Program to produce training modules.

Another great focus area for this proposal would be in career development and exploration. You touch on this with the reference to LinkedIn, but this is a major 'pain point' for senior graduate students. ~90% will not have academic careers, yet do not start to explore their career options till ~6 months prior to graduation. Incidentally, students would make a great demographic for testing social media usage in research and science.

Michael, you bring up yet another potential benefit of social media. Thanks...I've incorporated this suggestion into the proposal.

You have identified a critical, important need that is not readily recognized by scientists, faculty, or other scholars.  Too many are "stuck" in an "older age" of communication, being only slightly faster than snail-mail.  Getting them "unstuck" is an important goal, and may be a lot harder than one would expect (or hope).

My proposal (Testing new Web-based software for increasing the speed of knowledge creation from translational and inter-disciplinary projects) fits in directly with this effort: we are creating software to make it easy for academics to create highly-moderated Blogs directly related to their academic goals and results. So I would hope to be able to contribute to your presentations, and to any compilations of resources that you create. As you learn what wording and presentation "works" with academics, I want to know that, too. Similarly, my proposal seeks to find out what presentation is most effective in recruiting CTSI K Scholars, Fellows, and PostDocs to creating their own Blog; this can feed into your presentation of Blogs.

Now an important critique: As an Academic, I am TOTALLY TURNED OFF by the words "Social Media". I've kept off of Facebook and Twitter because of both time, and the mis-use that has occurred via these media. So, your title does NOT catch my interest. Here is where some brainstorming is needed. What about "Scientific Media", "Online Media directed to scientific goals", or "Networking among scientists using Web-based Media"? I would gladly help with this.

The terminology is important. In my proposal we call a scientifically-oriented Blog a WebCompendium (which we hope academics might want to author or contribute to). We say that a WebCompendium automatically creates a "community of like-minded scholars from which collaborations and job-offers might arise". The "collaborations" are of interest to established researchers, and "job-offers" are of interest to those in training. The "like-minded scholars" term was the best that we thought up, but by brainstorming with you, we might find an even better description. Alas, little details like this can be important when trying to catch the "attention" of those that would benefit!!

Another question arises as to whether one can DIRECTLY use Media AS-IS, or whether there needs to be an adaptation of a given OpenSource Media to make it more easily usable, or in some other way fit the needs of specific scholars. Indeed, perhaps some of your early Webinars, Seminars, etc. might essentially be "Focus Groups" from which you could ascertain what keeps scholars from using Media. Then, the subsequent presentations could be based upon recruiting scholars to use these modified forms of Media. I'm sure that I would be turned off by being told about Media that has flaws when used by scholars/academics. I'm paranoid enough about this that I'm trying to find the flaws in WebCompendia (using my proposal) BEFORE releasing the software.

Finally, I don't think you have addressed the issue of the AGE of the Scientists that you are trying to influence. Those who are young enough to have never lived in a world without an Internet/Web may have different views about Media, as compared with Emeriti that still have trouble with a Browser. So, some part of your studies might address this directly. Again, Focus Groups of different ages might be enough.

This project melds nicely with my project, and I hope we can collaborate in both projects. A win-win, clearly.

Thanks for your detailed feedback, Don. You've made some good points, and I'll incorporate them into the proposal.

I've free-associated about my reaction to the term "social networking".  "Networking" is good, and essential for every scientist.  But "social" brings to mind Chairpersons, Deans, and Chancellors who booze-up the rich for BIG money to support good sounding phrases, but not detailed research.  Clearly I am VERY biased!!  But those are my associations. 

Great idea, John! There is definitely a need out there for this type of resource, and one that I anticipate will get a lot of interest not just from faculty but staff as well. I particularly like the "Starter Kit" idea. Providing tools, templates, and step-by-step guidelines is always helpful for teaching to the masses. You've highlighted some great topic areas. You might want to consider adding an area (at least for researchers/investigators) that discusses how to use social media for clinical research studies. We get asked that question a lot – what can/can't we do or say on social media, and where does the line stand between being informational and creeping into patient-recruitment space? Many researchers get concerned about patient privacy, etc., and are hesitant to use social media as a tool to promote research studies. Currently there are no "official" FDA guidelines on social media, but there are tips and things to be aware of when considering using social media in that way.

Tracy, great point, and your understanding of using social media for clinical studies will be very helpful in this effort. I've added this as an area of focus for the proposal.

Commenting is closed.

Publicly Searchable Database of Recruiting Studies

Proposal Status: 


1.    Rationale

There are multiple sources for clinical trial listings at UCSF that are available to the general public via the web. These sources draw their information from the national ClinicalTrials.gov registration system that is required prior to recruitment into an interventional drug, biologic, or device clinical trial. While this is a trusted source of clinical trials information, there is significant opportunity for improvement in how potential research participants locate, search for, and receive accurate, current information that will lead to informed interest and participation in research studies at UCSF. When searches for UCSF clinical studies are limited to trials registered with ClinicalTrials.gov, the major deficiencies include:

  • Registration is only required for interventional drug, biologic or device clinical trials – representing a limited proportion of all UCSF clinical research studies
  • Post-registration updates to recruitment status and contact information are not required or enforced, and listings are therefore often outdated
  • Summaries presented to the general public are derived directly from the registration form and are not written in lay terms

We feel that by creating a UCSF-centric listing with the public's interest in mind, we can not only provide a valuable service to our patients and potential research participants, but also facilitate increased interest and enrollment in UCSF clinical research studies.

 

2.    Plan

We propose to create a searchable database of recruiting studies at UCSF in a simple web-based format that is uniform and appropriately written for the general public, while leveraging existing and planned participant recruitment resources. We will focus on the following approach in an effort to improve accessibility to information on recruiting clinical studies at UCSF:

  • Create a basic website with educational information on clinical research participation and a current listing of all recruiting UCSF studies, with Google Site Search functionality
  • Develop a structured template for describing clinical studies at UCSF in lay language, and provide examples and best practices to researchers on how to describe their study (a rough sketch of such a template follows this list)
  • Obtain monthly reports from iMedRis of studies approved for participant recruitment; provide investigators with the template and notification of listing opportunity
  • Upon receipt of completed study description, provide editorial by medical writer for approval by investigator
  • Post study to website for search by the general public, including contact information for study staff
  • Provide monthly updates to website listings and contact information for study staff
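As a rough illustration of the structured lay-language template mentioned in the plan above, the sketch below models a study listing as a small Python data structure with a completeness check. All field names are hypothetical placeholders for illustration, not an approved UCSF template.

```python
from dataclasses import dataclass

# Hypothetical lay-language listing template; the fields are
# illustrative placeholders, not an approved UCSF format.
@dataclass
class StudyListing:
    irb_number: str        # iMedRIS study number
    lay_title: str         # plain-language title, no jargon
    purpose: str           # "What is this study trying to find out?"
    who_can_join: str      # eligibility criteria in lay terms
    what_is_involved: str  # visits, procedures, time commitment
    contact_name: str
    contact_email: str
    recruiting: bool = True

    def missing_fields(self) -> list[str]:
        """Return the names of required fields left blank."""
        required = ("lay_title", "purpose", "who_can_join",
                    "what_is_involved", "contact_email")
        return [f for f in required if not getattr(self, f).strip()]

listing = StudyListing(
    irb_number="12-3456",
    lay_title="Testing a walking program for adults with knee pain",
    purpose="To learn whether a guided walking program reduces knee pain.",
    who_can_join="Adults aged 40-75 who have knee pain on most days.",
    what_is_involved="Three clinic visits and a daily step log for 12 weeks.",
    contact_name="Study Coordinator",
    contact_email="study@example.ucsf.edu",
)
assert listing.missing_fields() == []  # listing is complete and postable
```

A medical writer could edit the free-text fields before a listing is posted, matching the editorial step described in the plan.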

3.    Criteria and Metrics for Success

Metrics include the number of studies engaged and posted to the website (interventional clinical trials vs. other study types), the number of searches per month, and, at quarterly time points, the number of posted studies that accurately reflect currently enrolling studies at UCSF per iMedRIS, as compared to ClinicalTrials.gov listings.
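The accuracy metric above could be computed quarterly with a short script along these lines. This is a minimal sketch: the example study numbers are invented, and it assumes sets of study IDs can be exported from the website listings and the monthly iMedRIS report.

```python
# Quarterly accuracy check: what fraction of posted listings correspond
# to a study that iMedRIS currently shows as approved for recruitment?
# The ID sets below are invented stand-ins for real exports.

def listing_accuracy(website_ids: set[str], imedris_ids: set[str]) -> float:
    """Fraction of posted listings matching a currently recruiting study."""
    if not website_ids:
        return 0.0
    return len(website_ids & imedris_ids) / len(website_ids)

website_ids = {"12-3456", "12-9999", "13-0001"}  # posted on the site
imedris_ids = {"12-3456", "13-0001", "13-0042"}  # recruiting per iMedRIS
print(f"{listing_accuracy(website_ids, imedris_ids):.0%} accurate")  # 67%
```

The same comparison could be run against IDs pulled from ClinicalTrials.gov listings to quantify the improvement over that source.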

 

4.    Approximate Cost and Justification

$44,000 for costs of web programming, search capabilities, a 30% FTE medical writer, and a 10% FTE technical coordinator.

 

5.    Collaborators

iMedRIS administrators, ISU

 

Comments

This is a great idea! The key to this is making the data easily discoverable and reusable.

Questions:

  • I presume this will apply only to current studies. How long will this level of funding allow this project to run?
  • How do you plan to make your enriched data reusable by others at UCSF and outside? Are there standard data formats or ontologies being used to richly describe studies, which we can piggyback on top of? Is there at least one downstream data user you can be working with?
  • How will the public find this website? How will it relate to other recruiting-related websites at UCSF?
  • What levels of utilization would you need for this project to be cost-effective?

(In terms of data reusability, there are a variety of groups around campus using iMedRIS data, or who could benefit from it. I presume you've already talked to Fabrice. You may also check in with Oksana and the GHS team, who are enriching/correcting iMedRIS data to meet the needs of global health researchers, and I believe CFAR and the Cancer Center may be doing the same with either iMedRIS or ClinicalTrials.gov data. Paul Volberding was brainstorming about the idea of having a common system for multiple groups to enrich iMedRIS data, instead of everyone doing it by themselves in little non-interoperable silos. For example, a number of groups could conceptually work together to crowdsource MeSH-tagging all UCSF studies.)

Thanks for your thoughtful feedback, Anirvan. This would initially apply to current studies but would be updated monthly to ensure accuracy. This funding would allow the project to run for one year. Standard ontologies should definitely be used to the extent that they exist in the vernacular of the layperson seeking information on research studies.

 

Our intent is to marry this information on the web with the UCSF Research Participant Registry. Leveraging current methods used to increase traffic to the Registry, the CTSI websites, and the Medical Center would be optimal. I know that many other conversations within and external to CTSI are happening around the capture and use of iMedRIS data, and I would definitely hope to work together on common needs so that we can move away from silos, which is a goal of this project as well.

Linking with iMedRIS data seems to be the key to collecting the most accurate data on actively recruiting studies at UCSF. I think this project would prove to be a great resource for the 'Create Personalized Connections to Research Interests via the UCSF Research Participant Registry' proposal, especially if the plan is to house this database of recruiting studies on the Registry website. Linking the study status data to investigators' UCSF Profiles pages will only help visibility and searchability, since Profiles is so well established on the web. Writing the study summaries in lay terms gives it a leg up on the ClinicalTrials.gov data pull.

Nariman,

This comment is on the first phase of your plan - creating a dedicated web resource to list all current clinical studies at UCSF, based on the iMedRIS information. Could accelerate.ucsf.edu serve as the "home" for this? It is an established website with a long history on the web. Being well known to search engines, it would have a certain advantage in Google search ranking (much better discoverability from the beginning) compared to a brand new website. Besides, this service seems to fit perfectly the purpose of accelerate.ucsf.edu as a point of access to CTSI services for researchers and the community. Adding a section to the existing website would be more cost-effective than creating a new (even basic) separate website.

 

Our plan was to house this along with the UCSF Research Participant Registry, which will include expanded educational information on research participation, and is a web resource dedicated entirely to the general public interested in joining research studies. However, you do raise an excellent point about the advantages of the Accelerate website, and I think it's worth exploring how we can better connect the two to increase accessibility. I will add the Virtual Home team to the list of collaborators.

"Post-registration updates to recruitment status and contact information not required or enforced and are therefore often outdated"

Enforcement has started and can result in fines of up to $10,000 per day.

 

It seems that, rather than devising a system competing with ClinicalTrials.gov, having one that can leverage and work with ClinicalTrials.gov would be synergistic, rather than creating extra work for PIs of trials required to report to ClinicalTrials.gov.

Thanks for the information, Stuart. I was not aware that ClinicalTrials.gov is now fining investigators for not updating their recruitment status. Can you provide any information or links to the policy regarding how long after the last participant is enrolled investigators must make the update? Thank you.

Commenting is closed.

Sharing Success - Making Open Proposals Self-Serve and Open Source

Proposal Status: 

Abstract. Develop an open source edition of the UCSF Open Proposals software suitable for deployment at external institutions. This will make UCSF's version easier to host and manage, enable other CTSAs to benefit from UCSF's work, and help establish UCSF's thought leadership in the field.

Rationale. In July 2012, CTSI launched UCSF Open Proposals, a web application enabling an open and collaborative community-based proposal submission process. The tool uses a crowdsourcing model to help researchers get valuable input on their proposal before submitting it to the review committee. It helps projects find collaborators, contributors, and advisers, and makes it easy for those with relevant expertise to offer feedback. Finally, it provides a stable platform for ideas even after the opportunity is closed, making it possible for the research community to link to, discuss, and build on previous ideas.

We see four fundamental technical limitations with the current Open Proposals system:

  1. The current version of the Open Proposals application was built in a way that’s very tightly integrated with accelerate.ucsf.edu, CTSI’s services portal. While this allowed us to launch faster, it prevents a standalone deployment, and makes upgrades more difficult.
  2. The design of the system requires the services of a professional web developer to create and configure even the simplest open proposals. This prevents CTSI from making open proposal deployment a turnkey process. At the same time, the system is mature enough at this point that each open proposal forum can be customized to fit initiative-specific needs at relatively low cost (though it does require professional resources).
  3. The system is designed in a way where UCSF-specific authentication and identity data access mechanisms are baked directly in, instead of via an open pluggable process. This means that even if the software were to be shared with an outside partner, they would not be able to deploy it at their institution without removing and rewriting UCSF-specific code.
  4. The software is not open source. This prevents UCSF from sharing its work with outside partners, and makes it more difficult to discuss it in venues like the CTSA Toolshop. (At least one major CTSA partner has approached UCSF, wanting to know about the Open Proposals process and software.)

We propose developing a fully standalone open source version of the software, with a flexible authentication and identity backend which can integrate with solutions already in use at UCSF and other institutions. This will allow us to improve usability, maximize national impact, and pay off technical debt.

The solution would fully address limitations 1, 3, and 4, and will be a stepping stone on the way to the turnkey deployment described in #2.
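Open Proposals is built as a set of Drupal (PHP) modules, so the real pluggable backend would be a Drupal authentication module. The Python sketch below is only meant to illustrate the general shape of the proposed flexible authentication and identity backend; every class and method name here is invented for the example.

```python
from abc import ABC, abstractmethod
from typing import Optional

# Illustrative only: the real system would be Drupal (PHP) modules.
# The point is that institution-specific logic lives behind a generic
# interface, so a partner can swap in its own provider (LDAP,
# Shibboleth, etc.) without touching core application code.

class IdentityProvider(ABC):
    @abstractmethod
    def authenticate(self, username: str, password: str) -> Optional[str]:
        """Return a stable user ID on success, or None on failure."""

    @abstractmethod
    def display_name(self, user_id: str) -> str:
        """Return the user's display name for proposal bylines."""

class UcsfProvider(IdentityProvider):
    """Hypothetical wrapper around UCSF's existing directory/SSO."""
    def authenticate(self, username, password):
        ...  # delegate to campus single sign-on
    def display_name(self, user_id):
        ...  # look up the name in the campus directory

class LdapProvider(IdentityProvider):
    """What an outside partner might plug in instead."""
    def __init__(self, server_url: str):
        self.server_url = server_url
    def authenticate(self, username, password):
        ...  # bind against the partner's LDAP server
    def display_name(self, user_id):
        ...
```

Because the application core would depend only on the abstract interface, removing UCSF-specific code would no longer require rewriting the application, addressing limitation 3 directly.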

Plan
In order to convert UCSF Open Proposals to a portable open-source product we need to make the following changes to the application:

  1. Allow the system to stand alone by removing dependencies on accelerate.ucsf.edu
  2. Repackage the application as a distributable set of Drupal modules
  3. Implement support for any common network identity provider, enabling single sign-on with the customer's organization network
  4. Obtain approval from the Office of Technology Management to offer the product under a suitable open source license
  5. Enable external code contributions by hosting the code on an open source version management platform like GitHub or Bitbucket
  6. Ensure discoverability by listing the product and supporting user documentation on relevant repositories, such as Drupal.org Projects


In parallel, we will be working with several research institutions to find at least one external partner interested in pioneering Open Proposals at their organization, and will help initiate a pilot Open Proposals opportunity at their institution.

Criteria and metrics for success
This project will be a success if:

  1. An easy-to-deploy, standalone open source application is made available to the public via established open source distribution portals
  2. UCSF is able to use the open source application to host its own Open Proposals instance
  3. The application is easy to manage: an application administrator can configure and activate a standard open proposal forum with a reasonably low amount of professional assistance from a web developer
  4. An interested partner is found who is willing to pilot an open proposals opportunity at their organization
  5. The partner has installed the Open Proposals product and successfully configured a test forum
  6. We receive satisfactory feedback on the product's usability and performance from the partner's implementation team


Approximate cost and very brief justification
Estimated cost of development: $18,000. This includes the anticipated development effort and about 10% contingency. There may be an additional cost for managing communications with the partner (TBD).

Collaborators
Cynthia Piontkowski – Web Producer, User Experience Designer, front end developer for UCSF Open Proposals
Brad Bulger – Drupal and Web Development Expert, Solution Architect, back end developer for UCSF Open Proposals
Anirvan Chatterjee – Solution Architect, Open Source Technology Expert

John Daigre - Communications Director
Oksana Gologorskaya – Product/Project manager, User Experience Designer

 

Comments

This proposal is very cogent, considered, and appropriate, with potential for greatly improving the tool's hosting and use among other CTSAs and within the broader scientific community.

The "Plan" to implement the proposal is very clearly defined, with specific, seemingly feasible actions needed, at a minimal expense. The "Rationale" presents a robust case for pursuing them. I especially appreciate that "Criteria and metrics for success" include satisfactory end-user feedback from the future partner's implementation team. Partner implementation would be the crucial outcome, so attention to this feedback would benefit future iterations and amplifications.

It's clear that the proposal's authors have very thoroughly analyzed Open Proposals' current requirements and limitations, and future potential. They have produced a very appropriate proposal that would, if implemented, more closely align the tool with the developers' mission: to "enable transparent and collaborative proposal development [and] promote multi-disciplinary team building." I hope this award's reviewers will consider the potential impact this proposal has for optimizing this type of collaboration within the broader community.

 

The Open Proposal platform is an innovative tool that represents an important shift in the paradigm of research funding practices. Early success has demonstrated proof-of-concept, and in order to move the concept forward, there is a clear need to expand access by making the software Open Source. The proposal mentions working with external partners, and this aspect is key. If motivated partners could be identified, perhaps in-kind development support could be provided, distributing the burden of resources and making the decision to support this proposal an obvious YES! 

I support this proposal and would like to collaborate to help with partnership building and outreach to other institutions. As part of the 60-member Clinical and Translational Science Award (CTSA) consortium that includes the nation's leading academic medical institutions, CTSI is well-positioned to help spread the word about this innovative tool. As Communications Director at CTSI, I'm part of a national network of CTSA communicators, some of whom are already aware of this product and have expressed interest in using it if it becomes available. Additionally, the open-source aspect of this project exemplifies the intent of the CTSA consortium as a venue for sharing solutions, innovative ideas and best practices.

John, thank you for offering your help! We are happy to have you on the team for this project.

Communicating effectively with the partner institution would be crucial for us to both help the partner succeed with their pilot initiative, and for us to learn as much as possible from their experience with it.

Commenting is closed.

Create Personalized Connections to Research Interests via the UCSF Research Participant Registry

Proposal Status: 

Rationale:

The UCSF Research Participant Registry (Registry) collects self-reported health information from volunteers and queries it against study eligibility criteria for those investigators using the Participant Recruitment Service to enroll participants in their research studies via the Registry. This leaves a large gap of research studies that are not made readily available to potential research participants through this resource. For instance, a person may not have a specific condition for which they would appear in study-specific eligibility queries, but may be interested in a particular area, such as weight loss or nutrition studies. Simply asking Registry volunteers which therapeutic areas or study types they are interested in doesn't go far enough to close the gap between willing volunteers and research of interest that could otherwise stimulate continued or expanded research participation.

 

Combining the resources of the Registry with the information on research at UCSF available in Profiles and the connectivity possible through outreach, we feel we can close this gap. By providing Registry volunteers with a biannual feedback survey that specifically asks them which therapeutic areas or study types they have an interest in, we can then search UCSF Profiles and contact investigators to let them know of the willing pool of potential research participants whom they could then contact.

 

Plan:

We propose to develop brief web-based surveys, administered biannually, to collect feedback from current Registry participants about their experience with the Registry, inquiring specifically as to which therapeutic areas or types of research they are interested in. We will:

  • Identify investigators and researchers via UCSF Profiles who are conducting research in the therapeutic areas identified in the survey.
  • Use the data collected from UCSF Profiles to contact researchers, notifying them of potential research participants for their studies.
  • Have the PRS outreach coordinator liaise between investigators and participants to provide excellent customer service and general information on research participation.
  • Utilize and integrate the UCSF Profiles API (a rough sketch of this step follows the list).
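To make the Profiles-based matching step concrete, here is a minimal sketch, assuming survey responses arrive as participant-to-topic lists and that some Profiles lookup (invented here as find_researchers_by_topic) returns investigators working in a given area.

```python
from collections import defaultdict

def match_interests(survey_responses, find_researchers_by_topic):
    """Return {investigator: set of interested participant IDs}."""
    referrals = defaultdict(set)
    for participant_id, areas in survey_responses.items():
        for area in areas:
            for researcher in find_researchers_by_topic(area):
                referrals[researcher].add(participant_id)
    return referrals

# Toy data standing in for survey results and a Profiles lookup:
responses = {"P001": ["Nutrition", "Weight Loss"], "P002": ["Nutrition"]}
topics = {"Nutrition": ["Dr. A"], "Weight Loss": ["Dr. A", "Dr. B"]}
print(dict(match_interests(responses, lambda t: topics.get(t, []))))
# e.g. {'Dr. A': {'P001', 'P002'}, 'Dr. B': {'P001'}}
```

The outreach coordinator would then use the resulting referral lists to notify investigators, rather than contacting participants directly.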

Criteria and metrics for success:

  • # Registry participants that respond to feedback survey
  • # different therapeutic areas identified as compared to those using the Registry
  • # connections/referrals made between Registry participants and researchers
  • Inter-departmental collaborations and satisfaction survey among UCSF researchers

 

Proposed Budget:

$25,000 includes costs for coordination, survey development and implementation, submission to CHR, collecting survey responses and reporting, querying and reporting database information from UCSF Profiles and the UCSF Research Participant Registry, integrating outreach and communication efforts with UCSF investigators, and connecting potential participants directly with investigators.

 

Potential Collaborators:

Participant Recruitment Service

CTSI Virtual Home (UCSF Profiles)

UCSF HRPP (iMedRIS), as suggested by comments

Comments

This would be a great use of UCSF Profiles APIs to support the development of research studies. Given a MeSH term, the Profiles software makes it easy to get information about top UCSF researchers associated with that topic. If this is an ongoing process, it would be simple to turn that into a script or a tool that can plug into your workflow.
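A script of the kind described above might look like the following sketch. The endpoint URL, parameter names, and response shape are hypothetical placeholders; the real UCSF Profiles API documentation would supply the actual interface.

```python
import json
import urllib.parse
import urllib.request

# Placeholder endpoint; substitute the real Profiles API URL.
PROFILES_API = "https://profiles.example.ucsf.edu/api/search"

def top_researchers(mesh_term: str, limit: int = 10) -> list:
    """Return researchers associated with a MeSH term (assumed schema)."""
    query = urllib.parse.urlencode({"mesh": mesh_term, "limit": limit})
    with urllib.request.urlopen(f"{PROFILES_API}?{query}") as resp:
        return json.load(resp)["researchers"]  # assumed response shape

for person in top_researchers("Weight Loss"):
    print(person["name"], person["email"])  # assumed record fields
```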

Awesome idea, Anirvan! That should work out really well. 

Yes, we always hope that we can somehow link everything related to recruitment of studies back to studies listed in Profiles, since Profiles is a well-established system and easily searchable.

It seems like this solution could provide more targeted referrals if/when UCSF Profiles has study data linked to researchers. Would it make sense to reach out to iMedRIS about potential collaboration?

Thanks for the suggestions, Brian & Anirvan. I agree, linking with iMedRIS would provide the most accurate info in terms of current research studies, and Profiles provides easy, searchable contact info for the investigator. Does UCSF Profiles have any plans to pull that data from iMedRIS? I think adding investigators' current research protocols would be a nice addition to Profiles (maybe on one of the new tabs), not only for the general public looking for research studies to join but possibly for fellow researchers looking for collaboration opportunities. Profiles just offers such a nice, easy platform that compiles all relevant information, including contact info. Potentially a 3-way collaboration?? I guess in the meantime, partnering with iMedRIS might be a good interim solution. If the "Publicly Searchable Database of Recruiting Studies" proposal gets support, we'd definitely be able to link volunteers through that database as well.

The short-term solution would be to receive reports from iMedRIS, since this approach does not rely on short-term time-sensitivity.

I worry a bit that you might wind up referring people in the Registry to PIs at UCSF whose area of research might be of interest, but who don't have on-going studies that are enrolling participants. This could lead to a "dead end" and some frustration for Registry participants?

Hi Deb, thanks for your comments. You bring up a good point. We are definitely trying to alleviate that frustration and that's where the outreach coordinator role would be key in doing the navigation for the participants. The outreach coordinator would act as the liaison or navigator, to do the "heavy lifting" so to speak, between the Registry participant and PI/study coordinator by contacting that study team on behalf of the participant to confirm they are still enrolling patients, basic eligibility criteria, general study info, etc. The outreach coordinator could easily query the Registry participants self-reported health info (more or less pre-screening) to see if the patient might be eligible for the given study, and then circle back to the patient to let them know their options. It would be an added level of customer service that I think both the Registry participants and PIs would appreciate.

But you’re right, the key to efficiency and streamlining this process would be easy access to a searchable database of current UCSF active/recruiting clinical trials, and linking it to easy to find PI contact information (which Profiles provides).

 

Profiles is a great collaborative opportunity to leverage a successful and up-to-date data system that is accessed by the general public. Providing the customer service through an outreach coordinator will be key to bridging the gap.

It seems like interfacing with the ClinicalTrials.gov website data to see the Recruitment Status for studies would be an important step, as studies may be "Not yet recruiting", "Recruiting", "Active, not recruiting", "Completed", or "Unknown". In addition, the Inclusion and Exclusion Criteria can be obtained.
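As a rough illustration of such an interface (this uses the current ClinicalTrials.gov REST API, which postdates this proposal; treat the endpoint and field paths as assumptions to verify against the API documentation), a recruitment-status lookup for a single study might look like:

```python
import json
import urllib.request

def recruitment_status(nct_id: str) -> str:
    """Fetch a study's overall recruitment status from ClinicalTrials.gov."""
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    with urllib.request.urlopen(url) as resp:
        study = json.load(resp)
    # Field path per the v2 API's JSON layout (verify against docs).
    return study["protocolSection"]["statusModule"]["overallStatus"]

print(recruitment_status("NCT00000102"))  # e.g. "COMPLETED"
```

Inclusion and exclusion criteria are available in the same study record, so a similar lookup could pre-filter Registry volunteers before any referral is made.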

Commenting is closed.