2013 CTSI Annual Pilot Awards to Improve the Conduct of Research

To facilitate the development, conduct or analysis of clinical & translational research

Advocacy Impact on Research Agendas

RATIONALE: Recognizing that the benefits of the current revolution in biology and oncology are enhanced by vigorous public support, the UCSF Breast Oncology Program (BOP) has spent the past twenty years implementing comprehensive strategies to leverage advocacy engagement. To help transform the conduct of clinical and translational research, advocates support a wide range of NCI-sponsored Specialized Programs of Research Excellence (SPORE) and Physical Sciences-Oncology Center (PS-OC) Cancer Center projects, as well as multi-site grants sponsored by the Department of Defense (DOD), Komen, the California Breast Cancer Research Program (CBCRP), Stand Up to Cancer (SU2C), and the Translational Breast Cancer Research Consortium (TBCRC). Applying core principles that align with the NCI Advocacy Research Working Group recommendations (strategic innovation, collaborative execution, evidence-based decision-making, and ethical codes of conduct), researchers and advocates interact across a broad spectrum of partnership modes at varying levels of intensity. Participating in four areas, 1) research and programmatic support, 2) education and outreach, 3) policy and strategy, and 4) representation and advisory roles, UCSF breast cancer advocates are at the forefront of efforts to revamp translational research processes. For example, advocates involved in grants meet on an ad hoc basis with the study investigator and team. Infusing the lay patient perspective into discussions, trained research advocates vet hypotheses, define strategic priorities, address research challenges, and incorporate innovative science and study design into research projects and clinical trials. Importantly, by promoting cross-sector and trans-network collaboration, advocates catalyze change in research practices by identifying systematic barriers to research efficacy, effectiveness, and expediency.

CHALLENGE: As champions of precision medicine, evidence-based practices, Bayesian statistics, adaptive trial design, biospecimen standardization, biomarker validation, improved reproducibility of test results, and informed consent reform, advocates are driving change in research practices as well as in FDA initiatives leading to new regulatory and interpretative mechanisms. Moreover, having forged credibility, advocates now have a prominent presence as authors of scientific papers, presenters at scientific conferences, and voting members on scientific advisory boards, cooperative groups, data safety monitoring boards, planning committees, protocol and peer review committees, and informed consent working groups. Yet, despite the growing momentum of advocacy in the support and reform of clinical and translational research, collateral challenges remain. Because advocacy efforts often aim for outcomes that are hard to measure, capture, and operationalize, unique quantitative and qualitative approaches are needed to systematically assess the impact of advocacy messages and activities. Lessons learned from successes and failures will allow us to continuously optimize efforts and promote best practices to other UCSF advocacy groups.

PLAN & METRICS FOR SUCCESS: Three innovative methods are being developed specifically to respond to advocacy’s unique measurement challenges:

1.  A static table will be used to create an interactive model for measuring progress in real time.

2.  Stakeholder surveys that gather advocacy stakeholders' perspectives and feedback will be used to assess the extent to which researchers support advocacy involvement and whether that support is changing over time.

3.  Logic models and systems mapping approaches will be developed by a committee of researchers and advocates to monitor performance, address challenges, and identify “best practices” and key entities to guide infrastructure improvements.

By arming stakeholders with the tools to make advocacy activities more transparent, recommendations more targeted, and outcomes more impactful, the project will enable participants to formally assess the extent to which advocacy processes are improving the conduct of clinical and translational research.

JUSTIFICATION: Collaborative team science provides a starting point for comprehensive change, and these advocacy activities are increasingly viewed as an example of how research advocacy can help spur medical innovation, democratize science, and realize the potential of future investments in bioscience. To sharpen and shape the vision for capacity building and for bi-directional mentoring and trans-network opportunities, advocates and other stakeholders will participate in two workshops specializing in evaluation and strategy development for advocacy and policy change efforts.

COST: A total budget of $50K is requested: project coordinator, $24K; advocacy training (Project LEAD), $5K; workshops (two UCSF workshops on advocacy impact metrics), $12K; infrastructure (training materials and modules for UCSF outreach), $9K.

COLLABORATORS: Susan Samson, Linda Vincent, Susie Brain, BOP science advocates, Hope Rugo (clinical lead, PS-OC), Sarah Goins (coordinator, BOP), Laura van ’t Veer (leader, BOP), Susanne Hildebrand-Zanki (Associate Vice Chancellor, Research), Elizabeth Boyd (Associate Vice Chancellor, Ethics and Compliance)

Comments

It's exciting to see this process taking shape.  I believe the proposal would be strengthened by a clear goal statement up front, such as: to elucidate what is gained and lost through advocate involvement in the development and conduct of research, and in the grant application review process. 

 

Development of a logic model would be appropriate as a first aim under that goal.  This process can be informed by open-ended interviews with researchers, policy-makers, and advocates (conducted by an evaluator or researcher with no vested interest).  A subsequent web-based survey might be developed based on the concepts identified in the interviews for measurement on a large scale.  It might also be important to aim to identify optimal characteristics of advocates associated with each process (e.g., training, personal experience, skills, etc.).

 

It's not clear what the static table is.  Suggest either providing more detail or deleting.    

As indicated in the proposal, we need to acknowledge that advocacy programs and processes are hard to measure and operationalize. Once we have done so, we need to recognize that a comprehensive approach is required to elucidate what is gained and lost through advocate involvement. Without question, as suggested, the proposal will be strengthened by organizing progress and ongoing activities around specific goals and aims. The major goal and associated aims are summarized below:

 

GOAL: Incorporate innovative evaluation methods for assessing advocacy involvements and policy change efforts in the development and conduct of research, in the grant review process, and in programmatic activities.

 

Aim 1: Assemble a taskforce of researchers and advocates to develop a logic model as a valuable tool for program planning, development, and evaluation. The logic model connects the dots between resources, activities, outputs, and impacts.

Aim 2: Develop and implement stakeholder surveys or interviews that gather both advocate and researcher perspectives and feedback.

Aim 3: Obtain case studies offering detailed descriptions, often qualitative, of individual advocacy training, prior experience, skills, strategies, efforts, and results.

Aim 4: Develop and implement a subsequent web-based survey based on the concepts gleaned from the interviews. Advocates rate researchers, and researchers rate advocates, on scales that assess support for, and influence on, the issue (a brief sketch of how such ratings might be aggregated follows this list).

Aim 5: Ensure that stakeholders have adequate training and mentoring in evaluation design.
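
As a point of reference for Aim 4, the sketch below illustrates, under assumed conditions (a hypothetical 1-5 support scale, invented survey waves, and made-up responses; none of these details come from the proposal), one way the bidirectional ratings could be summarized by rater group and survey wave to track whether support is changing over time.

    # A minimal sketch, assuming a hypothetical 1-5 support scale and invented
    # responses (not project data), of aggregating bidirectional survey ratings.
    from collections import defaultdict
    from statistics import mean

    responses = [
        {"rater": "advocate", "wave": "year 1", "support": 4},
        {"rater": "researcher", "wave": "year 1", "support": 3},
        {"rater": "advocate", "wave": "year 2", "support": 5},
        {"rater": "researcher", "wave": "year 2", "support": 4},
    ]

    # Group scores by (rater group, survey wave) and report the mean, so
    # changes in support can be followed over time.
    scores = defaultdict(list)
    for r in responses:
        scores[(r["rater"], r["wave"])].append(r["support"])

    for (rater, wave), values in sorted(scores.items()):
        print(f"{wave}, {rater} ratings: mean support {mean(values):.1f}")

In practice, the survey items would be drawn from the concepts identified in the interviews, and scores could be broken out by issue area.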

 

We wish to clarify that the "static table" is a grid that highlights specific activities and outputs: e.g., meetings and workshops attended, coalitions formed, presentations made, etc. This does not address quality; the quality of outputs is assessed in Aims 1-5.
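
To make the grid concrete, here is a minimal sketch using hypothetical categories and entries rather than actual BOP records; as noted above, it captures the volume of activity, not its quality.

    # A minimal sketch of the activity-and-output grid ("static table"),
    # using hypothetical entries rather than actual BOP records.
    from collections import Counter

    activity_log = [
        {"category": "meeting attended", "date": "2013-02-01"},
        {"category": "presentation made", "date": "2013-02-15"},
        {"category": "coalition formed", "date": "2013-03-01"},
        {"category": "meeting attended", "date": "2013-03-10"},
    ]

    # Tally outputs by category; refreshing this tally as entries are logged
    # is what would turn the static table into an interactive, real-time view.
    counts = Counter(entry["category"] for entry in activity_log)
    for category, n in sorted(counts.items()):
        print(f"{category}: {n}")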

 

Importantly, we wish to emphasize that our program for integrating advocates in research is based on the premise that scientific progress requires multidisciplinary expertise and a new collaborative team science paradigm. All of us, researchers, clinicians, and advocates alike, are responsible for promoting bi-directional science-advocacy exchange. Without this shift, an understanding of how advocates catalyze change in research practices is improbable.

See the comment by Susan Samson, one of the Bay Area Breast Cancer Science Advocates and a co-writer of the proposal, for the reply.

Developing a system to evaluate and better understand the important impact of advocates in the translational sciences is of high importance. I suggest incorporating into this project, as an additional long-term aim/benefit, that the lessons learned will enhance the ability to recruit new advocates for research projects.

Patient voices are becoming increasingly important in driving the research agenda. In particular, in the era of precision medicine, where patient-tailored screening and treatment management is advocated, only the consumers can indicate the priorities. To ensure a broad representation of opinions, one of the goals of this project is to engage more science-interested advocates.

I agree that it is important to develop quantifiable measures of having advocates involved as basic science transitions into individualized therapy. The proposal is a good beginning at developing ways of evaluating the impact of advocates. To develop such a model, it is essential to understand the role of the advocates. It is also important to recognize that the information flow must be bidirectional: the advocates must be educated to understand the nature of science and its limitations, for example, what statistics really mean. The proposal needs more specifics on how feedback from such evaluations will be used to make the integration of advocates into the research process more effective.

 

As the proposal stands, it has several strengths in that it proposes innovative methods. Building this new type of collaborative team science has the potential to create more effective treatments as well as greater participation of patients in the development of the model.

 

It also has some weaknesses. Overall, it is not clear over what time frame this evaluation will take place. It should take place over several years, so that improvements can be phased in. It needs some plans for how to involve the advocacy community even more effectively.

 

The language describing the proposed evaluation methods is not very accessible to non-experts. For example, in the first goal, I don’t understand what a static table is and how it contributes to the evaluation of progress. Similarly, in the third goal, what are “logic models and systems mapping approaches”? Will there be training of the advocates and scientists/clinicians involved in the evaluation methodology?

Hi Zena,

Thank you for acknowledging the diversity of needs, concerns, and values of advocates who participate in the BOP. And thank you for recognizing how the impact of advocacy on research agendas has been difficult to capture and measure.

Challenges notwithstanding, we argue for the importance of developing systematic approaches for gathering qualitative and quantitative information that can be used to consider the contributions and implications of advocacy in basic science and translational breast cancer research settings. Innovative methods are being developed specifically for assessing advocacy influences and policy change efforts in the support and reform of team science collaboration as well as research processes and methodologies.

Concretely, feedback from the evaluations will capture the ways advocates focus on consent practice improvements, clinical trial transparency, patient safety, biomarker validation, tissue acquisition, data reproducibility, data dissemination, and survivorship issues, among others. Evaluation approaches will determine how advocates trained in the basics of science and statistics are bringing about changes in epistemic practices as well as in the design and conduct of research. For example, as powerful non-traditional allies who focus on the values of patient empowerment, their responses will indicate how they address hurdles and bottlenecks in research processes, help establish formal mechanisms for moving highly technical evidence-based agendas into clinical practice, and thereby advance their own strategic goals within science. And yes, feedback from the evaluations will also offer insights into some of the tensions and limitations in the process, e.g., gaps in bidirectional mentoring and capacity building efforts.

We also wish to clarify that:

1) Logic models and systems mapping approaches are evolving evaluation methods that define how data can be collected and used to identify advocacy goals, rationales, assumptions, resources, outputs/activities, and long- and short-term outcomes. These methods are tools for assessing which strategies are working and what needs improvement. For example, systems mapping is a qualitative approach that entails visually mapping a system to identify the parts and relationships that are expected to change and how the advocacy effort is trying to achieve change, and then identifying ways of capturing whether those changes have occurred. (A brief illustrative sketch of a logic model structure follows this list.)

2) In the proposed workshops, there will be training of advocates, scientists, and clinicians on relevant, timely, and efficient evaluation methods.

3) The "static table" is a grid that highlights specific activities and outputs: e.g., meetings and workshops attended, coalitions formed, presentations made, etc. Please refer to the Samson reply of 3/15/13 for further thoughts on this issue.

4) We agree. The evaluation must take place over several years to determine whether a strategy is making progress or achieving its intended results.
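
As a companion to point 1 above, the sketch below shows one possible way, with illustrative entries only (none drawn from the project), to represent a logic model so that each element can later be paired with an indicator and a data source.

    # A minimal sketch of a logic model as a simple chain from resources
    # through activities and outputs to outcomes; all entries are illustrative.
    logic_model = {
        "resources": ["trained research advocates", "pilot award funds"],
        "activities": ["ad hoc study-team meetings", "protocol review", "workshops"],
        "outputs": ["consent language revisions", "advocate co-authored papers"],
        "outcomes": ["broader researcher support for advocacy involvement"],
    }

    # Walk the chain in order; in practice each item would be linked to a
    # measurable indicator, a data source, and a time frame.
    for stage in ("resources", "activities", "outputs", "outcomes"):
        print(stage.upper())
        for item in logic_model[stage]:
            print("  - " + item)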

 

 

-Susan Samson

Breast Oncology Program Science Advocate

Hi Laura,

This is an incredibly important piece of research, yet it needs to be implemented quite deliberately. Have you thought of creating a structure within your research plan to establish a "governed" feedback loop for community members' concerns/needs to be heard, addressed, and possibly implemented?

Hi Courtney,

 

Yes indeed, we thought about such a governed feedback system, and it will be included in our development phase. We have several patient advocates in the group who will work on this, all of whom are extensively trained on advocacy participation in research agendas. No one has really formalized feedback loops, and our intent is indeed to specifically provide that in this project.

 

Thanks for bringing it up,

 

Laura

Definitely a project that needs to be done, not only to demonstrate the great strides the UCSF breast cancer advocacy community has already made, but to aid in moving forward. 

It sounds like you are building the evaluation tools from scratch.  There are a number of advocacy evaluation tools/toolkits available that you may want to consider using, at least as a foundation to get things going.  The Robert Wood Johnson Foundation and Mathematica Policy Research put together an Advocacy Evaluation Toolkit that was used to evaluate an advocacy effort related to public policies and health insurance coverage.  It’s available on the RWJ website.  The Washington, D.C.-based Aspen Institute has an advocacy evaluation program and online tools, as does Innovation Network.

 

Hi Michael,

Great suggestions. We have already looked into resources/toolkits provided by Innovation Network, the Aspen Institute, as well as a plethora of material developed by the NIEHS and NIH on methods in advocacy evaluation. This proposal draws insights from these programs, and we are guided by the experience and mentorship of these thought leaders.

-Susan Samson

Breast Oncology Program Science Advocate

 
