2013 CTSI Annual Pilot Awards to Improve the Conduct of Research

To facilitate the development, conduct or analysis of clinical & translational research

MicroPub, A Publication Platform for Short Replication Studies

Proposal Status: 

1.    Rationale

Modern basic science rewards large papers in highly cited journals.  However, it is difficult for translational and clinical researchers to assess the quality of a basic science paper.  A common proxy is citation count, but it is a very inaccurate measure; some papers that cannot be replicated have hundreds of citations.  Given the lack of safeguards ensuring publication quality, the intense competition to produce high-profile publications incentivizes publication bias (i.e., the tendency of journals to publish experiments that confirm their original hypotheses, or "positive" results).  An obvious way to affirm or dispute the quality of a paper is through replication.  Unfortunately, replication studies are not currently valued by the basic science community, and they typically go unpublished.  A large amount of translational and clinical research likely yields negative results because it was based on invalid basic science premises.  Our solution is MicroPub, a platform for soliciting and indexing “micro publications” that replicate data from published basic science articles.


2.    Plan 

We will create a website that publishes and indexes short replication studies.  Each “MicroPub” will contain an abstract, one figure, detailed methodology, and a short discussion.  Editorial moderation, rather than peer review, will be used to evaluate the soundness of the research.  The short format will hasten the publication process.  Validation or refutation by independent investigators will serve as a high-quality measure of a finding’s reproducibility, and by extension, its validity.  This will be a critical resource for determining whether a basic science finding merits investigation at the translational/clinical level.  All papers will be Open Access.
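To make the fixed publication format concrete, the record described above (abstract, one figure, detailed methods, short discussion, linked to the original article) could be modeled roughly as follows. This is a minimal illustrative sketch, not the platform's implementation; all field and class names are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class MicroPub:
    """One short replication study, tied to the article it replicates."""
    original_pmid: str        # PubMed ID of the original basic science article
    abstract: str
    figure_path: str          # exactly one figure per MicroPub
    methods: str              # detailed, step-by-step methodology
    discussion: str
    replicates_finding: bool  # did the replication confirm the original finding?

    def index_line(self) -> str:
        # How a MicroPub might be summarized when linked to the original
        # article's listing (e.g., on PubMed).
        outcome = "replicates" if self.replicates_finding else "does not replicate"
        return f"MicroPub {outcome} PMID {self.original_pmid}"

# Example entry
pub = MicroPub(
    original_pmid="12345678",
    abstract="We repeated the knockdown experiment in Figure 3...",
    figure_path="figure1.png",
    methods="Step 1: ...",
    discussion="Our data support the original finding.",
    replicates_finding=True,
)
```

A real platform would add editorial-moderation status, author metadata, and Open Access licensing fields, but the core idea is that every MicroPub is indexed against exactly one original article.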


Initially, we will solicit MicroPubs from UCSF researchers.  We will utilize social media and traditional media outlets to popularize MicroPub.  Ultimately, we envision MicroPub being linked to the original article listings on PubMed, as well as related follow-up translational/clinical papers.   


2013-03-12 Addendum to Proposal:

We are discussing our goals with the UCSF Library, the Open Science Framework and the Reproducibility Initiative, and we have expanded our proposal as follows:

Many Small-Scale, Technical Basic Biological Experiments Can Only be Replicated by Academic Labs

Although core facilities and outside vendors can perform many specialized techniques as well as, or in some cases better than, most academic labs, many high-impact publications use novel methodologies that are technically challenging to replicate. These “artisanal” laboratories rely on apprentices (graduate students and postdocs), who learn from experts within the laboratory and then develop their own lines of research. Thus, a paper can often be replicated only by members of the same academic lab or a competing academic lab.  There is currently no outlet for publishing these replication studies, so they are very rarely published, and the scientific community misses out on this valuable information. MicroPub fills this niche.


MicroPub Re-aligns Cultural Incentives and Promotes Transparency

There are numerous reasons for the widespread lack of reproducibility in basic biomedical sciences.  But one reason is preeminent—you are not allowed to be wrong. This is institutional (NIH funding) and cultural. Given the current state of NIH funding, allowing the publication of a replication study that failed – admitting you are wrong – is to risk jeopardizing one's career.  And if many replication studies are never revealed to the wider scientific community, why do them? 


MicroPub aims to shift this culture.  It provides a venue that recognizes that irreproducibility often reflects not scientific fraud or sloppiness, but subtle differences in experimental conditions or analyses, or publication bias.  Currently there is no publication venue for discussing these issues.  Although open access publications that utilize post-publication peer review offer an avenue for online discussion, we feel that commentary alone does not carry the same weight as first-hand experimental data.  MicroPub provides a way to quickly publish first-hand data, and integrates it with open access and post-publication discussion.


Moreover, MicroPub provides a venue for scientists other than the initial authors to publish replication studies.  Right now, the peer-review barrier for doing so, particularly when a replication study contradicts the original study, is enormously high.  MicroPub is specifically devoted to studies of this type, so the barriers will be surmountable and the studies can be published and disseminated.


Of course, replication studies by initial authors will also be welcome.  It is often members of the same lab who replicate (or cannot replicate) each other’s results.  Again, there are enormous disincentives (job security, NIH funding, reputation) against publishing such studies. MicroPub becomes a way for labs to publicly acknowledge and explore reasons for irreproducibility, and ultimately, to establish a reputation for producing long-term, course-correcting, scientifically valid results.  It is our hope that as cultural barriers against admitting error shift, so too will the institutional (NIH funding) barriers.


2013-03-14 Addendum to Proposal:

MicroPub Keeps Track of Methodology Requests and Serves as a Detailed Methods Repository

One of the major barriers to replicating biological experiments is the unavailability of clear, step-by-step methodologies and of help with reagents and equipment. Owing to journal space limitations, authors cannot explain exactly how they did their experiments and instead cite previous publications that describe similar methods. However, methods are usually modified, and these modifications are often the key to obtaining the published results. Because there is currently no accountability, requesting protocols and reagents can be a slow, tedious process.

MicroPub aims to solve this problem by providing a custom contact and tracking service that emails the first and corresponding authors, explaining that a replicator wishes to replicate a particular figure in their publication. MicroPub will then announce the request on the website and track how long it has been since the original authors were notified, and whether they have responded. The resulting protocols are published directly on MicroPub and become available not only to the replicator, but also to the wider community of scientists who may wish to repeat the experiment. As an incentive, both the original authors and the replicator are credited for the methodology section. There are also cultural incentives, as collaborations are often predicated on shared protocols and reagents. This kind of sharing is one of the many benefits of attending scientific conferences, and MicroPub can provide a platform for these productive social interactions online.
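The request-tracking workflow described above (email the authors, announce the request, track elapsed time and response status) could be sketched as follows. This is a hypothetical illustration of the bookkeeping only; the names and the status format are our own assumptions, and the actual service would also handle the email delivery and website announcement.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MethodRequest:
    """Tracks one request for a protocol behind a published figure."""
    pmid: str          # publication containing the figure to be replicated
    figure: str        # which figure the replicator wants to repeat
    notified_on: date  # when the first/corresponding authors were emailed
    responded: bool = False

    def days_waiting(self, today: date) -> int:
        # How long the request has been open since the authors were notified.
        return (today - self.notified_on).days

    def status(self, today: date) -> str:
        # Public status line shown on the MicroPub website.
        if self.responded:
            return f"PMID {self.pmid}, {self.figure}: protocol received"
        return (f"PMID {self.pmid}, {self.figure}: "
                f"waiting {self.days_waiting(today)} days")

# Example: a request opened on March 1, checked on March 15
req = MethodRequest(pmid="12345678", figure="Fig. 2",
                    notified_on=date(2013, 3, 1))
```

Publishing the elapsed time creates the accountability the paragraph above argues is currently missing: a long-open request is visible to the whole community, not just to the replicator.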


3.    Criteria and metrics for success

MicroPub fits squarely within the mission of CTSI.  MicroPub “nurtures communication, encourages collaboration, fosters innovation, and catalyzes the successful conduct of research”.  Most importantly, MicroPub has potential to revolutionize translational and clinical research because it will ensure that researchers only pursue investigations on basic science findings that have been independently validated many times.  The success of the initiative will be measured by the total number of MicroPubs and by the number of MicroPubs used as justification for translational/clinical follow-up research.


4.    Approximate cost and very brief justification ($50K max)

The primary costs will be the development of a standardized publication format and of the website. Approximate initial cost: $35K


5.    Collaborators

We are a team of two, S.Y. Christin Chong, Ph.D., postdoctoral fellow, and Jonathan Russell, M.D.-Ph.D. student.  Both of us have extensive experience in scientific publication, science writing, and social media.  We will hire a developer experienced in creating web-based publishing platforms and social web services. 


Fascinating idea! How would MicroPubs compare to "traditional" research blogging?

It might be useful to find a partner at the UCSF Library, or the California Digital Library, as they also spend a good deal of time thinking about the future of research publishing.

Thank you for your comment! We see the future of MicroPub to be complementary to traditional research blogging, and there are three major factors that we think will differentiate MicroPub and ensure its success:

1. Generation of Primary Data
Traditional blogging consists mostly of meta-research or opinion pieces, but MicroPubs are based on primary research conducted by their authors. "Actions speak louder than words," as the saying goes, and there is more implicit trust in original scientific articles that are supported by independent replication data.

2. Direct Incentives
MicroPub aims to be considered a form of legitimate peer-reviewed publication. Although there have been recent discussions about counting other forms of communication, such as blogging, toward the scientific output of an individual, the reality is that peer-reviewed publications remain the gold standard for evaluating academic and industry job candidates. MicroPub will complement traditional peer-reviewed publications by showcasing solid scientific contributions that might not otherwise have been published.

3. Appropriate Audience
We feel that replication data only makes sense when associated with the original article, because the intended audience is those who wish to know whether the data presented in the original article are indeed reproducible. One of the goals of MicroPub (which differentiates it from traditional research blogging) is to link replication data to the original article on PubMed, the primary way most researchers find scientific articles.

Thank you also for your suggestion on developing additional collaborative sources--we will contact the UCSF Library and the California Digital Library and tell them about our proposal with CTSI!

The Library is very interested in helping promote and support alternative publication models (alternative to the traditional subscription journal).  We are supportive of open access as a model to help disseminate and make use of research results. There are some tools available through the California Digital Library that might be helpful for this publication. We'd love to talk with you more about this proposal.

Dear Anneliese,


Thank you for your comment! We are very excited to collaborate with the Library and I hope to continue our discussion through email.

This is certainly an interesting idea, but you're 5 years too late: I needed you in grad school when I was futilely trying to follow protocols from published papers!


My only concern is that with limited bandwidth and resources, researchers may not be too keen to spend the time and money on replicating experiments. Apart from a small set of scientists who may do one or two experiments "out of the goodness of their hearts," how do you envision a sustainable way to keep researchers motivated to conduct these experiments, knowing they're not being compensated in the "usual" ways (publications, etc.)?

Hi Erin,

Thank you for your comment--we definitely share your frustrations and want to prevent future budding scientists from experiencing the same! 

To address your concern:

As research is conducted from "the shoulders of giants," researchers often have to replicate studies as a prerequisite for exploring their own hypotheses based on those findings. However, these results just languish in lab notebooks because there is no avenue for publishing them. If the experiments are being done anyway, why not have a quick and easy way to publish them?


Specifically, MicroPub can be a great way for graduate students to formally publish their first work. Graduate work is often based on previously published data, and MicroPub can be a stepping stone toward learning how to assemble data and present it to the scientific community.


Furthermore, as more postdocs leave for industry jobs, they may never get around to finishing their big paper! MicroPub can be used to demonstrate scientific productivity and proficiency in specific techniques sought by biotech firms.

In addition, while we recognize that currently this type of publication would not be as "prestigious" as a normal paper, the goal is ultimately to elevate the worth of replication studies, since reproducibility separates science from mere anecdote, and too many papers nowadays are, alas, not reproducible. 

The idealism of this proposal is readily apparent.  I regret to present the realism.  Let's say, to start an area of investigation, the PreDoc tries to replicate a published result, and DOESN'T.  What are the alternative possibilities (see http://www.pubmedcentral.gov/articlerender.fcgi?artid=2048741) ?  One is that the experimenter didn't know ALL of the necessary parameters, as compared with the experimenters that published the original article.  Now, what should the PreDoc do?  Should they publish this, so that doubt is created about the original article?  And if they do, and the original authors point out the mistake in the methodology, will the PreDoc ever work in this field?  THIS PROBLEM is one for which there has never been an adequate solution, and, unfortunately, the publication barrier is NOT the problem.  And "lowering the bar" isn't good, because this means that the error in the methodology might not be evident in the MicroArticle.

The other problem is that NIH will NOT fund replications, so a future grant application that lists the MicroArticle may come under suspicion.

The way that errors are detected is that the CONSEQUENCES of the original finding are NOT realized.  This isn't as good as a replication, but it is better than nothing.  Alas, many findings don't lead on to other work.  So ..... well I better stop now.


Full credit for the idealism!

Dear Dr. Jewett,


Thank you for your thoughtful comments.  With regard to experimental parameters and methodology, you have pointed out some issues with our initial proposal, which we have modified (see above).  We concede that the proposal is idealistic.  We also realize that change has to begin at the grassroots academic level. MicroPub provides a basic infrastructure that can be presented to policy makers in order to restructure the way biomedical research is practiced. We believe that the goal of the NIH is to fund reproducible research, and ultimately policies will have to respond to the fact that implicit trust has been eroded by grant shortages.


Also, although the NIH does not directly fund replications, replication is often conducted in the form of control experiments when pursuing novel hypotheses. However, these replications languish in lab notebooks and internal meetings. Making this additional data available will help scientists and the public evaluate the soundness of the research.


All reforms are idealistic, but like open source publications, some of them stick.  It doesn't hurt to try.
