Papers "Not Meant To Be Factual"

Unlike statements during political debate, scientific papers present facts. The discussion may include some speculation about the ultimate meaning of those facts, but papers generally tell a story of data and meaning.

Unless someone makes a big mistake or outright lies.

Each day seems to bring to light a new scandal and retraction (the blog Retraction Watch has plenty of material), events that seem to be accelerating over the course of my 20 years in academic medicine.

"Retractions in the Medical Literature: Who is responsible for scientific integrity?" by R. Grant Steen, in the current issue of the American Medical Writers Association Journal, caught my eye. The study examined the PubMed database for biomedical research papers retracted from 2000 to 2010. Almost 5 million publications resulted in 788 retractions over that decade. [Including 88 review articles - how does a review get retracted?]
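To put those headline numbers in perspective, a quick back-of-the-envelope calculation (treating "almost 5 million" as roughly 4.9 million, an approximation on my part) shows just how rare retraction is:

```python
# Rough retraction rate implied by Steen's figures:
# 788 retractions out of "almost 5 million" PubMed papers, 2000-2010.
# The 4.9 million total is an approximation of "almost 5 million".
retractions = 788
papers = 4_900_000

rate_per_100k = retractions / papers * 100_000
print(f"{rate_per_100k:.1f} retractions per 100,000 papers")  # ~16.1
```

Of course, a rate this low says nothing about the papers that should have been retracted but never were.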

[Figure: number of retracted articles and months to retraction, by year of retraction, 2000-2010]

Both the number of articles retracted and the time to retraction increased over the past decade as shown in the graph. The continuous line represents the number of subsequently retracted articles that were published in a given calendar year; more retracted articles were originally published in 2006 than in any other year. The data points in columns represent the number of months between publication and retraction, categorized by year of retraction.

In 2000, 4 articles were retracted and the longest time to retraction was 8 months; in 2004, 49 articles were retracted and the longest time to retraction was 50 months; in 2009, 184 articles were retracted and the longest time to retraction was 117 months. A total of 788 retracted articles are represented as data points in this figure (many points overlap).

So which journals suffered the most retractions? The table shows glamor mags take the prize:

Steen focuses on the role co-authors can play in assuring the integrity of the literature. He does address the reasons for the increasing rate of retractions:

One could argue that authors are more dishonest now than in the recent past. This interpretation is consistent with the finding that the number of article retractions has increased significantly in recent years. However, it seems unlikely that a cultural change in the past decade has prompted this increase. Instead, journal editors may have become more aware of misconduct after the publicity about Schön, leading them to set a lower threshold for retraction when an article comes under question. These reasons may also explain why the time to retraction has increased in recent years: Journals are making a more aggressive effort to weed out questionable articles, even if they were published long ago.

One could also argue that the importance of high-impact publications for grant funding and career advancement may make the risk of fabrication or falsification of data more acceptable to researchers.

So ultimately, who is to blame when retraction occurs? Obviously, the authors must bear most of the burden, but Steen argues that the editors of the "repeat offender" journals should also hold responsibility:

Editors are gatekeepers for their journals, and if a journal does not offer a trusted brand, what does it offer? Some scientists have already blamed journal editors for failing to provide a rigorous review for papers before accepting them for publication.

Rigorous peer review may help uncover fraud or fabrication, but, as the editor of Science wrote, "It is asking too much of peer review to expect it to immunize us against clever fraud."

Ultimately, we all must retain a degree of skepticism about anything published in the literature. Even a brilliant series of experiments, performed and published in good faith, can be undone by one negative study with a new technique or tool. Authors, reviewers, and editors all must do their jobs to ensure the integrity of the scientific literature.


12 responses so far

  • Bashir says:

    With regards to the glamor mags, you could make the argument that people who fabricate data are more likely to target them. Sophisticated fabrications are difficult for a reviewer or editor to catch.

    Though if you continue on the issue of "trusted brands," I'd say the glamor mags' bigger problem is publications that are non-replicable or exaggerated to the point of meaninglessness.

  • Dr. O says:

    We have a sarcastic saying in our lab when we read a bullsh*t paper: "If it's published, it MUST be true." It's our little reminder to be cautious of everything we read.

  • drugmonkey says:

    you could make the argument that people who fabricate data are more likely to target them. Sophisticated fabrications are difficult for a reviewer or editor to catch.

    Yes, but not only for the reason you suggest. It is also because GlamourMag culture makes being first to publish something the most important factor, more important than being right or thorough. This leads to shortcuts. The fact that a Science or Nature publication requires many disparate technical domains to be shoehorned together means that any individual part gets less scrutiny. Not harder to catch if there were subdomain reviewers, just harder to catch for a limited subset of reviewers. Perhaps the solution would be to recruit a minimum of three reviewers for each identifiably distinct technique or research tradition. Set a bunny hopper to catch a bunny hopper....

    There are structural reasons. I read many sub-field journals of modest impact in which you would never see a data point without an error bar and a statistical test of the interpretation of an effect. Well, hardly ever. Data points without error bars, though, are routine in GlamourPubs, from what I read. The short format (and the aforementioned shoehorning) means you are left with a lot of data-not-shown and supplementary figures that fall far short of the beauty of the "representative" image in the main article.

    Also, there is the fact that the labs which publish repeatedly in GlamourMags differ from other laboratories in many, many cultural ways: size, degree of PI oversight, and "we're gonna get scooped, we gotta get this done now" manipulation of the lab members.

  • Harriet says:

    Sad to say, my specialty, anesthesiology, has contributed a lot of these articles in the past few years: two highly published (I hesitate to call them) investigators who turned out not to have actually enrolled subjects in the high-impact clinical trials they published. Very painful. And not entirely clear to me how to prevent it without putting enormous burdens on all authors. Although one would hope co-authors could be a screen...

  • This whole analysis should be repeated using the percentage of retracted papers relative to the total number of papers published that year (or in that journal). The answer could be very different.

  • This is a correlation with a common cause. The faked up data in shitteasse journals is never caught, because the science is trivial and boring and no one reads the shitte or cares about it. The faked up data in quality journals gets caught, because the science is important and tons of people read it and care about it.

  • DrugMonkey says:

    Except you have no evidence for that, nor any rationale for anyone to do this, PP. Unlike the clear motivations and personal evidence which back up the data Pascale posted. Motivations and contingencies actually matter in human behavior.

  • WhizBANG! says:

    Some interesting science initially gets published in lower-impact journals because reviewers are initially skeptical of the findings. The initial reports of atrial natriuretic factor (now peptide), for example.

    Just because something doesn't make Science, Nature, PNAS, or the like doesn't mean it is shitty science.

  • Vicki says:

    Comrade PP: Not always that simple or straightforward, alas. Harriet mentioned high-impact anesthesiology trials that turned out never to have existed. If I'm thinking of the same case, those followed on some with suspiciously good data/results, which weren't challenged until after the nonexistent trials were noticed.

    That fraud wasn't found by someone replicating the trials before making medical decisions based on them. It was found by a bureaucrat who noticed that the researcher hadn't filed the appropriate human subjects documents. As it turned out, there were no consents or other documents because there were no subjects. That an on-the-ball administrator noticed this has nothing to do with how high profile the research was.

  • Deena says:

    At last! Something clear I can understand. Thanks!
