So how might journals do things better? As Daniël Lakens, of Eindhoven University of Technology in the Netherlands, and his colleagues have argued, researchers should embrace a “Red Team challenge” approach to peer review. Just as software companies hire hackers to probe their products for potential gaps in their security, a journal might recruit a team of scientific devil’s advocates: subject-matter specialists and methodologists who will look for “holes and errors in ongoing work and … challenge dominant assumptions, with the goal of improving project quality,” Lakens wrote in Nature recently. After all, he added, science is only as robust as the strongest critique it can handle.
So here’s some advice for scientists and journals: If you’re thinking of publishing a paper on a controversial topic, don’t simply rely on your conventional review process—bring in a Red Team to probe for vulnerabilities. The study-hackers should be experts in the given field, with a stronger-than-usual background in statistics and a nose for identifying potential problems before publication, when they can be addressed. They should be, whenever possible—and, researchers, get ready to clutch your pearls—likely to disagree with your paper’s conclusions. Anticipating the responses of your critics is op-ed writing 101.
Until then, scientists can do what Lakens and his colleagues have done: in May, they launched a Red Team challenge for a manuscript by a colleague, Nicholas Coles, a social psychologist at Harvard, with each of five scientists given a $200 stipend to hunt for potential problems with the unpublished article, plus an additional $100 for each “critical problem” they uncovered. The project, which wrapped up this month, was meant to serve as a useful case study of the role Red Teams might play in science.
The five critics came back with 107 potential errors, of which 18 were judged (by a neutral arbiter) to be significant. Of those, Lakens says, five were major problems, including “two previously unknown limitations of a key manipulation, inadequacies in the design and description of the power analysis, an incorrectly reported statistical test in the supplemental materials, and a lack of information about the sample in the manuscript.” Problems, in other words, that would have been deeply troubling had they surfaced after publication.
In light of the comments, Coles has decided to shelve the paper for the moment. “Instead of putting the final touches on my submission cover letter, I am back at the drawing board—fixing the fixable, designing a follow-up study to address the unfixable, and considering what role Red Teams can play in science more broadly,” he wrote recently.
Lakens says he’s planning to employ a Red Team to vet his own meta-analysis (a study of studies) on the topic of gender discrimination. It’s with controversial topics, in particular, that he sees the approach as being most useful for journals and researchers. “You would not insure a trip to the grocery store tomorrow, but you would consider travel insurance for a round-the-world trip,” he said. “It is all about the cost-benefit analysis for us as well. I leave it to others to decide whose research is important enough for a Red Team.”
That’s a critical point. Even before the murder of George Floyd, it was entirely predictable that a study of whether police officers kill Black people more often than white people would garner a lot of scrutiny. Given that resources are always scarce, it makes more sense to deploy the most comprehensive, time-consuming forms of peer review in cases where the findings matter most.
Researchers joke about the hated Reviewer #2 (or #3, depending on your meme): the one who’s always asking for more experiments, recommending vast revisions, and in general holding up your progress, whether toward publishing a paper or getting tenure. Without a doubt, there are jerks in science, and not all critiques are well-intentioned. But if we strip away the nastiness of Reviewer #2s, and the notion that their quibbles amount to spiteful sabotage, they start to look a bit like Red Team leaders. A more vigorous approach to peer review could help clean up the scientific record by ensuring that fewer incorrect conclusions are published. Isn’t that worth the effort?
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints.