NEW YORK • The past several years have been bruising ones for the credibility of the social sciences. A star social psychologist was caught fabricating data, leading to more than 50 retracted papers. A top journal published a study supporting the existence of ESP that was widely criticised. The journal Science pulled a political science paper on the effect of gay canvassers on voters' behaviour because of concerns about faked data.
Now, a years-long effort to reproduce 100 studies published in three leading psychology journals has found that more than half of the findings did not hold up when retested. The analysis was done by research psychologists, many of whom volunteered their time to double-check what they considered important work. Their conclusions, reported on Thursday in the journal Science, have confirmed the worst fears of scientists who have long worried that the field needed a strong correction.
The vetted studies were considered part of the core knowledge by which scientists understand the dynamics of personality, relationships, learning and memory. Therapists and educators rely on such findings to help guide decisions, and the fact that so many of the studies were called into question could sow doubt in the scientific underpinnings of their work.
More than 60 of the studies did not hold up. Among them was one on free will. It found that participants who read a passage arguing that their behaviour is predetermined were more likely than those who had not read the passage to cheat on a subsequent test.
Another was on the effect of physical distance on emotional closeness. Volunteers asked to plot two points that were far apart on graph paper later reported weaker emotional attachment to family members, compared with subjects who had graphed points close together.
A third was on mate preference. Attached women were more likely to rate the attractiveness of single men highly when the women were highly fertile, compared with when they were less so.
In the reproduced studies, researchers found weaker effects for all three experiments.
The project began in 2011, when a University of Virginia psychologist decided to find out whether suspect science was a widespread problem. He and his team recruited more than 250 researchers, identified 100 studies published in 2008, and rigorously redid the experiments in close collaboration with the original authors.
The new analysis, known as the Reproducibility Project, found no evidence of fraud or that any original study was definitively false. Rather, it concluded that the evidence for most published findings was not nearly as strong as first claimed.
Dr John Ioannidis, a director of Stanford University's Meta-Research Innovation Centre, who once estimated that half of published results across medicine were inflated or wrong, noted the proportion in psychology was even larger than he had thought.
The report appears at a time when the number of retractions of published papers is rising sharply in a wide variety of disciplines. Scientists have pointed to a hypercompetitive culture across science that favours novel, sexy results and provides little incentive for researchers to replicate the findings of others, or for journals to publish studies that fail to find a splashy result.
NEW YORK TIMES