An international team repeated 100 experiments published in top psychology journals and found that only 36% of the original results could be replicated. The study, published in the journal Science, involved a total of 270 scientists repeating the experiments on five continents. The effort was launched by US psychologists to address growing concerns over the reliability of psychology research.

Brian Nosek, professor of psychology at the University of Virginia and leader of the study, said he would have been much happier if the effects had turned out to be more reproducible. He added that the results left him disappointed and that scientists can surely do a better job.

According to Nosek, the study shows that readers should not treat any single study as the last word. Science, he said, is a way of reducing uncertainty, and no individual study can stand as a definitive result on its own.

Each of the 100 experiments repeated by the scientists was published in a top psychology journal in 2008. These experiments fall broadly into two categories: social psychology experiments and cognitive psychology experiments.

Cognitive psychology concerns the basic operations of the mind; studies in this field mostly examine areas such as memory, attention, and perception. Social psychology, on the other hand, looks primarily at social questions such as identity, self-esteem, prejudice, and how people interact with one another.


The investigation revealed that as many as 75% of the social psychology experiments could not be replicated: the original findings did not reappear when the experiments were repeated by a different group of scientists. Of the cognitive psychology studies, 50% turned out to be non-replicable.


What’s more, even when the original findings of a study could be replicated, the effect size was on average only half as large as that reported in the first study. This may be because scientists omitted data that undermined their hypotheses, or because journals accepted only the strongest claims for publication.

Nosek, however, said that the results of his team's study give scientists an opportunity to understand and fix the problem.

SOURCE: Science Magazine