
March 7, 2007

Getting The Facts Straight In Statistical Psychological Research

New research published in the March issue of Psychological Science suggests that efforts to advocate improved statistical practices in psychological research may be paying off.

Geoff Cumming, Fiona Fidler, and colleagues at La Trobe University in Melbourne, Australia sought to examine whether guidelines set forth in 1999 by the American Psychological Association's Task Force on Statistical Inference (TFSI) had been implemented in psychological research.

The authors analyzed articles published between 1998 and 2006 in 10 leading international psychology journals, focusing on three practices central to the statistical reform debate: null hypothesis significance testing, confidence intervals, and figures with error bars. The results demonstrate that psychologists still rely on traditional null hypothesis significance testing but are also using considerably more graphs with error bars to report their research.

"For more than 50 years, statistical significance testing has been psychologists' main statistical method" Fidler explains, "but there's evidence it's widely misunderstood and leads to dreadful research decisions."

According to the authors, the shift towards using graphs with error bars signals a step forward in data interpretation. "Error bars," says Fidler, "can give a clear impression of a study's precision and lead to better conclusions."
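The article itself includes no worked example, but a minimal sketch can illustrate the kind of reporting the authors favor: computing a 95% confidence interval for a mean and drawing it as an error bar. The sketch below uses Python with SciPy and Matplotlib, and the data values are purely hypothetical.

```python
# Illustrative sketch only (not from the study): a 95% confidence interval
# for a sample mean, reported numerically and as an error bar.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

scores = np.array([12.1, 14.3, 11.8, 13.5, 12.9, 15.0, 13.2, 12.4])  # hypothetical data

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
# Half-width of the 95% CI from the t distribution with n-1 degrees of freedom
half_width = sem * stats.t.ppf(0.975, df=len(scores) - 1)

print(f"mean = {mean:.2f}, 95% CI = [{mean - half_width:.2f}, {mean + half_width:.2f}]")

# A figure with an error bar showing the 95% CI around the mean
plt.errorbar([0], [mean], yerr=[half_width], fmt="o", capsize=5)
plt.xticks([0], ["Condition A"])
plt.ylabel("Score")
plt.title("Mean with 95% confidence interval")
plt.show()
```

The width of the interval conveys the study's precision directly, which is the point Fidler makes about error bars.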

The views of many academic psychologists and psychology journal editors echoed the study's findings. A survey the authors sent to editors indicated that statistical reform was seen as necessary in the field. But despite this apparent readiness to contemplate change, reform was not regarded as a priority, and a few editors noted resistance to change from some article authors.

Cumming and Fidler insist that changes in statistical practices in psychological research are needed if researchers and readers of journal articles are to have a more accurate understanding of experimental results. They strongly recommend that scientific psychology "change its emphasis from the dichotomous decision making of null hypothesis significance testing to estimation of effect sizes."

"To achieve this goal" say the authors, "researchers need further detailed guidance, examples of good practice, and editorial or institutional leadership."

###

Psychological Science, published by the Association for Psychological Science, is ranked among the top 10 general psychology journals for impact by the Institute for Scientific Information.

Contact: Geoff Cumming
Association for Psychological Science
