Wednesday, October 15, 2008

The Misused Impact Factor - Science Editorial

Science 10 October 2008:
Vol. 322, no. 5899, p. 165
DOI: 10.1126/science.1165316

Editorial:
The Misused Impact Factor


by Kai Simons
Research papers from all over the world are published in thousands of science journals every year. The quality of these papers clearly has to be evaluated, not only to determine their accuracy and contribution to fields of research, but also to help make informed decisions about rewarding scientists with funding and appointments to research positions. One measure often used to determine the quality of a paper is the so-called "impact factor" of the journal in which it was published. This citation-based metric is meant to rank scientific journals, but there have been numerous criticisms over the years of its use as a measure of the quality of individual research papers. Still, this misuse persists. Why?
The annual release of newly calculated impact factors has become a big event. Each year, Thomson Reuters extracts the references from more than 9000 journals and calculates each journal's impact factor by taking the number of citations that year to articles the journal published in the previous 2 years and dividing this by the number of articles the journal published during those same 2 years. The top-ranked journals in biology, for example, have impact factors of 35 to 40 citations per article. Publishers and editors celebrate any increase, whereas a decrease can send them into a huddle to figure out ways to boost their ranking.
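In outline, the calculation is a single ratio. Here is a minimal sketch in Python, with invented figures (the journal and its counts below are hypothetical, not Thomson Reuters' data):

# Minimal sketch of the impact-factor ratio described above.
# All figures are invented for illustration; the real computation
# runs over Thomson Reuters' full citation database.

def impact_factor(citations, citable_items):
    """Citations received this year by articles from the previous
    2 years, divided by the number of articles published in those
    2 years."""
    return citations / citable_items

# A hypothetical top-ranked biology journal: 28,000 citations in 2008
# to the 800 articles it published in 2006 and 2007.
print(impact_factor(28000, 800))  # 35.0 citations per article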
This algorithm is not a simple measure of quality, and a major criticism is that the calculation can be manipulated by journals. For example, review articles are more frequently cited than primary research papers, so reviews increase a journal's impact factor. In many journals, the number of reviews has therefore increased dramatically, and in new trendy areas, the number of reviews sometimes approaches that of primary research papers in the field. Many journals now publish commentary-type articles, which are also counted in the numerator. Amazingly, the calculation also includes citations to retracted papers, not to mention articles containing falsified data (not yet retracted) that continue to be cited. The denominator, on the other hand, includes only primary research papers and reviews.
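The effect of this asymmetry is easy to see with a toy example (all numbers invented): citations to commentary pieces are added to the numerator, but the pieces themselves never appear in the denominator, so publishing them can only push the ratio up.

# Toy illustration (invented numbers) of the numerator/denominator
# asymmetry described above.

primary_and_reviews = 200        # citable items: the denominator
citations_to_those = 4000        # citations they receive: the base numerator
citations_to_commentary = 1000   # commentary is cited, but commentary
                                 # pieces are not counted as citable items

print(citations_to_those / primary_and_reviews)  # 20.0 without commentary

# Commentary citations inflate the numerator while the denominator
# stays fixed, lifting the journal's impact factor:
print((citations_to_those + citations_to_commentary) / primary_and_reviews)  # 25.0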
Why does the impact factor matter so much to the scientific community, further inflating its importance? Unfortunately, these numbers are increasingly used to assess individual papers, scientists, and institutions. Thus, governments use bibliometrics based on journal impact factors to rank universities and research institutions, and hiring, promotion, and grant-awarding committees can use a journal's impact factor as a convenient shortcut for rating a paper without reading it. Such practices compel scientists to submit their papers to journals at the top of the impact factor ladder, circulating progressively through journals further down the rungs as they are rejected. This not only wastes time for editors and peer reviewers, but it is also discouraging for scientists at every stage of their careers.
Fortunately, some new practices are being attempted. The Howard Hughes Medical Institute is now revising its evaluation practices: each scientist chooses a subset of his or her publications, and only that subset is carefully evaluated by the review board. More institutions should determine quality in this manner.
At the same time, some publishers are exploring new practices. For instance, PLoS One, one of the journals published by the Public Library of Science, evaluates papers only for technical accuracy and not subjectively for their potential impact on a field. The European Molecular Biology Organization is also rethinking its publication activities, with the goal of providing a means to publish peer-reviewed scientific data without the demotivating practices that scientists often encounter today.
There are no numerical shortcuts for evaluating research quality. What counts is the quality of a scientist's work wherever it is published. That quality is ultimately judged by scientists, raising the issue of the process by which scientists review each other's research. However, unless publishers, scientists, and institutions make serious efforts to change how the impact of each individual scientist's work is determined, the scientific community will be doomed to live by the numerically driven motto, "survival by your impact factors."
====================================
Kai Simons is president of the European Life Scientist Organization and is at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany.
