HEIDELBERG, 12 February 2013 – How do you compare the impact of a researcher in chemistry or physics with a molecular biologist who may be working on similar projects? In an article published today in EMBO reports, two experts support the use of citation indicators based on percentiles, a statistical parameter that allows comparisons with a carefully defined group of reference data. Journal impact factors and the h-index alone do not make the grade.

“We argue that new citation impact indicators are needed and that these indicators should allow the comparison of the observed impact for a given publication set with a reference set of similar publications,” said Lutz Bornmann, a sociologist of science at the Division for Science and Innovation Studies at the Max Planck Society in Munich, Germany. “This is a better way to make meaningful assessments of scientific work.” He added: “Indicators must also take into account that the distribution of citations across papers is often skewed. The use of percentiles described in our paper provides a solution.”

A scientific paper and its citations in other papers reveal how science makes progress and can help to establish credit for new discoveries. Citation indicators are used extensively to evaluate the impact of the research of individual scientists, institutes and university departments, and sometimes even countries. The outcomes of such analyses can have a profound influence on the careers of scientists or, for example, on the future funding of research institutes.

The authors of the study emphasize that journal impact factors and the h-index, two parameters frequently used to evaluate the quality and quantity of science, do not readily allow comparisons between the work of different scientists or between publications in journals from unrelated research fields. The problem is similar to comparing the performance of individuals who score goals in sports as different as football and handball. The number of goals in a football match is often around 2, but in handball it can reach 20 or more. Is someone who scores an average of five goals in handball a better athlete than a football player who scores one goal a game?

“Experts in bibliometrics avoid using journal impact factors and the h-index because these indicators do not provide normalized values. For many years, they have been using reference sets to normalize the number of citations, which is indispensable for fair assessment,” commented Werner Marx, co-author of the paper and head of the Information Retrieval Services at the Max Planck Institute for Solid State Research in Stuttgart, Germany. “The use of this type of advanced analysis is growing. However, we have an opportunity to do more, and this could have a big impact on the way we answer important questions such as how good research really is.”

How good is research really? Measuring the citation impact of publications with percentiles increases correct assessments and fair comparisons
Lutz Bornmann, Werner Marx
Read the paper: doi: 10.1038/embor.2013.9
http://www.nature.com/embor/journal/vaop/ncurrent/full/embor20139a.html
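To illustrate the general idea behind percentile-based indicators, here is a minimal sketch, not taken from the paper itself: a paper's citation count is ranked against a reference set of comparable publications (same field, same publication year), so the same raw count can translate into very different percentiles in different fields, just as five goals mean different things in football and handball. The function name and the citation counts below are hypothetical, for illustration only.

```python
from bisect import bisect_left, bisect_right

def citation_percentile(citations: int, reference_counts: list[int]) -> float:
    """Percentile rank of a paper's citation count within a reference set
    of comparable papers. Ties are split evenly: the mean of the fraction
    strictly below and the fraction at or below, scaled to 0-100."""
    ref = sorted(reference_counts)
    below = bisect_left(ref, citations)          # papers cited less often
    at_or_below = bisect_right(ref, citations)   # papers cited no more often
    return 100.0 * (below + at_or_below) / (2 * len(ref))

# Hypothetical reference sets: citation counts of comparable papers
# in a low-citation field (A) and a high-citation field (B).
field_a = [0, 1, 1, 2, 3, 5, 8, 13, 40, 120]
field_b = [5, 9, 14, 22, 30, 41, 57, 80, 150, 400]

# The same raw count of 10 citations ranks very differently:
print(citation_percentile(10, field_a))  # 70.0 (above most of field A)
print(citation_percentile(10, field_b))  # 20.0 (below most of field B)
```

Because percentiles are computed within each reference set, the skewed citation distributions mentioned by Bornmann no longer distort the comparison: a 70th-percentile paper in one field is directly comparable to a 70th-percentile paper in another.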