From the “Report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS)”, we reproduce the final part of the executive summary.
The validity of statistics such as the impact factor and h‐index is neither well understood nor well studied. The connection of these statistics with research quality is sometimes established on the basis of “experience.” The justification for relying on them is that they are “readily available.” The few studies of these statistics that were done focused narrowly on showing a correlation with some other measure of quality rather than on determining how one can best derive useful information from citation data.
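For readers unfamiliar with the h-index mentioned above: it is defined as the largest h such that a researcher has h papers each cited at least h times. A minimal sketch of that computation (the function name and sample data are illustrative, not from the report):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has >= rank citations
        else:
            break
    return h

# Example: five papers with these citation counts -> h-index 4,
# since four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The example also makes the report's point concrete: a single integer like this discards most of the information in the underlying citation distribution.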
We do not dismiss citation statistics as a tool for assessing the quality of research—citation data and statistics can provide some valuable information. We recognize that assessment must be practical, and for this reason easily‐derived citation statistics almost surely will be part of the process. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused. Research is too important to measure its value with only a single coarse tool.
We hope those involved in assessment will read both the commentary and the details of this report in order to understand not only the limitations of citation statistics but also how better to use them. If we set high standards for the conduct of science, surely we should set equally high standards for assessing its quality.
(Source: R. Adler, J. Ewing, P. Taylor; full text at mathunion.org, reported by roars.it)