Written by Michael Whitton and Nicki Clarkson
Citations, journal impact factors, h-indices, even tweets and Facebook likes – there is no end of quantitative measures that can now be used to try to assess the quality and wider impacts of research. But how robust and reliable are such metrics, and what weight – if any – should we give them in the future management of research systems at the national or institutional level?
Research metrics (also called bibliometrics) are quantitative measures that can be used as a way to assess research quality or performance and are often based on how a publication has been cited/referenced. A basket of metrics can be used responsibly to give an indication of the research performance of an individual, group or institution when put together with expert peer assessment.
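To make the citation-based idea concrete, here is a minimal sketch of one widely used metric mentioned above, the h-index: the largest number h such that a researcher has h publications each cited at least h times. This is an illustrative implementation, not a tool the University uses.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations.

    `citations` is a list of citation counts, one per publication.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still "supports" an h of this size
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have >= 4 citations,
# but not five papers with >= 5, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Even this simple example shows why a single number can mislead: the same h-index can describe very different publication records, which is one reason a basket of metrics plus peer assessment is recommended.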
Why are Research Metrics Important?
Research metrics are used by the University to benchmark our performance against other institutions – based on measures including citations, scholarly output and international collaboration.
Metrics are also used by other organisations to assess us, including the QS and Times Higher Education world university rankings, and by some funders (e.g. Research England for some REF Units of Assessment, and the National Institute for Health Research). Understanding this helps us make informed decisions, which can influence key outcomes.
The San Francisco Declaration on Research Assessment (DORA), developed in 2012, recognises the need to improve the ways in which the outputs of scholarly research are evaluated. DORA consists of 18 recommendations with the following key themes:
- eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
- assess research on its own merits rather than on the basis of the journal in which the research is published;
- capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).
Is the Journal Impact Factor (JIF) bad?
Not necessarily, but it is important to recognise that different metrics are appropriate for different situations. The JIF is not appropriate for assessing the significance of individual articles and should not be used when recruiting or promoting an individual. However, the JIF could be used to help inform an author’s decision to publish in a journal, balanced against its aims and scope, editorial board, other works published, readership and rigour of peer review.
How do we make sure we are comparing apples with apples?
Different disciplines and research areas have different citation patterns – ideally we use metrics that allow or normalise for this (e.g. the Source Normalised Impact per Paper/SNIP; Article Influence Score) or only compare within the same discipline.
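The idea behind normalisation can be illustrated with a deliberately simplified sketch. The real SNIP and Article Influence Score calculations are considerably more involved; this hypothetical example only shows why dividing by a field average makes cross-discipline comparison fairer (`field_average` here is an assumed input, not something these tools expose directly).

```python
def field_normalised_citations(citations, field_average):
    """Citation count relative to the average for the paper's field.

    1.0 means the paper is cited exactly as often as its field average;
    2.0 means twice as often. This is a toy illustration of the
    normalisation principle, not the actual SNIP formula.
    """
    if field_average <= 0:
        raise ValueError("field average must be positive")
    return citations / field_average

# A mathematics paper with 12 citations in a field averaging 4 citations
# per paper outperforms a biomedical paper with 30 citations in a field
# averaging 40, despite having fewer raw citations.
print(field_normalised_citations(12, 4))   # 3.0
print(field_normalised_citations(30, 40))  # 0.75
```

Raw citation counts would rank the biomedical paper higher; the normalised values reverse that ordering, which is exactly the effect discipline-normalised metrics aim for.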
The University subscribes to bibliometric tools and databases including SciVal and Web of Science. SciVal provides normalised metrics that reduce the effect of discipline, volume and date. Library staff have been checking and correcting Scopus profiles for our researchers, which are linked to SciVal, to make any analysis more accurate. We also subscribe to Journal Citation Reports, which provides bibliometrics for journals indexed in Web of Science. To use these resources responsibly, please see the guidance developed by the Library.
Responsible Research Metrics at the University of Southampton
John Holloway, Associate Dean (Research), Faculty of Medicine
I am delighted that the University of Southampton has implemented a new policy on Responsible Research Metrics in line with the principles set out by DORA and the Leiden Manifesto. As a researcher, I know the true value of research is in the discovery of new knowledge and the generation of impact in society. However, in my role as Associate Dean (Research) for the Faculty of Medicine, I also understand the need of funders and institutions to use metrics to evaluate research and to guide funding and strategic investment decisions. It is vital, however, that these decisions are made using metrics that are reliable and valid and are not applied inappropriately. This policy demonstrates the University’s commitment to these principles.
The University of Southampton Responsible Research Metrics policy was approved by Senate in November 2019. Getting this kind of policy right is important, as decisions taken with the support of metrics can affect the future of researchers, faculties and the entire institution.
You can find the policy and associated guidance at http://library.soton.ac.uk/bibliometrics/responsible – this will help you understand the policy, and how to choose appropriate metrics.
Wendy White, Associate Director (Research Engagement)
The Library is pleased to lead the development of this new policy on Responsible Research Metrics, working with colleagues across Faculties and Professional Services. Collaborative initiatives such as this are key to evolving a scholarly environment that both facilitates ground-breaking research and measures research appropriately.