written by Michael Whitton
Most researchers want to publish their research where it will have impact – i.e. where it will be read, and where other researchers will build on and cite (reference) it in their papers. Metrics, rankings and expert judgement all have a role in finding the best venue; the value of each type of information will depend on your discipline and other factors. For example, metrics are likely to be of lower value in the arts and social sciences.
Your own expert judgement and that of your colleagues is important, for example in assessing how well your paper fits the scope and aims of a journal, and how likely it is to be accepted for publication. More experienced colleagues will also have a better understanding of what editors are looking for, and of the benefits of different venues for disseminating your work and building your network. There can also be factors that make a journal or other publication venue valuable that metrics would not pick up. For example, in some areas it can be important to publish where practitioners are likely to read your paper, even if they are unlikely to cite your work.
Metrics supplement expert judgement
Metrics can provide a more objective lens on the situation – supplementing, but not replacing, expert judgement. They can highlight new journals to consider that you may have been unaware of, and trends over time: is a title improving or declining? Tools like Scopus Sources and Journal Citation Reports do allow you to show only (fully) Open Access journals, but these filters will not include some options to make a paper open via a transformative agreement [https://library.soton.ac.uk/openaccess/agreements]. In addition to complying with funder policies, making your paper open can improve how well it is cited.
Metrics like the SJR, SNIP and Article Influence Score offer a more complete picture because they are normalised: they account for the fact that the average number of citations per paper is higher in some disciplines than in others.
Example of normalised metrics
If we search Scopus Sources for journals in both Dermatology and Immunology and Allergy and rank by CiteScore, the first 23 titles would all be from the higher-citing discipline, Immunology and Allergy. Ranking by SNIP raises the best Dermatology title from 24th place to 10th. Using the SNIP would therefore make us more likely to identify good titles from both fields if we were researching something interdisciplinary between them.
What are the metrics assessing?
Note that we are using these metrics only to assess journals in order to find the best venue; we are not using them to assess individual articles or people, which would be poor practice and against the University of Southampton Responsible Research Metrics Policy. See our online training course [https://library.soton.ac.uk/bibliometrics/training] and guidance on how to use metrics responsibly [https://library.soton.ac.uk/bibliometrics/responsible].
You may also come across journal rankings, e.g. for a specific department or discipline (such as the ABS list in Business). These can combine expert judgement with some kind of metric to produce ratings, or a list of the ‘best’ journals. Such lists can omit or underestimate the titles in which interdisciplinary research would be best published, and the elements of a ranking that rely on judgement may introduce biases against certain kinds of research or journal.
There is no truly unbiased information in either expert judgement or metrics. Expert judgement will be biased to a greater or lesser extent – for example, people can have unconscious biases against some areas of research, publishers or models of publishing – and metrics reflect the biases and inequalities in our research communities. By looking across multiple metrics, and by consulting a more diverse pool of experts whose judgement you respect, you will get a more balanced view from which to make more informed decisions.