Quick overview:
1. Never use journal-based metrics to evaluate individual outputs.
2. Never mix and match metrics from different providers.
3. Always use qualitative indicators such as peer review in conjunction with metrics.
4. Be aware of the caveats of each metric.
5. Be aware that citation indices only include journals they have pre-approved; these cover an impressive number of journals but certainly not all of them, and exclusion is not always due to low quality.
When we use metrics, we should:
- Use metrics related to the individual output (article-based metrics, e.g. the field-weighted citation ratio) rather than the venue of publication (journal-based metrics, e.g. the Journal Impact Factor™, SJR or SNIP) or the author (e.g. the h-index).
- Be clear and transparent about the methodology behind the metrics we use. If a source does not give information about the origins of its dataset (Google Scholar, for example), it cannot be considered reliable.
- Be explicit about any criteria or metrics being used and make it clear that the content of the paper is more important than where it has been published.
- Use metrics consistently - don't mix and match the same metric from different providers or products in the same statement.
For example: don't use article metrics from Scopus for one set of researchers and article metrics from Web of Science for another set. The two providers may well draw on different data sources to reach their numbers, so you would be comparing giraffes to penguins: both are animals (metrics), but the reasons each has the neck length (citation count) it does are very different evolutionary (data source) ones, and comparing their necks out of context tells you nothing beyond the fact that one is shorter than the other.
- Compare like with like - an early-career researcher's output profile will not be the same as that of an established professor, so raw citation numbers are not comparable.
For example: the h-index does not compare like-for-like as it favours researchers who have been working in their field for a long time with no career breaks.
Imagine evaluating football players solely on the number of goals they have scored and the number of matches they have played. This assessment, akin to the h-index, judges a player's impact by their scoring record (representing publications) and their matches played (representing citations).
While this approach might give a broad indication of a player's contribution to the team's success, it overlooks crucial aspects of their skills, teamwork, and versatility on the field, as well as their career stage. Just as a player might have a high goal count yet lack defensive skills or teamwork, the h-index might highlight prolific publishing without reflecting the overall influence, diversity, or quality of a researcher's contributions to their field, and it disadvantages early-career researchers, just as a raw goal count disadvantages young players or those who have had to take time out to recover from injury. (See the sketch after this list for how the h-index is calculated.)
- Consider the value and impact of all research outputs, such as datasets, software, exhibitions, etc., rather than focussing solely on research publications, and consider a broad range of impact, such as influencing policy, alongside alternative metrics.
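To make the career-length bias concrete, here is a minimal sketch (in Python, with invented citation counts used purely for illustration) of how an h-index is calculated: it is the largest number h such that h of a researcher's papers have each received at least h citations.

```python
def h_index(citations_per_paper):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts, purely for illustration:
# an established professor with a long, uninterrupted publication record
# versus an early-career researcher with only a handful of papers so far.
professor = [120, 80, 45, 40, 33, 25, 20, 18, 15, 12, 9, 7, 5, 3, 1]
early_career = [60, 22, 8, 4, 2]

print(h_index(professor))     # 10 - many papers have had years to accumulate citations
print(h_index(early_career))  # 4  - can never exceed the number of papers published so far
```

However well the early-career researcher's individual papers are cited, their h-index can never exceed the number of papers they have published, which is why the metric rewards long, uninterrupted careers.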
Which metrics should I use and why?
For an idea of the pros and cons of each metric, please visit the Guide to Metrics page.