Evaluations & Metrics
When evaluating the quality and impact of research, quantitative, so-called bibliometric, indicators are applied. There are different methods of measuring research publications, but they are mainly based on citations of published research articles. The indicators are used to make comparisons, in appointment decisions, or as a basis for allocating funding.
In some disciplines, mainly science, technology and medicine, data from one or two citation databases tend to be used. The more publications that cite a given publication, the higher the score, or ranking, will be for that publication. In the humanities, the publishing traditions are usually different, which affects the possible ways in which research can be measured.
An individual author or journal can be assigned an H-index score. The H-index score is calculated based on relevant data in each separate database, which means that a researcher’s or journal’s H-index score will vary depending on what data are available in the respective database.
A researcher’s H-index is n if he/she has written at least n publications that have been cited at least n times each. For example, a researcher who has published 24 articles, of which 10 have been cited 10 times or more, has an H-index score of 10.
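The H-index calculation described above can be sketched in a few lines of Python. The function and the citation counts below are illustrative only; they reproduce the example of a researcher with 24 articles, 10 of which are cited 10 times or more.

```python
def h_index(citations):
    """Return the largest n such that at least n publications
    have been cited at least n times each."""
    # Rank publications by citation count, highest first.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank  # this publication still satisfies the threshold
        else:
            break  # all later publications have fewer citations
    return h

# Hypothetical data matching the example in the text:
# 24 articles, 10 of which have 10 or more citations.
citations = [50, 40, 30, 25, 20, 15, 12, 11, 10, 10] + [3] * 14
print(h_index(citations))  # → 10
```

Note that running the same function on citation counts drawn from a different database will generally give a different score, which is why a researcher's H-index varies between Web of Science, Scopus, and Google Scholar.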
Clarivate Analytics, the owner of Web of Science, publishes Journal Citation Reports (JCR), which can be used to check a journal’s Impact Factor. The Impact Factor measures the average number of times a journal’s articles have been cited during a certain period. It is calculated exclusively for journals indexed in Web of Science, and a journal’s score is based on the number of citing articles indexed there.
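As a rough illustration, the standard two-year Impact Factor divides the citations a journal receives in a given year (to items from the two preceding years) by the number of citable items it published in those two years. The counts below are entirely hypothetical:

```python
# Hypothetical counts for an imaginary journal, year 2023.
citations_2023_to_2021_items = 180  # citations in 2023 to 2021 articles
citations_2023_to_2022_items = 220  # citations in 2023 to 2022 articles
citable_items_2021 = 95             # citable items published in 2021
citable_items_2022 = 105            # citable items published in 2022

impact_factor = (
    (citations_2023_to_2021_items + citations_2023_to_2022_items)
    / (citable_items_2021 + citable_items_2022)
)
print(round(impact_factor, 2))  # → 2.0
```

In practice only the citations and items indexed in Web of Science count toward the official figure published in JCR.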
Elsevier has developed a couple of indicators, Source Normalized Impact per Paper (SNIP) and SCImago Journal Rank (SJR), for their database Scopus. These can also be used to analyse data in databases other than Scopus. However, for these indicators to be useful, a significant part of a discipline’s publications must be indexed in the same database. You can compare journals in Scopus under Compare Sources.
Google Scholar offers statistics on the number of times an author or journal has been cited. However, the quality of the citation data may vary.
The Norwegian Centre for Research Data (NSD) operates the Norwegian Register for Scientific Journals, Series and Publishers in cooperation with The National Board of Scholarly Publishing (NPU) on behalf of the Norwegian Ministry of Education and Research: "NSD has operational responsibility. NPU has approval authority of journals, series and publishers. The register shows which scientific publications are recognized in the weighted funding model.”
The BFI-lists (authoritative lists – also called the Danish list) “show the publication channels (Publishers, Journals, Book Series and Conference Series) in which publication of research results are awarded points. There are two lists: the BFI list of publishers and the BFI list of series.”
Evaluating publishing at the Joint Faculties of Humanities and Theology
Data registered in Lund University’s research information system LUCRIS is compiled annually for evaluation purposes. On both a local and a national level, efforts are made to develop suitable methods for evaluating publishing in different disciplines. Data from the two large citation databases, Web of Science and Scopus, cannot be utilised for the humanities and theology due to insufficient representation. In 2007/2008 a Lund University evaluation, RQ08, was carried out.
In 2013/2014 a corresponding evaluation, HTRQ14, was carried out at the Joint Faculties of Humanities and Theology.