Researchers, research results, research projects and institutions are continuously subjected to systematic evaluation. Based on peer review, but also on data such as publication counts and citations, and on indicators derived from them such as the h-index and journal metrics, committees constantly make decisions that shape the entire research practice, from hiring and professorial appointments to third-party funding.
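As an aside, the h-index mentioned above is simple to compute from a researcher's citation counts. The sketch below illustrates the definition; the citation counts are invented example data, not drawn from any real researcher.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still has enough citations for its rank
        else:
            break
    return h

# Example: four papers have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that the index is insensitive to the size of the top values: a single highly cited paper barely moves it, which is one reason such indicators are criticized when used in isolation.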

The use and development of these evaluation systems has never been more challenging than it is today. On the one hand, the immediate output of research is becoming ever more diverse: in addition to journal articles, there is now also code, data and more. Furthermore, the mission orientation of research funding that the EU pursues in Horizon Europe makes clear that functions of knowledge transfer must also be assessed. At the same time, the variety of sponsors and funding types is growing – to name just one example, new major private foundations in the US such as the Bill & Melinda Gates Foundation and the Chan Zuckerberg Initiative.

Responsible, transparent research assessment today would extend the concept of FAIR data to the scientometric data derived from it (FAIR metrics). In practice, however, evaluation has over the last thirty years grown increasingly accustomed to relying on proprietary indicators and platforms – against the explicit advice of experts from the scientometric research community (see the Leiden Manifesto and the San Francisco Declaration on Research Assessment).

Customized evaluations and visualizations across diverse data sources, new approaches to (partially) automated research evaluation based on machine learning, as well as data from DLT-based transaction protocols – all of this today often builds on free, verifiable code and FAIR data. Yet it also creates new challenges for responsible research assessment, given the continuous evolution of evaluation systems mentioned above and the dominance of proprietary research indicators.

More about Lambert Heller:


Lambert Heller works at TIB – Leibniz Information Centre for Science and Technology University Library, where he heads the Open Science Lab.


His work focuses on academic libraries and other research infrastructures in a globalized and interconnected world, especially in relation to:

· Attribution and evaluation of research contributions, in particular governance and ethical challenges faced by actors such as Elsevier, Clarivate, ResearchGate and Google Scholar,

· Approaches to the attribution and evaluation of research contributions using community-led governance models, including ORCID, VIVO, and recent decentralized approaches such as SOLID, MaidSafe, and blockchain,

· Open Educational Resources (OER), in particular the use of peer generation approaches such as Wikipedia and booksprints,

· Competence development for specialists from research, libraries and research infrastructure as well as for young researchers.


Please register: