Clarivate Plc (NYSE:CLVT), a global leader in providing trusted information and insights to accelerate the pace of innovation, today released an in-depth look at formal regional research assessment, co-authored by the Institute for Scientific Information at Clarivate, together with esteemed industry partners: Kate Williams, University of Melbourne; Jonathan Grant, Different Angles; Lutz Bornmann, Max Planck Institute; and Martin Szomszor, Electric Data Solutions.
“Research assessment: Origins, evolution, outcomes” examines the origins of research assessment and how it operates in different regions, through the approaches of Australia, Canada, Germany, Hong Kong, New Zealand and the United Kingdom. It also considers the future of research assessment exercises and examines the potential of Artificial Intelligence (AI) to replace traditional peer review.
Despite differences in their approaches to research assessment, in their links to funding incentives and in the timing of otherwise similar systems, all the regions examined improved their comparative research performance, as measured by bibliometric indicators. There is, however, no clear verdict on whether research assessment was a necessary agent of that improvement or merely a facilitating one.
Jonathan Adams, Chief Scientist at the Institute for Scientific Information at Clarivate, explains: “Research assessment has had major effects on institutional structures. It has unquestionably had pervasive effects on researcher behavior: demonstrable in the U.K. and widely reported elsewhere. The most important feature of any assessment system should arguably be the extent to which it attracts and retains the confidence of the researchers.”
The Global Research Report, “Research assessment: Origins, evolution, outcomes,” finds that:
- Australia has a comprehensive research assessment system that seeks to measure both academic impact and wider societal benefit. The Australian methodology distinguishes engagement from impact, in contrast to other impact evaluations worldwide such as the United Kingdom’s REF, but the exercise does not influence direct research funding and may be unconnected to citation-indexed research performance. (Kate Williams, University of Melbourne).
- Canada has a long history and culture of integrating knowledge mobilization and evaluation across the research life cycle, focusing its assessment on “knowledge mobilization” in specific research areas rather than on general research outcomes. (Jonathan Grant, Different Angles).
- Germany has promoted its research standing through ‘Excellence Initiative’ block funding to research organizations, without regular nationwide evaluations. (Lutz Bornmann, Max Planck Institute).
- While Hong Kong’s research assessment system is similar to the U.K. model, it draws on a distinctive conception of scholarship and considers socio-economic benefit as well as excellence.
- The introduction of New Zealand’s Performance-Based Research Fund is associated with a marked improvement in the country’s comparative international research performance.
- The United Kingdom established the first model for regular research assessment, which has had pervasive effects on institutional management and on researcher behavior.
There have always been demands for technical solutions to reduce perceived assessment bureaucracy. The report acknowledges that Artificial Intelligence is having a profound impact on research, but cautions that machine learning solutions to assessment burdens may propagate existing biases: models of assessment outcomes reveal that apparently important predictors can be linked to factors unrelated to research impact.
Martin Szomszor, Founder of Electric Data Solutions, said: “What this debate has made clear is that both the research system and the data we collect about it capture many forms of prejudice relating to gender, ethnicity, nationality, sexuality, age and more. Without proper consideration of these, machine learning solutions will only propagate these existing biases. This is a problem that is already familiar to those who make use of bibliometric indicators and an issue that has been at the forefront of the responsible metrics agenda.”
Jonathan Adams concludes: “Our report demonstrates there are many challenges common to many regions. Research is a very long game, so assessment stability has great merit and, whatever the criticisms, the RAE/REF remains much as it was thirty years ago, with impact case studies added on.”