The Research Excellence Framework (REF): revisiting the RAE. The value of bibliometrics
In January 2008, Research Trends brought you a detailed overview of the UK's Research Assessment Exercise (RAE), then entering its final iteration, along with a look ahead at the RAE's successor, the Research Excellence Framework (REF). To supplement this, Henk Moed described the behavior-changing effect that research evaluation, and by extension any bibliometric indicators used in such evaluation, can have on institutions, while Bahram Bekhradnia provided a cautionary take on emphasizing bibliometrics in the REF [2,3]. In the three years since, the Higher Education Funding Council for England (HEFCE) has carried out consultations and pilot exercises, including one focusing on the use and value of bibliometrics. So, halfway to its expected completion in 2014, how does the REF look?

A firm presumption

Bibliometric indicators were intended to play a large part in the REF; in fact, in the early stages of its development, "the Government [had] a firm presumption that after the 2008 RAE the system for assessing research quality … [would] be mainly metrics-based." The suggested reason: to reduce "some of the burdens imposed on universities". In 2009, HEFCE conducted a pilot exercise of bibliometrics for the REF, which showed "general but cautious support" for using citation data to complement, but not replace, peer review. Ironically, one stated concern was "the cost and burden involved", but unease also extended to the value such information provides [5]. Confronted with these concerns, HEFCE concluded that citation information should inform expert review rather than act as a primary indicator of quality, and further that the use of citation data should be an option available to sub-panels rather than an imposed requirement. As Bahram Bekhradnia stated in his 2009 critique of the REF: "The process now proposed is radically different [from the initial, metrics-based proposals], and will recognizably be a development of the previous Research Assessment Exercises." [6]

The recession of metrics

The widespread economic recession of the past few years has affected governmental policies in practically every area, and the REF has been no exception. With purse strings tightening and financial concerns at the heart of current political debate, people naturally asked whether university research should be more accountable to the economy. And so a new component of the REF was placed alongside research environment and quality: impact. The impact metric covers the economic and social impact of research, and accounts for 25% of the overall score in the REF. The inclusion of this measure rapidly shifted the focus of discussions about the REF onto impact, with some academics speaking out about the incompatibility between such a metric and curiosity-driven research [7]. This shift in attention caused bibliometrics to fade from view, an effect compounded when David Willetts, Minister for Universities and Science, announced a year-long delay to the REF to allow comprehensive discussion of the impact component, and a pilot exercise was conducted to develop a practical method of assessing impact [8,9].

Building the Framework

Over the last few months, chairs have been appointed to the main panels and sub-panels of the REF, and these panels will likely carry more weight than they would have under the original, metrics-based plans. The next steps are the appointment of panel members this year and the publication of more detailed guidance on submissions and assessment criteria [10]; alongside this, HEFCE recently put out a call for tenders for the provision of bibliometric data.
Under its current timetable, the REF will inform funding from 2015 onwards; and with budget cuts across the public sector affecting higher education institutions, there will be pressure on HEFCE to get the new system right first time. One question that deserves renewed attention, after the lurch of focus to impact, is whether bibliometrics should have a greater role in the assessment of research. The Framework has returned to its RAE roots, with a confidence in expert review to the near-exclusion of statistics; but as one report following the bibliometrics pilot says, "[t]he trust in the infallibility of peer review is striking, and feels rather contrived … in light of the numerous studies carried out over the years that note it has limitations as well as great strengths". Bibliometric indicators have their own limitations, and caveats must be applied to conclusions drawn from their use; but where peer review is deemed critical, "it seems incontrovertible that such judgments will be more robust for having considered multiple streams of data and intelligence, both subjective and objective." [11]

Assessing research around the world

• In Australia, the Research Quality Framework (RQF) was replaced by the Excellence in Research for Australia (ERA) initiative. The first full evaluation under this system was conducted in 2010, with another round to follow in 2012.
• Funding for Flemish institutions in Belgium is determined in part by the so-called BOF-key, a funding parameter that has incorporated bibliometric indicators since 2003 [12].
• Set up in 2007, AERES evaluates France's higher education institutes. The agency carries out on-site inspections, and its findings are used to direct funding.
• Italy's Valutazione Triennale della Ricerca (VTR) ran in 2003; six years later, the outcomes of the assessment were used for the first time to allocate funds. The system was based entirely on peer review [13]. Its successor exercise, the Valutazione Quinquennale della Ricerca (VQR), brings in citation analysis alongside peer review as an option for assessment.
• The Performance-Based Research Fund (PBRF) is New Zealand's tertiary education funding model, of which 60% is determined by the Quality Evaluation of research. This is assessed by expert peer review; the next assessment will take place in 2012.

(Source: M. Richardson, Research Trends, March 2011)