Valorizing SSH research: Towards a new approach to evaluate SSH research's value for society

I Galleron, M Ochsner, J Spaapen… - fteval Journal for …, 2017 - repository.fteval.at
4. Evaluation can be linked to funding or serve formative reasons (Coryn, Hattie, Scriven & Hartmann, 2007; Geuna & Martin, 2001; 2003; von Tunzelmann & Mbula, 2003). Obviously, there is also the possibility that the evaluation outcome is not officially linked to funding but is nevertheless used for funding purposes by other institutions, or inside the evaluated institution.

5. Different methods can underlie the evaluation procedure (Coryn et al., 2007; Geuna & Martin, 2003; Hicks, 2010; 2012; von Tunzelmann & Mbula, 2003). This dimension has the following aspects: a) the principal method; b) whether and what kind of data is used; and c) the criteria that are used if peers are involved.

6. Evaluations involve a time dimension. Two aspects are linked to time: a) evaluations can be repeated, so the length of an evaluation cycle (in other words, whether it is consistent and systematic) is a first aspect (Coryn et al., 2007; Hicks, 2010; 2012); and b) evaluations look back at a certain time window, which constitutes a second aspect (Hicks, 2010; 2012).

7. Transparency is an important dimension regarding the dissemination and use of the evaluation (Dahler-Larsen, 2012; Hammarfelt, Nelhans, Eklund & Åström, 2016; Hicks, 2010). This is closely linked to the method applied and to whether there is a link to funding. As the results of evaluations can themselves be seen as indicators of quality, transparency and reflective dissemination are crucial. Evaluations therefore carry an ethical responsibility (see Hicks et al., 2015; Klein, 2008); transparency is also a requirement for the construction of indicators in general (see the OECD Handbook of Composite Indicators, Nardo, Saisana, Saltelli & Tarantola, 2005), as well as in program evaluation (Morris, 2015). However, while Hicks mentions that "most systems emphasise transparency of methods and data" (Hicks, 2010, p. 39), she does not use transparency for the typology.
In our case, we use three aspects of transparency: a) the methods for calculating the final scores, when these are an outcome of the evaluation; b) the methods for linking scores to funding, if funding is linked to evaluation; and c) the publication of the results.

8. Evaluations come at a cost in both time and money. Hicks emphasises the need to include the cost of evaluation in a typology, but states that "cost is rarely discussed" (Hicks, 2010, p. 34). She also observes that assessing "costs and benefits […] is impossible" (Hicks, 2012). Geuna and Martin (2003) likewise raise the question of the cost/benefit ratio of performance-based research assessments. However, they did not investigate whether this was a topic in the countries they studied. Rather, they argue that such systems, in general, will not have a positive cost/benefit ratio in the long run, as the procedures become more and more complex and the returns on investment diminish as more countries apply the same procedures. We included two aspects regarding costs in our typology: a) whether (estimated) costs are made public and b) whether there are efforts to estimate cost/benefit ratios.
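To make the shape of the typology concrete, the dimensions whose aspects are listed explicitly above (dimensions 5 through 8) can be encoded as a simple data structure along which a country's evaluation system could be recorded. This is a minimal illustrative sketch, not part of the Action's actual procedure; all names and labels are invented shorthand for the aspects named in the text.

```python
# Illustrative sketch (not from the paper): dimensions 5-8 of the typology,
# with the aspects each dimension lists in the text, as a data structure.
from dataclasses import dataclass


@dataclass(frozen=True)
class Dimension:
    number: int
    name: str
    aspects: tuple  # shorthand labels for the aspects a), b), c) above


TYPOLOGY = (
    Dimension(5, "method", ("principal method", "data used", "peer criteria")),
    Dimension(6, "time", ("evaluation cycle", "time window")),
    Dimension(7, "transparency",
              ("score calculation", "scores-to-funding link",
               "publication of results")),
    Dimension(8, "cost", ("costs made public", "cost/benefit estimated")),
)

# Classifying a country along the typology then means filling in a value
# for each (dimension, aspect) slot:
profile = {(d.number, aspect): None for d in TYPOLOGY for a_ in () or d.aspects for aspect in (a_,)}
print(len(profile))  # 10 aspect slots across dimensions 5-8
```

Each country surveyed in the COST Action would receive one such profile, which is what makes the cross-country classification described next tractable.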
To create a typology along which the countries can be classified according to their evaluation systems, a Delphi-like approach was adopted (for the use of the Delphi method to create a typology of evaluation systems, see Coryn et al., 2007; for a Delphi-method in the context of SSH research evaluation, see Hug, Ochsner & Daniel, 2014). The procedure consists of five steps. In a first step, a provisional typology was developed by the members of the Steering Committee and selected specialists from the Management Committee of the Action. In a second step, a survey based on this typology was administered to the specialists of the COST Action. The …