How to evaluate organisations?

Many academics review papers and research proposals submitted by their peers. Fewer academics have experience with the evaluation of organisations. This is what I want to reflect on: How do we judge the performance, output, quality and impact of an organisation? Who is best placed to evaluate it? And how do we organise evaluation processes well?

Evaluation is needed so we can be confident about whether targets have been achieved, and whether achieving them contributes to overall aims. Evaluations can also allow for organisational learning and improvements in service delivery. They may be used as a basis for funding decisions.

Recently, I was asked to be an expert evaluator for Teagasc (the Irish Food and Agriculture Agency), the national body providing integrated research, advisory and training services to the agriculture and food industry and rural communities. Every year, they evaluate two of their Regional Advisory Offices. The second invitation to evaluate came from IAMO, one of the Leibniz institutes in Germany specialising in research on Agricultural Development in Transformation Economies.

The evaluation processes were similar, in that evaluators were provided with material ahead of a visit and were asked to respond to evaluation questions and criteria. Both organisations also had a similar evaluation cycle: it happens every 7 years for Leibniz institutes such as IAMO, and is intended to happen every 6 years for each of the Teagasc regions.

Evaluations are costly. The organisations spend time collating data, presenting it in figures, text and tables, and preparing posters and presentations for the day. The expert evaluators spend time reading this material and visiting the organisations. A venue, catering and transportation are also required. The evaluation visit took 12 hours spread over two days for IAMO, and 15 hours over two days for each of the two Teagasc regions. It is therefore important that this investment is worthwhile. During the evaluations I spent some time thinking about issues that might affect the observations and recommendations resulting from an evaluation.

Firstly, I reflected on the background and expertise of the expert evaluators. An interesting difference was the size and composition of the expert group. For Teagasc, the panel consisted of two experts from abroad, a farmer from outside the region, and a Teagasc Business Planning Officer to guide the panel. In contrast, the Leibniz Association had invited a total of 12 panel members. Additional participants included a number of observers (representatives of IAMO’s Scientific Advisory Board, IAMO’s Foundation Board, another Leibniz institute and the Federal Ministry of Agriculture), as well as two staff members from the Leibniz evaluation committee who guided the process. Among the panel of evaluators, the majority had an economics or agricultural economics background. The background and diversity of the experts might affect the evaluation process and outcomes. For example, highly specialised experts might be excellent at evaluating methodological detail but overlook the broader significance of a set of research activities, while specific technical advances might be lost on an expert without in-depth knowledge of that field.

Secondly, I reflected on the degree to which evaluators possessed knowledge of the organisation they were evaluating that would allow them to assess appropriate indicators. Often the experts did not know much about the organisation beyond what they had been able to read beforehand. Although evaluation questions (e.g. How do you assess the research performance, based on publications?) and criteria (e.g. relevance and impact) were provided, every expert had to use their own judgement as to appropriate metrics (e.g. what is an adequate number of publications for a department of a certain size, or what is the optimal number of clients per advisor) and other less quantifiable indicators (e.g. quality of service provision, cooperation or staff management). Personally, it took me a while before I felt I was in a position to pin down reference points and assess what I was being told by the executive, staff and customers. At IAMO, I found it very useful to be provided with the previous set of evaluation recommendations and with information on trends (within the organisation but also in wider society). The group dynamics within the expert panel should also not be underestimated, and the role of the panel’s guide or chair (e.g. in dealing with dominant and quiet individuals) is therefore crucial for gathering balanced views.

In both cases, the recommendations produced by the expert group were written up by the review organisers. This was good as it reduced the effort required from the experts and ensured that the reports complied with standards, but it could also limit the extent to which the experts engage with, and feel responsible for, the final recommendations. When thinking about this final output, I found it important that the report should also contain clear statements of what could not be evaluated (e.g. because of a lack of data or insights). After all, we should occasionally evaluate our evaluations, even if only informally.

My impression is that the benefit of evaluations emerges not only from the final recommendations but also from the questions that external evaluators ask, since these prompt reflection within the organisation during the evaluation visit. These questions reflect not only the background and interests of the experts, but also how (well) the organisation has presented itself to them. The experts’ questions and the ensuing discussions with staff and management can highlight issues that don’t immediately add up and therefore should be attended to in the future. To maximise the value of an evaluation, it is important that such issues are not overlooked when evaluations are reported, and that sufficient time is allowed for interaction between evaluators and those being evaluated.

Disclaimer: The views expressed in this blog post are the views of the author(s), and not an official position of the institute or funder.

