What’s the collective name for a group of evaluators? A measure of evaluators perhaps? Or how about a puzzling of evaluators? A squabbling of evaluators? Hopefully not the latter. I recently met with fellow evaluators working on strategic investments funded by the Big Lottery Fund. The investments vary from the very young (A Better Start) to old (Ageing Better), from specific needs (NEET young people) to multiple needs (homelessness, substance misuse, offending and mental health). However, there is much common ground in relation to evaluation, with those involved seeking to measure the true impact of those investments and find out ‘what works’, for whom and in what circumstances.

Despite the commonality of interests, it’s rare that we evaluation-types get together in this way. Whilst there is a regular gathering of minds for the launch of new findings, it is not that often that we get together when still in the midst of trying to work out how we’ll get those findings! It requires openness amongst people who, on any other day, are competing against each other for funding: sharing our best ideas, and sharing our indecision and uncertainty too.

On this occasion, instead of general descriptions of our work, we chose to focus on two specific issues: setting up a counterfactual group and involving service users in evaluation. These are in some ways markedly different topics; however, I think what was common to both was ‘the sell’ – the need to sell the benefits of evaluation, either to organisations that aren’t benefitting from any funding (to convince them to be part of the control group) or to service users (to engage them in the evaluation).

One of the biggest challenges for any evaluation work is access: access to people and/or data. Often access is negotiated on the understanding that if someone is giving you money, then there is a requirement to know whether that spending has achieved what it intended. Trying to engage organisations that are not benefitting from any new funding presents an extra challenge; however, initial signs for the evaluation I’m working on are positive, with a genuine desire from organisations to improve what we know about ‘what works’ for the sector.

Similarly, engaging those who might be affected, directly or indirectly, by a new intervention and who have ‘lived experience’ of it can add greatly to all aspects of the evaluation, from design and fieldwork through to analysis and interpretation. ‘Selling’ this idea to funders, researchers, service users and other organisations also requires an articulation of the real value and benefits of evaluation.

The collaboration of evaluators working on different programmes is an extremely useful way of sharing ideas about what works and how to sell what’s great about evaluation – and, refreshingly, of sharing all that while still in the midst of trying to work it out yourself, rather than at the formal, polished launch of findings. Perhaps we’re a collaboration of evaluators?

Jon Adamson
Associate Director 0116 229 3300 | jon.adamson@cfe.org.uk