A new era of evaluation?
Evaluation – the act of assessing the successes (or failures) of an intervention and understanding the reasons or processes that led to outcomes – is a common requirement for many organisations and a core part of our consultancy work at Rocket Science.
There are many different evaluation approaches and tools, but fundamentally, evaluation tends to do at least one of the following:
- understand how well something works and what the impacts are (impact evaluation)
- understand how an intervention works (process evaluation)
- understand the economic value or return on investment of an intervention (value for money evaluation).
It is easy to think of evaluation as apolitical: tools and approaches may develop and change, but the overall purpose stays the same. However, this is not the case. The ways we evaluate, the reasons why, and the expected outcomes change from project to project, organisation to organisation, fund to fund, and government to government.
Why does this matter now?
The Magenta Book – the UK Government’s guidance on evaluation – was first published in 2011. There have been few changes since then, the most notable being the addition of guidance on evaluating AI interventions in late 2024.
This is all about to change, with a stakeholder engagement exercise currently under way to update the Magenta Book, described as “an opportunity to enhance the relevance and effectiveness of Government evaluation guidance for government social researchers, and organisations who undertake independent evaluations for government.”
Additionally, as part of the ‘transition year’ for major UK Government funds, evaluation requirements for large funds such as the UK Shared Prosperity Fund (UKSPF) are being simplified.
The UK Government has also recently reviewed approaches to data sharing within the public sector, aiming to make it easier for organisations to share data and to “harness the power of data for economic growth, support modern digital government, and improve people’s lives”.
Rocket Science’s preferred approach to evaluation
Rocket Science is a business with a social purpose, working to tackle inequalities in pursuit of a healthier, fairer, more inclusive society for all. Because of this commitment to tackling inequalities, we find that evaluation is done best when it is an ongoing, active part of project and programme delivery. For example, where we work with organisations as ‘action learning partners’ – supporting them to develop ideas from inception by co-creating theories of change and evaluation frameworks – evaluation can shape the development of projects and help organisations reach their intended outcomes in ways that are more efficient and effective.
Where evaluation is seen as a tick-box exercise, organisations can instead feel it is a burden, something that takes away from the core day-to-day delivery of projects. This is often the case with statutory evaluations of government funds, where extensive monitoring requirements can feel unnecessarily complex and detailed – UKSPF is probably the one we hear most complaints about! It can also be the case where evaluation is done retrospectively and the chance to iterate on the delivery of interventions to address inequalities is lost.
Our views on how to improve
There are four key areas of consideration in the Magenta Book review. Our thoughts about what is important and what could change are outlined below.
| Areas of consideration in the Magenta Book review | Rocket Science’s views on how to improve |
| --- | --- |
| Value for money evaluation | Value for money evaluations must consider social return on investment, taking a wide-ranging approach to understanding potential benefits. Without this, it is easy to prioritise funding towards projects that have immediate and larger-scale economic returns on investment. Often that means funding will be channelled to places where the potential returns are greater, which can replicate rather than address existing inequalities. At Rocket Science we prioritise a social value approach that builds in considerations of equity and supports more detailed reflection on the benefits of interventions not just to the organisation or funder, but to wider society. |
| ‘Test and learn’ evaluation approaches and other associated terms such as prototyping | Test and learn approaches must commit to process evaluation from the outset, understanding it as an embedded element of pilot and prototype approaches. It is particularly important that organisations are supported to understand the acceptability of failing. As the adage goes, ‘you learn more from failure than you do from success’. We find that building in time for regular reflective sessions with delivery teams provides a safe space for learning from both failures and successes. |
| Aligning evaluation and benefits realisation, including the distinctions between the two and how they complement and work together | Benefits realisation is about tracking and ensuring outcomes and impacts (benefits) are achieved. Evaluation is about assessing what worked or didn’t work, and why. At Rocket Science, we already see the two concepts as inherently inter-related. Our commitment to action learning and embedding evaluation from the outset supports this. |
| Research transparency, including analysis replicability, availability of data/code, and use of open repositories | Being transparent about evaluation processes is an important element of what we do. This means being open about how and why things were done, and about the gaps and challenges in data collection and analysis. More important still is the consideration of open repositories and the possibility of building efficiencies into evaluation that free up more time for project learning and delivery rather than data gathering. For example, across our evaluation work with Northumbria Police, we have seen the value of developing multi-agency data sharing agreements, both in creating efficiencies in delivery and in improving the robustness of evaluation. |
If you have additional thoughts or ideas, you can either submit your own response directly here, or send your thoughts to us via this short Form before 10am on Thursday 15th May, and we will compile and submit responses by the deadline.