How to use evaluation to help develop effective public engagement practice and to inform culture change.
It really helps to have numbers to impress our senior leadership team, but without the stories the numbers can feel really meaningless. Our approach has been to gather monitoring data, and use case studies to captivate people and bring the value of the work to life.
Why evaluation matters
The NCCPE is committed to the strategic use of evaluation. Engagement is a complex and challenging process to design and deliver, and embracing evaluation will help you to continually improve your work as well as evidencing if and how it has been valuable to all those involved.
Our top reasons to evaluate include:
Efficiency and efficacy:
Evaluation brings useful intelligence to ensure activities are fit for purpose, and relevant and appropriate to your context, so you don’t waste time and energy on ineffective work.
Evidencing achievements:
Evaluation shows what the work has achieved and for whom.
Insight and learning:
Evaluation ensures that you and your team reflect on your work, improving your long-term approach to engagement and/or culture change.
Types of evaluation approach
The NCCPE has based much of its work on a theory of change approach to planning and evaluation. A theory of change describes how you think change will happen within the context you are working in, and helps you to consider the assumptions you are making. This approach enables you to plan your evaluation by considering which data to collect to explore if and how your current theory of change is valid for the context you are working in.
However, there are lots of other approaches to evaluation – from realist evaluation (that focuses on what works for whom and in what circumstances), to formative evaluation (which you do right at the start of your planning to help inform the approach you take to your project).
Common evaluation challenges
Challenges with evaluation split into two main areas – practical challenges and challenges with evidencing impacts:
Practical challenges are often down to a lack of resources (especially time) and a lack of knowledge about how to evaluate well. This can lead to evaluation being an afterthought, and not being useful. Strategic planning from the beginning can minimise the resources needed and ensure any evaluation that is done is effective. We recommend costing evaluation into your project plans to ensure that you have the resources needed to do this well – many funders will expect to see this as an essential part of effective work, and a common pitfall is not including enough funding for it.
Evidencing impacts provides several specific challenges. Given the complexity of the contexts within which engagement and culture change work take place, it can be difficult to prove attribution (crudely put, that Intervention A led to Outcome B). It can be useful to think of the contribution your work makes, rather than trying to prove that your work directly led to specific impacts.
Long term tracking of individuals involved in engagement work can be difficult, and the resources needed may outweigh the benefits of collecting the data. Determining suitable proxy indicators of impact can provide a helpful way of managing this. A proxy indicator is one you can measure more easily at the point of intervention, which provides confidence that impacts will happen (for example, drawing on existing data sets that link learning at an event to if and how that learning may effect change).
Evidencing impacts in longer term projects (e.g. a culture change initiative) can be slightly easier, as the participants are available over a longer period of time. The EDGE tool is a classic culture change framework that you can use to assess how your work is going. Creating a baseline by surveying staff, and then repeating this survey biennially, is a great way to inform the initial approach to culture change work, to assess if and how it is going, and to identify areas where more targeted interventions are needed.
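To illustrate the baseline-and-repeat approach, the sketch below compares hypothetical staff survey scores between two survey rounds and flags areas that may need more targeted intervention. The category names, scores, and threshold are assumptions made for illustration only; they are not part of the EDGE tool itself.

```python
# Sketch: comparing a baseline staff survey with a repeat survey to flag
# areas that may need targeted intervention. All category names, scores,
# and the threshold are illustrative assumptions, not EDGE tool values.

baseline = {"Mission": 3.4, "Leadership": 2.8, "Support": 2.1, "Recognition": 1.9}
repeat_survey = {"Mission": 3.6, "Leadership": 3.1, "Support": 2.2, "Recognition": 1.8}

THRESHOLD = 0.2  # minimum improvement in mean score to count as "on track"

def flag_areas(before, after, threshold=THRESHOLD):
    """Return areas whose mean score improved by less than `threshold`."""
    return sorted(
        area for area in before
        if after[area] - before[area] < threshold
    )

print(flag_areas(baseline, repeat_survey))
# Flags "Recognition" (score declined) and "Support" (improved only slightly)
```

In practice the interesting work is interpreting why an area has stalled, but a simple comparison like this can make it easy to spot where to look first.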
We found the evaluation process a bit overwhelming at first, but it provided some rich insights into what was working and why. We learned that we don’t have to evaluate everything or everybody to get the insights we needed.
Whether you want to develop the evaluation yourself or work with an expert evaluator, help is available to you. Don’t forget to engage with your public engagement professional if you have one, as they are likely to have evaluation expertise and may be able to inform your approach. They might even have existing resources and tools you can draw on that have been developed within your institutional context.
Deciding whether to commission an evaluation expert will depend on the scale of your project, and the type of evaluation approach you want to take.
An evaluator may take several roles in your project. They might lead the design and execution of the evaluation with you and your team’s support, sometimes called collaborative evaluation; they might work alongside you and your team in a participatory evaluation, where you co-develop the approach; or you might lead the evaluation yourself, with the evaluator providing advice and support as a member of your team. The approach you take will depend on your needs and interests – for example, a participatory approach can help you develop your own skills and confidence in evaluation, but is more time intensive.
Whether you go it alone, or bring in others to help, you will need to be clear about the purpose of your evaluation, the questions you hope evaluation will help answer, and the resource you can contribute to it.
Evaluation is not an optional extra, but an integral part of quality engagement and change making. Understanding enough about evaluation will help you make wise choices now and in the future about the strategic evaluation you need to do – and to decide when it isn’t a priority.