Evaluation

CEDAR evaluates programmes, interventions, service delivery models, and similar projects. We usually work in partnership with services – not only within the NHS but also in local government and third sector organisations – often operating within complex systems.

What is evaluation?

We have found that the label “evaluation” is interpreted in different ways. At CEDAR, evaluation refers to projects in which we explore:

  • how and/or why an intervention achieves its outcomes
  • contextual factors that may influence processes and outcomes
  • who is affected by (or contributes to) the changes taking place
  • unanticipated consequences of the intervention – whether direct or indirect, positive or negative.

Evaluation methodology

CEDAR’s evaluation projects typically employ mixed methods – integrating quantitative and qualitative data collection, analysis, and interpretation. Triangulating data from a range of sources helps to validate findings and provides insight from different perspectives.

We engage with frontline delivery staff and managers as well as service users and other key stakeholders. We examine the results of numerical measurement and statistical analyses alongside themes identified through interviews, focus groups and surveys. Projects sometimes include an economic evaluation or an indication of service sustainability, for example by estimating the social return on investment (SROI).
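
As a minimal sketch of the SROI calculation (the figures below are illustrative assumptions, not results from any CEDAR project), the ratio divides the present value of the social benefits attributed to a service by the value of the investment needed to deliver it:

  \[
    \text{SROI ratio} = \frac{\text{present value of social benefits}}{\text{value of investment}}
  \]
  % Illustrative arithmetic only: benefits valued at £300,000 from an
  % investment of £100,000 give a ratio of 3:1 – that is, £3 of social
  % value generated per £1 invested.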

CEDAR’s expertise

CEDAR’s expertise and experience include the following:

  • iterative development and refinement of logic models to illustrate theory of change and to test assumptions
  • creation of evaluation frameworks and identification of measurement criteria to guide collection of relevant data
  • service evaluation of local quality improvement initiatives
  • formative evaluation and feedback from PDSA cycles (Plan, Do, Study, Act)
  • evaluation of the scale and spread of successful initiatives to other contexts (geographical, organisational, or cultural)
  • identification of barriers and facilitators – factors thought to help or hinder
  • process evaluation of complex interventions – focusing on implementation (such as fidelity of delivery), mechanisms of impact (how the intervention produces change), and context. Process evaluations can be a useful adjunct to randomised controlled trials.
  • impact evaluation – for example using process tracing and contribution analysis to estimate the probability that an effect was caused by a specific factor (or combination of factors); a sketch of this reasoning follows the list
  • application of middle-range theory and validated tools, such as Normalisation Process Theory (NoMAD questionnaire), the RE-AIM framework (Reach, Effectiveness, Adoption, Implementation and Maintenance), and other models of behaviour change.
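
One common formalisation of the attribution reasoning behind process tracing and contribution analysis is Bayesian updating: confidence in a causal hypothesis rises or falls as each piece of evidence is weighed. A minimal sketch of the updating rule, where H is the hypothesis that the intervention caused the observed effect and E is an item of evidence:

  \[
    P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  \]
  % Evidence that is much more likely if H is true than if it is not
  % (a high likelihood ratio) strengthens the case that the effect can
  % be attributed to the factor in question.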