What is the purpose of evaluating care?

Published February 2020

This guide aims to help practitioners measure and evaluate the impact of personalised care programmes, initiatives or new ways of working. It is for anyone who is involved in delivering a personalised care intervention or initiative at a local level, e.g. commissioners, performance or data managers, operational managers, lead professionals or practitioners in health, local government or voluntary and community sector organisations.

N.B. From this point on, personalised care programmes and initiatives will be referred to as interventions.

What is personalised care?

Personalised care means people have choice and control over the way their care is planned and delivered. It is based on ‘what matters’ to them and their individual strengths and needs. Personalised care is one of the five major, practical changes to the NHS that will take place over the next five years, as set out in the recently published Long Term Plan. Working closely with partners, the NHS will roll out personalised care to reach 2.5 million people by 2023/24 and then aim to double that again within a decade.

Universal Personalised Care [1] sets out the evidence base for these changes, including how personalised care could help to reduce health inequalities. In England the gap in life expectancy between the richest and poorest areas is over seven years for women and nine for men. The evidence shows that people’s levels of knowledge, skills and confidence to manage their own health tend to be lower among those with lower incomes and lower levels of education.

The guide helps you work through a series of steps to plan and carry out an evaluation.

Contents

  • Developing interventions in evaluating personalised care
  • Designing the evaluation of personalised care
  • Deciding what to measure in your personalised care evaluation
  • Next steps in evaluating personalised care
  • Conclusion
  • Further reading
  • Downloads

Co-production and evaluating personalised care

Local evaluation leads should strive to work together with people with lived experience in designing, carrying out and analysing the results of evaluation. This is sometimes referred to as co-producing your evaluation.

It is important to include people with lived experience in:

  • Developing your theory of change (see 'developing interventions in evaluating personalised care'), including the outcomes that you are seeking to achieve for people who use services and carers.
  • Developing research tools, such as interview guides and surveys.
  • Conducting interviews.

Developing interventions in evaluating personalised care

Before thinking about how to measure outcomes or evaluate your work, it is important to understand the nature of the intervention you intend to measure. Only by understanding what the intervention involves, and how it will lead to changes for people and services, can you work out what you need to measure and how.

What is a logic model?

We recommend using a logic model to help develop your intervention. A logic model is a diagram that describes your theory of change – how your interventions will bring about the desired outcomes. It usually describes the five points below.

  1. Context: What needs to be in place locally to support a successful intervention.
  2. Inputs: The things you put into the intervention, such as people’s time, money and the infrastructure needed to make it work.
  3. Activities: What you actually do to make the intervention bring about change.
  4. Outputs: The immediate results of the intervention, e.g. number of people receiving a single care plan, number of staff trained.
  5. Outcomes: The short-, medium- and long-term changes the intervention brings about for people.

The value of the logic model is that it helps you to think through the details of how you expect your intervention to work and explore the assumptions that lie beneath this.
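
To make these elements concrete, the sketch below captures a hypothetical logic model for a social prescribing service as a simple Python data structure. All field names and entries are illustrative assumptions, not a prescribed schema.

```python
# A hypothetical logic model captured as a plain data structure.
# All field names and entries are illustrative, not a prescribed schema.
logic_model = {
    "context": ["engaged primary care teams", "active local voluntary sector"],
    "inputs": ["link worker time", "funding", "referral IT system"],
    "activities": ["GP referrals to link workers", "guided conversations"],
    "outputs": ["number of referrals made", "number of people with a care plan"],
    "outcomes": {
        "short_term": ["improved reported wellbeing"],
        "medium_term": ["increased knowledge, skills and confidence"],
        "long_term": ["reduced unplanned use of services"],
    },
}

# Walking the model in order is a quick completeness check: every stage
# should have at least one entry before you start measuring.
for stage in ("context", "inputs", "activities", "outputs", "outcomes"):
    assert logic_model[stage], f"logic model is missing entries for {stage}"
    print(stage, "->", logic_model[stage])
```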

We recommend that you work in partnership with your stakeholders and people who use services and carers to develop the logic model. Co-designing the logic model allows you to explore the different views, values and priorities of each stakeholder and what they would like to gain from both the intervention and the measurement or evaluation of it.

A good logic model emerges from asking questions. Some people prefer to start with the question about outcomes and work backwards to talk about inputs, but we think the most important issue is to cover all the key questions.

Key questions

What is the intervention trying to achieve?
This is an important question; it enables you to explore how the intervention intends to bring about positive changes to people’s lives – or outcomes.

Be as specific as possible about what these outcomes are. You may want to achieve ambitious outcomes for people, but it is important to challenge yourself and think hard about whether this intervention will really bring them about.

You also need to be specific about when you expect the outcomes to be achieved. When can they realistically be delivered?

What are the immediate changes you expect to see?
These are called output measures. They are often things you can count which can tell you if certain changes are happening, e.g. the number of staff who have been trained, the number of people who have been referred into a new service, the number of people receiving a single care plan.

What will need to happen to bring about these outcomes?
This question seeks to pin down what the essential features of your intervention are – or activities. This could be about a new approach to assessments, a new way to organise a team, a new approach to working with people, a new IT system. It is important, again, to challenge yourself and really think hard about what activities are associated with this intervention.

What resources are needed to deliver the intervention?
The resources which go into an intervention are sometimes called inputs. They are the human and other resources, such as new equipment or buildings, which are needed to support an intervention. It’s important to be clear what these are from the outset. If you want to work out the costs of the intervention, and whether it brings about savings, you will need to pin down all the inputs, e.g. staff time.

Unintended consequences

When we design a new model of care or service, we are often testing a hypothesis – if I change X, Y will happen. However, people’s lives are complex and so is the health and care system.

Your intervention may have impacts beyond those you intend, such as knock-on effects in the local health economy. Developing a theory of change and considering the various stages in which your programme is intended to operate will help you identify additional effects. These can then be incorporated into your evaluation.

Table 1: Example of expected outcomes and unintended consequences [2]
Achieves expected outcomes                      | Unintended consequences
More knowledge, skills and confidence           | Identification of unmet needs and more use of services
Person achieves their goal                      | Long-term dependency is created
Fewer crises and less unplanned use of services | Demand on the voluntary, community and social enterprise (VCSE) sector increases

Practice example - Wessex Academic Health Science Network

The health and social care system has identified a number of actions to prevent ill health and to promote healthy choices; education and active support for self-care and self-management; and action to promote mental wellbeing. One of the prioritised projects is a Social Prescribing service that will support local people to stay well, focused on the most vulnerable people in the local population. It is a key component in the delivery of the NHS Long Term Plan and local commissioning strategies.

  • Inputs

    Human resources:

    • project support
    • social prescribers for each locality
    • integrated support from community teams and primary care.

    Non-NHS resources:

    • an engaged third sector
    • contribution of service users and carers to service design.

    Estates:

    • space for locality hubs
    • agreed pathways for signposting and guided conversation activities
    • agreed referral criteria
    • adapted IT systems.

    Specialist support:

    • evaluation
    • IT infrastructure
    • communications
    • staff training and education.

  • Activities

    Create the infrastructure and processes to support healthier lifestyles and living, focused on the needs of distinct populations across the localities. To include:

    • action planning for each locality
    • creating access to third sector resources
    • co-ordinating resources and activities
    • greater access and visibility of patient data at assessment
    • menu of support providing individualised patient care, including access to other services such as IAPT and wellbeing services.

  • Outputs

    • increased use of social prescribing – social prescribers embedded in all locality teams
    • around 8–10% of the population accessing social prescribing services
    • increased numbers of staff trained in social prescribing.

  • Outcomes

    • improved sense of patient wellbeing reported
    • patients realising their desired outcomes
    • patients engaged in self-management
    • confident and skilled workforce in social prescribing
    • reductions in unnecessary GP appointments, A&E attendances, etc.

  • Impacts

    • Positive change in health behaviour, resulting in reduced percentage of people requiring health and social care interventions.
    • Improved sense of wellbeing pre and post intervention in the short, medium and long term.
    • A change in the way people access health and care services.
    • Decreased unplanned long-term health care utilisation, resulting in a return on investment of X.

Designing the evaluation of personalised care

The word evaluation refers to the making of a judgement about why something turned out the way it did. It is a step on from just measuring outcomes and is often used to make decisions about whether to continue a service or scale up a pilot.

Robust evaluation tells us not only whether an intervention worked, but also why and how. This helps us to learn lessons for spreading successful interventions and developing new ones.

In order to know whether you are on the right track to achieve your goals, you will need to decide on a few key questions, and collect evidence to answer them.

Once you know what questions you are seeking to answer, you can then work out the right approach to the evaluation.

Key questions

  • Has the personalised care intervention improved outcomes for people who access care and support?
  • What outcomes have been achieved e.g. improved wellbeing, reduced social isolation, people having greater choice and control over decisions about their care?
  • How is your intervention working for people from different groups in the local population, including people who experience health inequalities?
  • How will you identify people who will benefit from the intervention e.g. through risk stratification, practitioner selection, assessment of frailty?
  • Have you changed the way you deliver services in the ways you expected?
  • What skills and capabilities do staff need to deliver this intervention?
  • Does the system have enough capacity to deliver the interventions?
  • Were people who use services involved effectively in the intervention?
  • Have different professionals worked well together to deliver the intervention?
  • Has the intervention reduced demand for certain services, including more intensive statutory services?

Considerations

  • Counterfactuals

    Showing whether you are making a difference starts with the baseline: the situation at the beginning of the process. You need to know this as it’s the starting point against which you observe changes over time. The counterfactual goes further: it refers to what is likely to have happened if you had not intervened. As this can’t be directly observed, you have to make reasonable assumptions to estimate it.

    Understanding how much of any change was the result of your programme is called attribution. The critical question is: to what extent can you be sure that the impact you are observing relates to your intervention?

    Establishing a credible counterfactual is crucial to ensuring that the impact you are evaluating can be attributed to your intervention, and not to other factors. Ways to establish a counterfactual include:

    • identifying a comparator (control group) with similar characteristics and historical trends but where the intervention was not implemented; or
    • using pre-intervention as the baseline and assuming the historical trends would have continued if you had not introduced the intervention.
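
    As a minimal sketch of the second approach, assuming you hold yearly service-use counts for your cohort (the figures below are invented), you can project the pre-intervention trend forward and treat it as the counterfactual:

    ```python
    import numpy as np

    # Invented yearly GP appointment counts for the cohort.
    pre_years   = np.array([2016, 2017, 2018])      # before the intervention
    pre_counts  = np.array([100.0, 104.0, 109.0])
    post_years  = np.array([2019, 2020])            # after the intervention
    post_counts = np.array([105.0, 103.0])

    # Fit a straight line to the pre-intervention trend...
    slope, intercept = np.polyfit(pre_years, pre_counts, 1)

    # ...and project it forward as the counterfactual: what we assume would
    # have happened had the historical trend simply continued.
    counterfactual = slope * post_years + intercept

    # The estimated impact is observed minus counterfactual.
    print("counterfactual:", counterfactual.round(1))
    print("estimated impact:", (post_counts - counterfactual).round(1))
    ```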

  • Strength of evidence

    There is much debate about what constitutes a robust enough evaluation. Randomised control trials are widely regarded as the ‘gold standard’ for levels of robustness. Before and after evaluations may tell you little about impact but can provide useful insights on how the intervention was implemented.

    The key thing is to agree the evaluation methodology with your local partners and the audience who you need to influence with your results. You will need to be clear about the limitations of the approach you have chosen and consequently in how you are able to present and use the results.

  • Sample size

    One of the potential pitfalls of any evaluation which seeks to establish the impact of an intervention is sample size. If there are not enough participants, it is hard to have confidence in the results. Small sample sizes increase the probability that the impact of the intervention will not be detected.

    For a before and after study to provide at least preliminary evidence, the final sample after attrition (which means after any participants have dropped out or are no longer in the study) must include at least 20 participants. For comparison studies, there will need to be at least 20 in the control group. The Early Intervention Foundation – a national What Works Centre – recommends the use of larger samples, as a sample size of 20 will only detect very large effects.
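
    As a rough illustration of why a sample of 20 only detects very large effects, the sketch below runs a conventional power calculation using the statsmodels library, with the usual alpha of 0.05 and 80% power; the effect sizes are standard illustrative benchmarks:

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Sample size per group for a two-sample t-test at alpha=0.05 and 80% power.
    # Effect sizes follow Cohen's rough benchmarks: 0.2 small, 0.5 medium, 0.8 large.
    power = TTestIndPower()
    for effect_size in (0.2, 0.5, 0.8):
        n = power.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
        print(f"effect size {effect_size}: ~{n:.0f} participants per group")

    # Conversely, with only 20 per group, 80% power is reached only for an
    # effect size of about 0.9, i.e. a very large effect.
    d = power.solve_power(nobs1=20, alpha=0.05, power=0.8)
    print(f"smallest detectable effect with n=20 per group: ~{d:.2f}")
    ```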

  • Process evaluation

    The most useful evaluations seek to explain both whether the intervention worked and why it worked (or not). This is important to improve the intervention and to understand anything that may have hindered it. This form of evaluation is usually called a process evaluation and seeks to understand how an intervention is delivered and why some aspects of it work well and why others don’t.

    However, process evaluation is not a substitute for measuring outcomes. Most evaluations combine elements of impact and process evaluations.

  • Impact evaluation

    Most evaluations seek to find out if an intervention has had an effect on some observable outcomes, such as improved wellbeing, reduced social isolation or improved health. Sometimes called an impact or summative evaluation, this type of evaluation can be seen as a ‘summing up’ of the overall effect of the intervention. It might show whether the intervention worked, whether it met its objectives and what improvements, if any, have been made. To demonstrate impact on people’s use of the system, we generally use one of two comparison methodologies: before and after, or matched control.

Selecting the evaluation approach

There are many approaches to evaluation, sometimes referred to as evaluation design. In thinking about which approach to use, it is helpful to think about the following questions:

  • What is the question you need to answer, and who for?
  • What resources do you have locally to deliver the evaluation? Think about who will co-ordinate the different activities, who will develop data collection tools, who will carry out interviews or administer data collection, and who will analyse the results. Do people locally have the right expertise and capacity? If they don’t, what is missing?
  • What are the timescales for the evaluation? When do you need results by?
  • Are the time and cost of the evaluation justified by the scale of the benefits that you are expecting to show?

Some of the main evaluation methodologies are described below. It is important to note that they are all useful in different circumstances and for different audiences. Using several methods – known as a mixed methods approach – will add credibility and boost your confidence in your findings.

N.B. The methodologies below are listed in order of robustness, with the randomised control trial being the most robust.

Options for evaluation design

  • Randomised control trial

    An RCT is a type of scientific experiment which aims to reduce bias when testing a new treatment. The people participating in the trial are randomly allocated to either the group receiving the treatment under investigation or to a group receiving standard treatment as the control.

    Pros:

    • Offers the most robust, reliable findings.

    Cons:

    • Can be expensive to run and poor at taking context into account, e.g. culture and local environment.
    • Potential ethical issues.
    • Difficult to use in complex systems and multifactorial interventions.

  • Difference in difference/matched control comparison

    Difference in difference compares the change over time in the intervention group with the change in a comparison group, adjusting for factors that affect both groups and improving the robustness of the results.

    Matched control is when the population receiving the intervention is compared against an appropriately matched comparator population.

    Pros:

    • Can provide reasonably strong evidence of relationship between interventions and observable outcomes.

    Cons:

    • Can be quite expensive to run, and usually requires support from independent experts in evaluation and statistics.
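
    The core difference-in-difference arithmetic itself is simple, as the sketch below shows with invented mean outcome scores; the expert support is needed for matching, standard errors and significance testing rather than for the calculation itself:

    ```python
    # Invented mean wellbeing scores before and after the intervention period.
    intervention_pre, intervention_post = 22.0, 26.0  # group receiving the intervention
    control_pre, control_post = 21.5, 23.0            # matched comparator group

    # Each group's own change over time...
    intervention_change = intervention_post - intervention_pre  # +4.0
    control_change = control_post - control_pre                 # +1.5

    # ...and the difference between those changes. Subtracting the comparator's
    # change adjusts for factors that affected both groups over the same period.
    did_estimate = intervention_change - control_change
    print(f"difference-in-difference estimate: {did_estimate:+.1f} points")  # +2.5
    ```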

  • Before and after comparison

    Data is collected for a sample of people who take part in an intervention to see what changes have happened.

    Pros:

    • Can help you understand trends over time.
    • Can be cheap to run.

    Cons:

    • Not very good for inferring causation. Affected by regression to the mean.

  • Case study

    An in-depth, usually retrospective, write-up of people’s experience of an intervention.

    Pros:

    • Can provide you with a rich picture of how someone has experienced an intervention.

    Cons:

    • Doesn’t allow you to understand changes over time.
    • Can be very subjective.

  • Interviews, observation and focus groups

    These approaches can provide detailed information about how a programme is working in practice, by giving you insight into the attitudes, opinions and experiences of affected groups.

    Pros:

    • Provides a rich picture of impact on people and practitioners.
    • Generates early learning to support continuous improvement or roll out/scale up.

    Cons:

    • Usually a snapshot in time.

Practice example - Using a control group in a personalised care evaluation

The social prescribing scheme in City and Hackney [3] has been operating since 2014, with funding for the evaluation from the CCG and The Health Foundation. The study includes a matched control group and evaluates the effects of social prescribing on individuals, primary care awareness of relevant community issues/resources, and costs associated with the services. The study followed up with patients 12 weeks post-referral and eight months post-referral.

One control group was randomly selected from neighbouring wards based on age and condition, with the second group sourced from anonymised electronic patient record data sets.

Prior to referral to the social prescribing service, the control group had an average of 8.6 GP appointments a year, with those referred an average of 11.5 appointments a year. Eight months post referral, the control group had an average of 14 appointments a year, with those referred an average of 12 appointments a year.

There were no significant changes in general health, wellbeing, anxiety, depression, social integration or health care resource use over time in either the social prescribing or the control groups.

The study recognises that the impact of the service was limited, notes a number of limitations in the area in question (including a small number of contacts with link workers), and therefore highlights the need for better application of social prescribing in City and Hackney.

Deciding what to measure in your personalised care evaluation

Once you have defined your logic model it is time to start thinking about how you will measure the key elements of it. There are two key areas to think about:

  • Measuring activity and output measures.
  • Measuring outcomes for people, carers, practitioners and the system.

Definitions

  • What is an activity measure?

    An activity measure refers to the number of things that are done and can be counted. Counting activity will enable you to demonstrate spread and scale of your intervention. For example, the number of referrals made would be an activity measure.

    It can help you to uncover implementation issues that might need addressing, such as a lack of referrals from a particular GP practice, or pressures in the system that lead to delays.

    Counting activity doesn’t directly help you understand the impact of the work on people, it just tells you about the scale, location and nature of personalised care activity.

  • What is an output measure?

    An output measure refers to the number of things that are produced and can be counted. Again, counting outputs will enable you to demonstrate spread and scale of your intervention, including take-up. For example, attendance at a service to which an individual has been referred would be an output measure.

    This can be particularly helpful information. It can help you compare the number of referrals (activity) with the number of attendances (outputs). This can show who is taking up your intervention and who is dropping out.

    Counting outputs doesn’t directly help you understand the impact of the work, but together with counting activity it does allow you to assess whether interventions are being taken up.

  • What are outcomes?

    Outcomes refer to the way something turns out, a consequence of the intervention that has been taken up. In personalised care we are interested in outcomes for a range of different people and systems:

    • people who use services
    • carers
    • practitioners who deliver services
    • the health and care system.

    Outcomes are also referred to as impacts and, although some believe impact is a much longer-term effect, in this document the words are used interchangeably.

Measuring your activity, outputs and outcomes

Measuring activity and outputs

Activity and output measures can provide an early indication of how well the implementation is going but predominantly tell us how quickly the intervention is being rolled out and how many people have accessed it.

For personalised care, there are a number of mandatory activity and output reporting requirements from NHS England. These currently are focused on the number of personal health budgets and are submitted by your local Clinical Commissioning Group to NHS Digital on a quarterly basis.

An important part of measuring outputs is to understand who is taking up your intervention, including whether you are reaching groups affected by health inequalities.

It is important to continue to monitor outputs as you implement, and there will be some specific ones that make sense to your work. For example, in Gloucestershire they have been interested in monitoring the number of pre-payment cards used for personal health budgets. Try to keep monitoring of outputs to a minimum: there can be a tendency to monitor a lot of outputs because this kind of data is fairly easy to obtain.

Measuring outcomes

We collect a lot of information on people every time they go to the GP or see a social worker. Almost all of this is focused on how many times people use services or information about their physical health status e.g. blood pressure or weight.

However, information on whether people feel better, happier or have had a good experience isn’t collected routinely at a local level, so it’s important to think about how to measure this before you start your intervention, as you might need frontline practitioners to collect it from the beginning. This means they need to understand why you want them to collect it, because it might not normally be part of their job.

With personalised care we are interested in delivering three key outcomes:

  1. Improving the health and wellbeing of people, families and carers.
  2. Improving the experience of people, carers and practitioners.
  3. Improving use of health and care services.

Choosing measures across the outcomes

There are many tools you can use to measure impact. There are benefits to using validated tools that allow you to compare your data with other areas, but if the tool doesn’t measure what matters to you, then it won’t give you what you need.

There is a balance to be struck between collecting data covering all of the outcomes you think you might achieve and ensuring the collection doesn’t get in the way of the conversation. It can be tempting to require multiple questionnaires or tools so you can understand, for example, loneliness, mental wellbeing and experience. This might not be cost effective and can have an adverse impact on people taking part.

We recommend asking the following questions when deciding on the tool you want to use:

  • Can it work for the whole population, or any specific cohort you are interested in?
  • Is it validated?
  • Is there a licence fee?
  • How long does it take to fill in?
  • Is it in multiple languages?

We have made some suggestions below.

  • Health and wellbeing

    In the UK, the Office for National Statistics has defined wellbeing as follows:

    Wellbeing, put simply, is about ‘how we are doing’ as individuals, communities and as a nation and how sustainable this is for the future.
    We define wellbeing as having 10 broad dimensions which have been shown to matter most to people in the UK as identified through a national debate. The dimensions are: the natural environment, personal well-being, our relationships, health, what we do, where we live, personal finance, the economy, education and skills and governance.

    Examples of measures of wellbeing include:

    • Mental wellbeing
    • Being connected to others (loneliness and isolation)
    • Wellbeing at the end of life
    • Physical health
    • Health related quality of life
    • Knowledge, skills and confidence
    • Making a contribution (employment and/or volunteering)
    • Housing
    • Income (poverty and deprivation)

  • Experience

    The person’s experience encompasses the range of interactions they have with the health and care system. Experience is made up of many things including the personal interaction between the person, their family and the practitioner as well as the process they go through. Experience measures include:

    • People’s experience of care and support
    • Experience of personal budget process
    • Experience of carers
    • Experience of caring for someone at the end of life
    • Experience of practitioners delivering personalised care

  • Use of health and care services

    A lot of information on service use is collected routinely by health and care services. You may also be able to make use of information collected by the voluntary sector. However, it can be difficult to access data held in separate systems, and information sharing arrangements might not be in place.

    It’s important to try to understand the impact of your intervention across health and care services. Often, there is a focus only on the use of acute hospital care, particularly unplanned care. Personalised care is a complex set of interventions and is intended to have a wide range of benefits – such as reducing use of GP time for non-medical needs or preventing admissions to residential care.

    We should remember that activity and outputs – the number of referrals to a service, or the number of times a service is used - are different to cost. Costs will vary from place to place and there are different ways of measuring cost. It is important to agree which cost calculations you are going to use locally. The Personal Social Services Research Unit publishes unit costs for health and social care services which can be used to help you estimate costs.

    Possible measures of use of health and care services include:

    • Long term and intensive health and social care packages
    • Short term social care packages
    • Primary care services
    • Medication
    • Unplanned admissions
    • A&E visits
    • Planned admissions
    • Community health services
    • Mental health services
    • Voluntary sector services

  • Measuring value for money

    You might also want your evaluation to look at value for money. This can be assessed against several criteria:

    • Economy: minimising the cost of resources used or required (inputs) – spending less.
    • Efficiency: the relationship between the output from goods or services and the resources to produce them – spending well.
    • Effectiveness: the relationship between the intended and actual results of public spending (outcomes) – spending wisely.

    The Better Evaluation Organisation has summarised the main methods used to assess value for money, one of which is cost–benefit analysis. This asks if the benefits of your intervention outweigh the cost. To answer this question, you need to gather quantitative data to assess if you have had an impact. The data can be used to express the impact in monetary terms and compare this sum with the cost of the intervention to derive the cost–benefit ratio. HM Treasury’s Green Book – a widely used guide for the appraisal and evaluation of policies, programmes and projects – features a cost-benefit analysis model frequently used in the public sector.
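
    As a toy illustration of the cost–benefit arithmetic, the sketch below uses invented figures throughout; in practice, monetising benefits is the hard part and should follow Green Book guidance and locally agreed unit costs:

    ```python
    # All figures are invented for illustration.
    intervention_cost = 50_000.0     # total cost of delivering the pilot (£)
    avoided_appointments = 1_500     # estimated reduction in GP appointments
    unit_cost = 39.0                 # assumed unit cost per GP appointment (£)

    # Express the impact in monetary terms...
    monetised_benefit = avoided_appointments * unit_cost

    # ...and compare it with the cost: a ratio above 1 means the monetised
    # benefits exceed the cost of the intervention.
    cost_benefit_ratio = monetised_benefit / intervention_cost
    print(f"monetised benefit: £{monetised_benefit:,.0f}")
    print(f"cost-benefit ratio: {cost_benefit_ratio:.2f}")
    ```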

    Other techniques have also been developed specifically for social and environmental value such as social return on investment (SROI).

Practice example – Evaluation of health and wellbeing hubs in South Devon and Torbay

In their evaluation of how the health and wellbeing hubs are impacting on personalised care for older people with complex health conditions, South Devon and Torbay were keen to embed evaluation of outcomes and impacts from the beginning [4]. They collaborated with the University of Plymouth who developed a Researchers in Residence model, which takes a participatory, action-orientated approach to evaluation.

The researchers formed an Evaluation Group that brought together senior managers and the voluntary sector providers from both South Devon and Torbay to discuss how to undertake a robust evaluation of the Wellbeing Coordination service, how to make data collection methods more aligned across Torbay and South Devon and how to share information and learning from the evaluation.

Three key achievements of the group have been:

  1. Ensuring a uniform approach to seeking consent from service users, so that health and social care data could be collated from across the system.
  2. Ensuring an information governance agreement was in place to allow both providers to share data and learning between organisations across the system.
  3. Strengthening partnership working across the voluntary sector. This enabled the Trust to get some good data across the health and social care system with excellent follow up, particularly for South Devon.

The evaluation included use of the Short Warwick-Edinburgh Mental Wellbeing Scale and Patient Activation Measure© (PAM), alongside analysis of service use.

Bringing it all together – an evaluation framework

An evaluation framework is a summary document which sets out how you are going to do your evaluation.

It can help you and your colleagues to focus on the key questions you are trying to answer and keep on track with collecting data.

It will usually include:

  • What: which evaluation questions are you trying to answer?
  • Where: where will the data be collected, and by whom?
  • When: over what time period?
  • How: which measurement tools and methods will be used, e.g. surveys, focus groups?
  • Who: who is the data collected from, and who will gather and analyse it?
  • Example: Evaluation framework from Gloucestershire’s Integration Accelerator Pilot

    Project Name:

    Integration Accelerator Pilot

    Inputs:

    • Joined up assessment process between health and social care
    • (Initial cohort focus – people with serious mental illness who have funded care packages from 2gether)
    • Integrated budget or personal health budget for some people
    • Pre-paid cards used by people
    • Signposting to voluntary sector organisations

    Outputs:

    • Number of personalised plans produced
    • Number of integrated budgets
    • Number of personal health budgets
    • Number of pre-paid cards
    • Number of positive comments from people

    Outcomes:

    Individuals/families/carers:

    • Improved wellbeing
    • Improved experience of integrated assessment process and care including:
      • Health needs are considered earlier in the assessment process
      • Choice and control over their outcomes, including the offer of an integrated or personal health budget where required
      • Increased knowledge, confidence and skills to manage their condition
      • Carers needs are taken into account

    Practitioners/staff:

    • Improved job satisfaction/morale

    What is being evaluated?

    1. Whether there is an improvement in people’s wellbeing as a result of a joined up and personalised assessment and care planning process
    2. What the experience of the joined up and personalised assessment and care planning process is for:
      1. people and carers
      2. practitioners
    3. Demand and cost of services and whether earlier assessment of health needs reduces demand over time.

    How is it being evaluated?

      1. SWEMWBS and (where appropriate) EQ5D questionnaires at the start of the process, and at 3, 6 and 12 months.
      2. Qualitative interviews with people and their carers six months after they take up a budget, and qualitative interviews with practitioners.
      3. Linking health and (if possible) social care data using the pseudonymised NHS number to enable understanding of:
        • cost of the social care package
        • acute, community, mental health and primary care service usage and cost
        • medication prescribing and cost
        • comparison of two years pre-intervention, baseline, and 6 and 12 months post-intervention to understand the shift in services.
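
    A hedged sketch of the kind of pre/post analysis described in point 1, using a paired t-test on invented SWEMWBS scores at baseline and six months; a real analysis would also need to handle drop-out and missing data:

    ```python
    from scipy import stats

    # Invented SWEMWBS scores for the same ten people at baseline and six months.
    baseline   = [18.5, 21.0, 19.3, 22.1, 17.9, 20.4, 23.0, 19.8, 21.7, 18.2]
    six_months = [20.1, 21.5, 21.0, 23.4, 19.2, 20.9, 24.1, 20.5, 22.8, 19.0]

    # Paired t-test: does the mean within-person change differ from zero?
    result = stats.ttest_rel(six_months, baseline)
    mean_change = sum(a - b for a, b in zip(six_months, baseline)) / len(baseline)

    print(f"mean change: {mean_change:+.2f} points")
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
    ```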

Next steps in evaluating personalised care

Governance

You will need to consider how to keep track of your evaluation and who needs to be involved in making decisions. If you don’t already have a suitable group, you may need to put one in place. There might need to be a risk assessment to help you decide what to do if things don’t go to plan.

There are some specific issues that you should consider in planning the evaluation. These include ethical approval, information sharing and consent.

  • Ethical approval

    If you are interviewing or observing patients or the public or looking at confidential documents such as patient records as part of the evaluation, you may need to obtain ethical approval. Your organisation or funder may have a research or governance manager who can guide you through the process. It can take several months so you should start as early as possible.

    You will need to prepare, among other things, a protocol (a summary of the evaluation and how it will be carried out), a plan for the evaluation, information about the evaluation for all participants, consent forms and an explanation of the procedure for obtaining consent, and details of the skills and qualifications of the evaluation team. All information for participants should be written in plain English, and other languages as appropriate.

    Evaluations of service improvements that use routine and anonymised data do not usually require ethics approval. You should seek guidance from your local research ethics committee. The NHS Health Research Authority also provides a useful guidance leaflet, which recognises that decisions about the need for approval are not always clear.

    You must take all reasonable steps to make sure that the respondents are not adversely affected by taking part in the evaluation. You must keep their responses confidential, unless you have their permission to do otherwise, and you must not do anything with their responses that you have not informed them about. You should also consider what will happen at the end of the evaluation – how long will you need to hold the data?

  • Information sharing

    It is possible to share information in a way which enables us to understand the impact the change is making; we just need to think about safe information sharing right at the start. Information sharing law is complex and changes regularly. It is often not definitive and leaves a lot of room for individual interpretation. It’s not surprising that information-sharing leads interpret the law in different ways.

    Get to know your information-sharing leads and how they go about making decisions. Getting together to discuss the latest legal changes or definitions is a good way to break down barriers and for people to feel they aren’t alone in making the decision. Make it your mission for information-sharing leads to be heard, to feel supported and to collaborate in the decisions they take.

    Try to talk about information sharing, not information governance. If we start from the premise that information should be shared to support people to live the lives that matter to them, then solutions come more easily. That doesn’t mean we throw caution to the wind and forget security. However, it helps to stay solutions-focused and to always ask the question: ‘how can we share this information safely?’ Always bring it back to your values and principles of supporting people to live the lives that matter to them.

  • Consent

    Before involving people in your evaluation or accessing their data, you will need to consider what consents are needed. This depends very much on what data you are collecting and how you are using it, so it’s best to seek advice.

    Individual informed consent is the most robust way of getting access to data and involves each person being asked to sign a consent form giving you authority to use their information for your evaluation. Please see the sample consent form from Gloucestershire.

    Getting informed consent is possible in a small pilot (although you will need to account for a percentage of those who will decline, which can be up to 50%). However, it might not always be practical or necessary. For example, analysis of anonymised data on service use and costs might not require individual consent.
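
    A quick arithmetic sketch of what this means for recruitment, assuming (illustratively) a 50% decline rate and some later drop-out:

    ```python
    import math

    required_final_sample = 20  # minimum after attrition (see 'Sample size' above)
    decline_rate = 0.5          # up to half of those approached may decline consent
    attrition_rate = 0.2        # assumed share of consenters who later drop out

    # Work backwards: how many people must be approached so that, after declines
    # and drop-out, at least the required sample remains?
    to_approach = math.ceil(
        required_final_sample / ((1 - decline_rate) * (1 - attrition_rate))
    )
    print(f"approach at least {to_approach} people")  # 50 with these assumptions
    ```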

Reporting – making sense of the evidence

Once you have developed your theory of change, designed your evaluation and gathered the data to support it, it’s time to assemble the evidence. What do the results tell you about your theory of change? Do the results support each other or are there contradictions?

There are no hard and fast rules for drawing the data together. Focus on ensuring that you have answered your evaluation question and presenting a clear and honest narrative about your programme’s impact. It’s really important to make clear any limitations in the evidence, for example if you have only been able to use a less robust design.

Conclusion

We hope this guide helps your thinking on how to go about measuring impact and outcomes for personalised care.

Here are a few final tips:

  • Find out what’s happening locally - there might be other projects, services or pilots already measuring outcomes. Build on local enthusiasm and tools already in use.
  • Develop your logic model - bring together a range of stakeholders including people with lived experience to support collaboration and co-design.
  • Think about how to build in measurement from the beginning, and make it part of normal business.
  • Keep it simple - start small and just focus on measuring one outcome to begin with, such as wellbeing or costs.
  • Know your local audience – choose an evaluation approach that meets your local need and answers your local questions. You don’t have to do something academic or extensive if that doesn’t work for you.
  • Don’t be overwhelmed – it’s better to measure something than nothing and if the tool you’re using doesn’t work, choose something else.
  • Don’t forget activity measures – they help to demonstrate spread, scale and identify challenges.
  • Get to know your information governance colleagues – understand where they are coming from and focus on what’s best for the person as the centre of discussion.
Further reading

    • NHS Long Term Plan
    • Coalition for Collaborative Care
    • Co-production guide (NHS England, 2017)
    • SCIE Co-production training and resources
    • Your guide to using logic models (Midlands and Lancashire Commissioning Support Unit, 2016)
    • What is wellbeing? (What Works Centre for Wellbeing, 2019)
    • The Green Book: appraisal and evaluation in central government (HM Treasury, 2018)
    • Developing an integration scorecard: A model for understanding and measuring progress towards health and social care integration (SCIE, 2017)
    • Finance, Commissioning and Contracting Handbook (NHS England and Improvement, 2016)
    • Measures for Person-Centred Coordinated Care (Plymouth University, 2016) [Accessed 27 August 2019]
    • Shine 2014 final report Social Prescribing: Integrating GP and Community Assets for Health (The Health Foundation, 2014)
    • Using research evidence: A practical guide (J. Breckon, 2019)
    • Evaluation: What to consider (The Health Foundation, 2015)

Appendix

  • Before and after comparison

    The most straightforward methodology is to compare the data for your cohort before and after an intervention. However, this design lacks robustness. It does not show whether any changes observed are really the result of your intervention. The changes could have been the result of something else that happened at the same time. Or the changes might have happened even if there had been no intervention of any kind.

    An important reason why changes tend to occur naturally is called regression to the mean. This is a technical way of saying that things tend to even out over time. For example, if you select a group of people for your intervention that have had a high level of use of health services in the last year, you should expect their use to be lower in the following year.
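
    A small simulation makes the effect visible. Assuming (purely for illustration) that each person's service use is a stable personal rate plus year-to-year noise, selecting the heaviest users in year one guarantees their average falls in year two even though nothing has changed:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_people = 10_000

    # Each person has a stable underlying rate of service use, plus random
    # year-to-year variation; nothing changes between the two years.
    true_rate = rng.normal(10, 2, n_people)
    year1 = true_rate + rng.normal(0, 3, n_people)
    year2 = true_rate + rng.normal(0, 3, n_people)

    # Select 'high users' on the basis of year 1 alone...
    high_users = year1 > np.percentile(year1, 90)

    # ...and their average falls in year 2 with no intervention at all,
    # because part of their year 1 total was just unlucky noise.
    print(f"year 1 mean for selected group: {year1[high_users].mean():.1f}")
    print(f"year 2 mean for selected group: {year2[high_users].mean():.1f}")
    ```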

    Another reason why before and after studies lack robustness is that the period of follow up is often very short – for example when information is only collected at the start of an intervention and immediately afterwards. It’s usually better to try to collect data for a long baseline period before the intervention, and to continue collecting data for an agreed follow-up period after the intervention has ended.

  • Matched control comparison

    A better way to assess whether an intervention has worked is through an experimental or quasi-experimental design. Such designs seek to find out whether something you are implementing works by delivering the new intervention to one group of people and comparing the results with a group – called a comparison group – which is not receiving the new approach or is instead receiving ‘business as usual’.

    The comparison between a group receiving the intervention and one continuing as normal allows us to estimate what would have happened without the intervention.

    Ideally, the people in your comparison group should be as similar as possible to those in your intervention group in terms of their characteristics that might affect performance (e.g., their age, their health status, their income).

    This kind of study, however, can be difficult, expensive and often beyond the reach of many local areas. There are several practical challenges:

    • getting access to data from people who are not getting the intervention
    • ensuring that the two datasets have the same measures present in both, particularly if you are using measures that are not routinely collected; and
    • having access to the skills needed to carry out matching and do the kinds of analysis needed (including difference in difference analysis).
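
    As a minimal sketch of the matching step, the example below greedily pairs each intervention participant with the closest available comparator on a single characteristic (age); the records are invented, and real matching typically uses several characteristics or a propensity score:

    ```python
    # Invented records: (person_id, age) for each pool.
    intervention_group = [("a1", 72), ("a2", 68), ("a3", 81)]
    candidate_pool = [("c1", 70), ("c2", 66), ("c3", 80), ("c4", 75), ("c5", 69)]

    # Greedy nearest-neighbour matching on age, without replacement: each
    # intervention participant takes the closest remaining comparator.
    matches = []
    available = list(candidate_pool)
    for person_id, age in intervention_group:
        best = min(available, key=lambda candidate: abs(candidate[1] - age))
        matches.append((person_id, best[0]))
        available.remove(best)

    print(matches)  # [('a1', 'c1'), ('a2', 'c5'), ('a3', 'c3')]
    ```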

    It can be very helpful to collaborate with academic partners who bring extra skills. However, this will extend the time it takes to report and isn’t always appropriate for small scale pilots. Many sites are beginning to explore longer-term academic partnerships where relationships and methodologies can be built upon over time.

  • More about evaluation

    There are many useful publications which provide learning about evaluation methods.

    The Early Intervention Foundation have produced a guide on the six most common pitfalls affecting evaluation.

    • Pitfall 1: No robust comparison group
    • Pitfall 2: High drop-out rate
    • Pitfall 3: Excluding participants from the analysis
    • Pitfall 4: Using inappropriate measures
    • Pitfall 5: Small sample size
    • Pitfall 6: Lack of long-term follow-up

    Another well-known issue affecting evaluation is the placebo effect: any improvements you observe could simply be due to people getting extra time and attention. A related issue is the Hawthorne effect, where people change their behaviour because they know they are being observed.

Downloads

All SCIE resources are free to download, but to access the following downloads you will need a free MySCIE account.

Available downloads:

  • Evaluating personalised care
  • Download assets including directory of activity and outcome measures
