What is the purpose of evaluating care?
Published February 2020
N.B. From this point on, personalised care programmes and initiatives will be referred to as interventions.

What is personalised care?

Personalised care means people have choice and control over the way their care is planned and delivered. It is based on 'what matters' to them and their individual strengths and needs. Personalised care is one of the five major, practical changes to the NHS that will take place over the next five years, as set out in the recently published Long Term Plan. Working closely with partners, the NHS will roll out personalised care to reach 2.5 million people by 2023/24, and then aims to double that again within a decade. Universal Personalised Care [1] sets out the evidence base for these changes, including how personalised care could help to reduce health inequalities. In England the mortality gap between the richest and poorest areas is over seven years for women and nine years for men. The evidence shows that levels of knowledge, skills and confidence to manage one's own health tend to be lower among people with lower incomes and lower levels of education.

This guide helps you work through a series of steps to plan and carry out an evaluation.
Co-production and evaluating personalised care

Local evaluation leads should strive to work together with people with lived experience in designing, carrying out and analysing the results of evaluation. This is sometimes referred to as co-producing your evaluation. It is important to include people with lived experience in:
Developing interventions in evaluating personalised care

Before thinking about how to measure outcomes or evaluate your work, it is important to understand the nature of the intervention you intend to measure. Only by understanding what the intervention involves, and how it will lead to changes for people and services, can you work out what you need to measure and how.

What is a logic model?

We recommend using a logic model to help develop your intervention. A logic model is a diagram that describes your theory of change – how your interventions will bring about the desired outcomes. It usually describes the five points below.
The value of the logic model is that it helps you to think through the details of how you expect your intervention to work and to explore the assumptions that lie beneath this. We recommend that you work in partnership with your stakeholders, and with people who use services and carers, to develop the logic model. Co-designing the logic model allows you to explore the different views, values and priorities of each stakeholder, and what they would like to gain from both the intervention and the measurement or evaluation of it. A good logic model emerges from asking questions. Some people prefer to start with the question about outcomes and work backwards to inputs, but the most important thing is to cover all the key questions.

Key questions

- What is the intervention trying to achieve? Be as specific as possible about what these outcomes are. You may want to achieve ambitious outcomes for people, but it is important to challenge yourself and think hard about whether this intervention will really bring them about.
- When do you expect the outcomes to be achieved? When can they realistically be delivered?
- What are the immediate changes you expect to see?
- What will need to happen to bring about these outcomes?
- What resources are needed to deliver the intervention?

Unintended consequences

When we design a new model of care or service, we are often testing a hypothesis: if I change X, Y will happen. However, people's lives are complex, and so is the health and care system. Your intervention may have impacts beyond those you intend, such as knock-on effects in the local health economy. Developing a theory of change and considering the various stages in which your programme is intended to operate will help you identify additional effects. These can then be incorporated into your evaluation.

Table 1: Example of expected outcomes and unintended consequences [2]
Practice example – Wessex Academic Health Science Network

The health and social care system has identified a number of actions to prevent ill health and to promote healthy choices: education and active support for self-care and self-management, and action to promote mental wellbeing. One of the prioritised projects, a social prescribing service, will support local people to stay well and is focused on the most vulnerable people in the local population. It is a key component in the delivery of the NHS Long Term Plan and local commissioning strategies.
Designing the evaluation of personalised care

The word evaluation refers to making a judgement about why something turned out the way it did. It is a step on from simply measuring outcomes, and is often used to make decisions about whether to continue a service or scale up a pilot. Robust evaluation tells us not only whether an intervention worked, but also why and how. This helps us to learn lessons for spreading successful interventions and developing new ones. In order to know whether you are on the right track to achieve your goals, you will need to decide on a few key questions, and collect evidence to answer them. Once you know what questions you are seeking to answer, you can then work out the right approach to the evaluation.

Key questions
Considerations
Selecting the evaluation approach

There are many approaches to evaluation, sometimes referred to as evaluation designs. In thinking about which approach to use, it is helpful to consider the following questions:
Some of the main evaluation methodologies are described below. It is important to note that they all have a use in different circumstances and for different audiences. Using several methods – known as a mixed methods approach – will add credibility and boost your confidence in your findings. N.B. The methodologies are listed in order of robustness, with the randomised controlled trial being the most robust.

Options for evaluation design
Practice example – Using a control group in a personalised care evaluation

The social prescribing scheme in City and Hackney [3] has been operating since 2014, with funding for the evaluation from the CCG and The Health Foundation. The study includes a matched control group and evaluates the effects of social prescribing on individuals, on primary care awareness of relevant community issues and resources, and on the costs associated with the services. The study followed up with patients 12 weeks post-referral and eight months post-referral. One control group was randomly selected from neighbouring wards based on age and condition, with the second group sourced from anonymised electronic patient record data sets.

Prior to referral to the social prescribing service, the control group had an average of 8.6 GP appointments a year, while those referred had an average of 11.5 appointments a year. Eight months post-referral, the control group had an average of 14 appointments a year, while those referred had an average of 12 appointments a year. There were no significant changes in general health, wellbeing, anxiety, depression, social integration or health care resource use over time in either the social prescribing or the control groups. The study recognises that the impact of the service was limited, but notes a number of limitations within the area in question, identifies a limited number of contacts with link workers, and therefore highlights the need for better application of social prescribing in City and Hackney.

Deciding what to measure in your personalised care evaluation

Once you have defined your logic model, it is time to start thinking about how you will measure the key elements of it. There are two key areas to think about:
Definitions
Measuring your activity, outputs and outcomes

Measuring activity and outputs

Activity and output measures can provide an early indication of how well the implementation is going, but predominantly tell us how quickly the intervention is being rolled out and how many people have accessed it. For personalised care, there are a number of mandatory activity and output reporting requirements from NHS England. These are currently focused on the number of personal health budgets and are submitted by your local Clinical Commissioning Group to NHS Digital on a quarterly basis. An important part of measuring outputs is to understand who is taking up your intervention, including whether you are reaching groups affected by health inequalities. It is important to continue to monitor outputs as you implement, and there will be some specific ones that make sense for your work. For example, in Gloucester they have been interested in monitoring the number of pre-payment cards used for personal health budgets. Try to keep monitoring of outputs to a minimum: there can be a tendency to monitor a lot of outputs because this kind of data is fairly easy to obtain.

Measuring outcomes

We collect a lot of information on people every time they go to the GP or see a social worker. Almost all of this is focused on how many times people use services, or on their physical health status, e.g. blood pressure or weight. However, information on whether people feel better, happier or have had a good experience isn't collected routinely at a local level, so it's important to think about how to measure this before you start your intervention, as you might need frontline practitioners to collect it from the beginning. This means they need to understand why you want them to collect it, because it might not normally be part of their job. With personalised care we are interested in delivering three key outcomes:
Choosing measures across the outcomes

There are many tools you can use to measure impact. There are benefits to using validated tools that allow you to compare your data with other areas, but if a tool doesn't measure what matters to you, then it won't give you what you need. There is a balance to be struck between collecting data covering all of the outcomes you think you might achieve and ensuring the collection doesn't get in the way of the conversation. It can be tempting to require multiple questionnaires or tools so you can understand, for example, loneliness, mental wellbeing and experience. This might not be cost effective and can have an adverse impact on people taking part. We recommend asking the following questions when deciding on the tool you want to use:
We have made some suggestions below.
Practice example – Evaluation of health and wellbeing hubs in South Devon and Torbay

In their evaluation of how the health and wellbeing hubs are impacting on personalised care for older people with complex health conditions, South Devon and Torbay were keen to embed evaluation of outcomes and impacts from the beginning [4]. They collaborated with the University of Plymouth, which developed a Researchers in Residence model that takes a participatory, action-orientated approach to evaluation. The researchers formed an Evaluation Group that brought together senior managers and voluntary sector providers from both South Devon and Torbay to discuss how to undertake a robust evaluation of the Wellbeing Coordination service, how to make data collection methods more aligned across Torbay and South Devon, and how to share information and learning from the evaluation. Three key achievements of the group have been:
The evaluation included use of the Short Warwick-Edinburgh Mental Wellbeing Scale and the Patient Activation Measure© (PAM), alongside analysis of service use.

Bringing it all together – an evaluation framework

An evaluation framework is a summary document which sets out how you are going to do your evaluation. It can help you and your colleagues to focus on the key questions you are trying to answer and keep on track with collecting data. It will usually include:
Next steps in evaluating personalised care

Governance

You will need to consider how to keep track of your evaluation and who needs to be involved in making decisions. If you don't already have a suitable group, you may need to put one in place. There might need to be a risk assessment to help you decide what to do if things don't go to plan. There are some specific issues that you should consider in planning the evaluation. These include ethical approval, information sharing and consent.
Reporting – making sense of the evidence

Once you have developed your theory of change, designed your evaluation and gathered the data to support it, it's time to assemble the evidence. What do the results tell you about your theory of change? Do the results support each other, or are there contradictions? There are no hard and fast rules for drawing the data together. Focus on ensuring that you have answered your evaluation question and on presenting a clear and honest narrative about your programme's impact. It's really important to make clear any limitations in the evidence, for example if you have only been able to use a less robust design.

Conclusion

We hope this guide helps your thinking on how to go about measuring impact and outcomes for personalised care. Here are a few final tips:
Appendix