Designing for engagement – lessons from digital health and behaviour change

By Catarina Nyberg | 09/03/2016

Digital technologies are revolutionising health care and behaviour change. Thousands of mobile health solutions are available on a market estimated at $14 billion in 2015 (Journal of mHealth, 2016: 3:1), e.g. apps to help you stop smoking, stop drinking or get more active. They are said to be cost-effective and wide-reaching, while still having the potential to be tailored to individual needs. But do we know if they actually work? As it turns out, not really. Very little is known about how effective digital health technologies and interventions actually are.

Human behaviour and interaction are highly complex, and forming any sort of understanding requires many different perspectives and levels of inquiry - societal, intergroup and individual. The Centre for Behaviour Change at University College London is a recent interdisciplinary initiative combining behavioural science, computer science, engineering and human-computer interaction. In 2015 it launched its annual conference on digital health and behaviour change, bringing together academia and organisations from the commercial, government and NGO sectors. What follows are some insights from the 2015 and 2016 conferences on the challenges and successes of digital health interventions.

What are the challenges? Data entry, measurements, interactions

There is a great need for evaluating digital health interventions. Data will tell us how many people have downloaded a certain app (e.g. to stop smoking), but not how many have actually stopped smoking. Previous knowledge is rarely drawn on when developing new digital interventions, and by the time a trial is completed (if one is done at all), the technology is often no longer in use. Losing 80% of users is normal for apps, said David Crane (UCL). A review of interventions showed that a third had no effect or a negative one, according to Jeremy Wyatt (University of Leeds).

Data entry is proving to be a tough challenge for digital systems and apps. How can we measure things in a way that captures meaningful data? Data that individuals enter themselves, e.g. food intake, has proven unreliable, and devices that record data automatically, e.g. a walking stick that counts steps, can end up being used by someone other than the intended participant (a bribed grandson, for instance). This was clearly demonstrated in a short film about how people under health surveillance “game the system” (https://vimeo.com/128873380), shown by Cecily Morrison (Microsoft). Another problem is that people tend not to interact with apps when they don’t feel well, and negative feedback does not encourage continued use. Rewards don’t work and can sometimes have a negative effect on engagement. Katarzyna Stawarz (UCL) pointed out that not much is known about how cues work. Personalisation is difficult to do in a meaningful way, and the use of gamification is often shallow. People with a long-term illness do not want to be constantly reminded of their condition by an app. An important point was made: people don’t normally go out looking for health improvements, so we need to reach them at the right moment - and we need to be mindful that technology is not always the answer.

What are the successes? Building on users’ context and needs, identity and social interaction

A number of successful apps were shared, including Flowy (for panic attacks), Re-mission (for teenagers going through cancer treatment), Smoke-Free (for quitting smoking) and Emotion Sense (for research on measuring behaviour with interactive devices). So what seems to facilitate the success of digital health interventions? First of all, the intervention or technology must not be an optional extra but part of a wider health approach. The technology needs to fit into the user’s social context and build on what people already do and know. Understanding the user’s context, needs and goals was a strong theme among the successful examples. The technology must meet users at the right level, neither too easy nor too difficult. There should always be support; a system should never just launch and then leave people alone. Susan Michie (UCL) pointed out that for digital interventions to be successful, the incentives for patients and staff need to be aligned, routines must not be disrupted and the technology must save time.

Goal setting seemed important, if only to create awareness. Instructions on how to perform an activity (e.g. use of inhalers) have led to better and more effective use. When the change of behaviour is about giving something up, such as smoking, the “loss” needs to be turned into a “gain”, e.g. better health. Crucially, relapse must be taken into account as part of the process, as behaviour change is not linear but highly dynamic. Users need to be able to see the meaning in using an app or engaging in an intervention. Connecting with a person’s self-identity (“I am a non-smoker”) encourages engagement, as does the feeling of being part of a social community and social interaction. Measuring engagement successfully requires a range of different methods, including self-reports, observations and analytics, as shown by Mounia Lalmas (Yahoo!).

Behavioural science for design

“The Behaviour Change Wheel – A Guide to Designing Interventions” is a tool developed by Michie et al. (2014) at UCL for designing and evaluating behaviour change interventions. The accompanying COM-B model takes into account “capability” (knowledge, skills, stamina), “opportunity” (physical, social, political, cultural) and “motivation” (automatic or self-reflective), and how these together result in engagement (or not) in a behaviour. Different theories can be applied together with this model. One theory that has proven popular in the design industry is Self-Determination Theory (Deci & Ryan, 1985; 2008), as demonstrated by David Farrell (Caledonian University) and Dustin DiTommaso (MadPow). This theory describes the conditions needed for motivation to engage in a behaviour. Three crucial conditions are: 1. autonomy - not feeling forced, and having enough information and choices; 2. competence - feeling that something isn’t too difficult (nor too easy), with the potential to progress; 3. relatedness - feeling part of a community, with social interaction and collaboration.

User experience research leading in digital health

The health sciences are traditionally dominated by “hard” science subjects which rely almost exclusively on quantitative data. One of the challenges facing the traditional health sciences when evaluating interventions is the need to perform randomised controlled trials (RCTs) in order to secure funding. However, a conclusion from the 2015 conference was that it is nearly impossible to run RCTs on digital interventions, and a better alternative might be to compare interventions against each other instead.

The success stories at the conferences clearly point to the importance of solid qualitative research that captures users’ context, needs and goals. Lucy Yardley (University of Southampton) described what she called the person-based approach to digital interventions as “the understanding and accommodation of perspectives of the people who will use the intervention, in order to improve uptake, adherence and outcomes”, adding that the “key is understanding context-specific behavioural issues”. An agile, iterative approach to building digital interventions was also shown to be successful, exemplified by the work on StopAdvisor, an app for quitting smoking, presented by Gieta Ellul (Public Health England). However, the qualitative and iterative approaches were underrepresented at the conferences. The role and value of qualitative data were questioned in panel discussions, and there was some reluctance to fully embrace an interdisciplinary approach including both quantitative and qualitative methods. Yet this is exactly what good user experience research does really well, and it is therefore on its way to taking a lead in digital health research and development. The key to designing for engagement and behaviour change is not only building on an interdisciplinary approach - mixing quantitative and qualitative methods and allowing for an iterative process - but also getting the research strategy right and knowing when to use which method to yield the best results.