Five Ways Digital Health Developers Can Save Time and Money
Public Health England recently updated its evaluation guidance for developers of digital products in the UK’s £5 billion IT and digital healthcare market. MedCity CEO Neelam Patel was part of the project advisory board, working alongside guidance co-author Henry Potts, a professor at the UCL Institute of Health Informatics.
Here, in the first of our guest blogs on navigating digital health-tech regulations and evaluation, Professor Potts shares his top five tips on designing for smarter evaluation.
Designing for evaluation: a new approach for developers of digital health products
By Professor Henry Potts, UCL Institute of Health Informatics
There is growing recognition that we need more and better evaluation of digital health products and services, but equally that the time and costs of evaluation can be high. There is a mismatch between the fast-paced, iterative development of digital products and the expectations of evidence-based medicine. The latter’s gold standard, the randomised controlled trial, can be especially difficult to achieve.
However, digital can also make evaluation and research easier. Apps promise the automation of research activities, big data, ease of contacting the user, and ecologically valid assessments. Yet my own experience has been that such promises are often not delivered:
- The data available are limited or difficult to interpret.
- Attempts to collect new data or run studies are laborious.
- Processing data or re-coding software costs time and money.
You can save time and money by designing for evaluation. Here are five suggestions to help developers make their digital health products easier to evaluate. I’ve focused on smartphone apps, but also considered other digital health products, like websites.
1. Collect and store the right data
Think about what data you may want for evaluation purposes and make sure it is recorded and stored appropriately. Be wary of relying on third-party providers to collect, store or process your data: they can easily change what data is available.
You probably want both outcome data (e.g. self-reported behaviour or a test score) and activity data (how the app was used). A theory of action or logic model can help you work out what data you need to collect.
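As a minimal sketch (the field names and example score are illustrative, not from the guidance), each record your app stores could distinguish outcome data from activity data and carry enough context to interpret later:

```python
import json
import time
import uuid

def make_event(user_id, event_type, name, value=None):
    """Build one analytics record.

    event_type is "outcome" (e.g. a self-reported score) or
    "activity" (how the app was used).
    """
    return {
        "event_id": str(uuid.uuid4()),  # unique per record
        "user_id": user_id,             # consistent personal identifier (see below)
        "timestamp": time.time(),       # when it happened, not when it was uploaded
        "type": event_type,
        "name": name,
        "value": value,
    }

# An outcome event (a self-reported score) and an activity event (a screen view)
print(json.dumps(make_event("user-123", "outcome", "mood_score", 7)))
print(json.dumps(make_event("user-123", "activity", "opened_screen", "diary")))
```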
Researchers love data, but you also need to consider possible limitations:
- to data collection (often through energy drain on mobile devices)
- to storage and transmission (from a phone to a server)
- because of data protection rules
Consider what it might mean if you have missing data or data recording no action. Can you tell the difference between a person not using the app, not recording something, or not doing a behaviour?
Do you have a consistent personal identifier? Challenges can arise if users delete and re-download an app, or change phones.
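One common approach, sketched below with illustrative names, is to generate a random identifier on first launch and persist it on the device. Note its limits: the identifier is lost if the user deletes and re-installs the app, and it does not follow them to a new phone, so an account held on your server is more robust:

```python
import os
import uuid

ID_FILE = "user_id.txt"  # in a real app, use app-private storage on the device

def get_user_id():
    """Return a stable pseudonymous identifier, creating one on first launch."""
    if os.path.exists(ID_FILE):
        with open(ID_FILE) as f:
            return f.read().strip()
    new_id = str(uuid.uuid4())
    with open(ID_FILE, "w") as f:
        f.write(new_id)
    return new_id
```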
2. Make it easy to ask users questions
You can collect a lot of passive data through digital products, but there are often times when you need to actively collect data by asking users questions.
Consider how you will present questions to the user. If there are existing pages in the app that ask questions, can the questions be changed easily without going back to a coder? Can you ask questions through an app alert? With websites, can you present a survey when the user leaves the site?
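One way to keep question wording changeable without a new release is to define the questions in a server-side configuration that the app fetches at runtime. A sketch, with an invented endpoint and field names:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint returning the current survey definition, e.g.
# {"version": 3, "questions": [{"id": "q1", "text": "How do you feel today?"}]}
SURVEY_URL = "https://example.com/api/survey-config"

def load_survey():
    """Fetch the current questions so wording can change without re-coding the app."""
    with urlopen(SURVEY_URL) as response:
        return json.load(response)

survey = load_survey()
for question in survey["questions"]:
    print(question["id"], question["text"])  # render in the app's existing question page
```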
Can you (and do you have permission to) contact the user outside of the product, e.g. by sending an email? Do you have users’ email addresses?
3. Be consistent. If you can’t, record changes.
Digital products evolve, often rapidly. This can be a good thing, improving both the product and its data collection. However, if you change a question so it is asked in a different way, this can bias the answers. Try to keep data for evaluation purposes recorded in a consistent way. At times, it is worth sacrificing improvements to achieve comparability.
Sometimes, change is inevitable. But make sure these changes are recorded. It is important that developers make evaluation teams aware of any changes that will impact on an evaluation, e.g. in how data is recorded.
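A cheap way to make such changes visible to evaluators later, assuming an event record like the one sketched earlier, is to stamp every record with the versions in force when it was collected (the field names are illustrative):

```python
APP_VERSION = "2.4.1"     # the app build that collected the record
QUESTION_SET_VERSION = 3  # bump whenever question wording changes

def tag_versions(event):
    """Attach version information so evaluators can split data before/after a change."""
    event["app_version"] = APP_VERSION
    event["question_set_version"] = QUESTION_SET_VERSION
    return event
```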
Where possible, use existing standardised outcome measures that others in the community use.
4. Support randomisation
Random allocation is a central concept in research. It involves allocating participants to different groups, receiving different interventions or different versions of an intervention, at random. This makes sure that the participants in each group are, on average, the same. It removes any bias in who is in which group. Taking the first half of users or every other user can look like a simple solution to the busy coder. Neither of these is actually randomisation. You need to use random numbers.
Randomisation doesn’t only mean dividing users with a simple 1:1 split. You may want an unequal number of participants in the different groups. Or you may want random allocation over a larger number of options. Write any code so it is flexible enough to cope with these different scenarios.
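A flexible allocation helper might look like the sketch below: it uses real random numbers, not “first half” or “every other user”, and copes with any number of groups and unequal ratios:

```python
import random

def allocate(arms, weights=None):
    """Randomly allocate one participant to an arm.

    arms:    the options, e.g. ["control", "intervention"], or more than two.
    weights: optional allocation ratio, e.g. [1, 2] for 1:2 allocation;
             equal weights are used if omitted.
    """
    return random.choices(arms, weights=weights, k=1)[0]

# A simple 1:1 split over two arms
group = allocate(["control", "intervention"])
# A 1:2:2 split over three arms
variant = allocate(["control", "version_a", "version_b"], weights=[1, 2, 2])
```

In practice you would record the allocation against the user’s identifier the first time it is made and reuse it thereafter, rather than re-randomising on every launch.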
Randomisation can happen at many different levels. It might happen when a user first downloads and starts to use an app. There are also micro-randomisation trials, where, for example, you can randomise whether a user receives an alert at a particular time, or what the alert looks like. Think about how you can build randomisation options into your code.
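The same idea works at individual decision points. As a sketch, a micro-randomised alert decision might look like this:

```python
import random

def decide_alert():
    """Micro-randomise one decision point: whether to send an alert, and its wording."""
    send = random.random() < 0.5  # randomise whether an alert goes out at this time
    wording = random.choice(["Short reminder", "Encouraging message"]) if send else None
    return send, wording

# Log every decision, including "no alert", so the randomisation can be analysed later.
send, wording = decide_alert()
```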
5. Get informed consent
Evaluations should be conducted in an ethical and legal way. You will need to consider your local regulations and governance.
This blog encourages you to collect more data, but this should still be proportionate. GDPR requires that data be “collected for specified, explicit and legitimate purposes”.
If an evaluation affects the user’s experience of health care, which is always the case if you are using randomisation, there should be informed consent. Consider how and when you will inform users about a trial and what mechanisms you will use to get consent.
Think about when it is practical to get users’ consent upfront, before any specific evaluation exercise – for example, consent to be randomised or to be contacted for a survey.
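Putting consent before randomisation might look like the sketch below; the flag name and consent flow are stand-ins for whatever mechanism you use:

```python
import random

def enrol(user):
    """Randomise a user into a trial only if they have given informed consent.

    'user' is a dict with a 'consented_to_trial' flag set by your consent flow
    (the field name is illustrative).
    """
    if not user.get("consented_to_trial"):
        return None  # no consent: the user gets the standard app, outside the trial
    user["group"] = random.choice(["control", "intervention"])
    return user["group"]

print(enrol({"consented_to_trial": True}))   # "control" or "intervention"
print(enrol({"consented_to_trial": False}))  # None
```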
Find out more
With good design, some kinds of evaluation, including randomised controlled trials, become quicker and cheaper. This advice will help you make sure you have the right data accessible, and make comparative trials, such as randomised controlled trials, easier to run.
You can read more about how to evaluate digital health products at the Evaluating Digital Health Products website created by Public Health England.