In the ‘first half’ – my first blog post on 5th February – I dived into my role as a Clinical Product Owner. I make no apology for continuing the footballing metaphor in this, the ‘second half’, to share my experience of designing a minimally viable service in the NHS.

Building a team

When I embarked on a discovery phase of work two years ago exploring patients’ experience of rheumatology out-patient services, I was alone. At least that is how it felt. There was a problem (not unique to my own rheumatology service): a problem of two halves. I did not know how to solve it. What I did know was that it was not a job for a lone wolf or a few ‘ahead of the curve’ types. Success would need to come from the sum of a team’s efforts (preferably a community) prepared to do things differently and to keep doing things differently. This is best exemplified by the culture of Spanish giants FC Barcelona, known as the ‘Barcelona Way’: a collective of ‘cultural architects’ with a shared vision for a footballing ‘community culture’, which has led to decades of sustained success. Contrast this with the ‘star culture’ of the ‘Galácticos’ at Spanish rivals Real Madrid, where the individuals were greater than the sum of the parts and success was transient.

The problem of two halves (a quick reminder)

Patients with inflammatory arthritis, who make up the largest group of service-users in rheumatology, require long-term clinical and blood monitoring. Data relating to disease activity and blood tests can frequently become de-synchronised from care planning because of the unpredictable, fluctuating nature of the disease and arbitrary appointment scheduling. The ‘problem of two halves’ is identifying (or predicting) which patients could benefit from more flexible care whilst ensuring clinicians have access to the data to inform the ‘#rightpatient #righttime’ care agenda.
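
To make that de-synchronisation concrete, here is a minimal sketch of how one might flag a patient whose monitoring data has drifted out of step with their booked appointment. The field names and thresholds are illustrative assumptions, not the service’s actual criteria.

```python
from datetime import date, timedelta
from typing import NamedTuple, Optional

# Illustrative only: field names and thresholds below are assumptions,
# not the rheumatology service's actual criteria.

class Patient(NamedTuple):
    patient_id: str
    next_appointment: Optional[date]        # arbitrary, pre-booked clinic slot
    last_blood_test: Optional[date]
    last_disease_activity: Optional[float]  # e.g. a DAS28-style score

def out_of_sync(p: Patient, today: date,
                blood_interval: timedelta = timedelta(days=90)) -> bool:
    """Flag a patient whose monitoring data no longer matches their care plan."""
    bloods_overdue = (p.last_blood_test is None
                      or today - p.last_blood_test > blood_interval)
    # Active disease with a distant appointment suggests care is needed sooner;
    # this is the '#rightpatient #righttime' question in its simplest form.
    active_but_waiting = (p.last_disease_activity is not None
                          and p.last_disease_activity > 5.1
                          and p.next_appointment is not None
                          and p.next_appointment - today > timedelta(days=60))
    return bloods_overdue or active_but_waiting

patients = [
    Patient("A", date(2019, 6, 1), date(2019, 1, 10), 5.8),
    Patient("B", date(2019, 4, 1), date(2019, 2, 20), 2.4),
]
print([p.patient_id for p in patients if out_of_sync(p, date(2019, 3, 1))])
# ['A']: high disease activity but no clinic review for another three months
```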

Designing a minimally viable service (jumpers for goalposts)

The Design Council’s ‘double-diamond’ design methodology underpinned the user-centred design of a minimally viable remote monitoring service; minimally viable (like using jumpers for goalposts) to achieve the simplest sustainable change and to maximise learning. The proposition was based upon two-way SMS communication with users through the exchange of patient-reported outcome measures (PROMs). Assumptions and hypotheses were tested with paper prototypes and user research interviews in the waiting room. The team iterated at pace and within weeks were able to ‘show the thing’ to user groups. An SMS user interface (UI) and a remote monitoring platform were ready to be deployed: the platform had rudimentary data displays and could send, receive and display messages, issue reminders, and show trends in PROM data.
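
As a rough illustration of that proposition, the sketch below models the monthly PROM-by-SMS exchange: recording a numeric reply, deciding when a reminder is due, and producing a simple trend for the clinician-facing display. The class names, question wording and thresholds are assumptions for illustration, not the platform’s real code.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# A minimal sketch of the monthly PROM-by-SMS exchange. Class and field names
# are illustrative assumptions, not the remote monitoring platform's real code.

PROM_QUESTION = ("On a scale of 0 (very well) to 10 (very poorly), how has your "
                 "arthritis affected you this month? Reply with a number.")  # placeholder wording

@dataclass
class PromResponse:
    received_on: date
    score: int  # 0-10

@dataclass
class MonitoringRecord:
    patient_id: str
    responses: List[PromResponse] = field(default_factory=list)

    def record_reply(self, text: str, today: date) -> Optional[PromResponse]:
        """Parse an incoming SMS reply; anything that is not a 0-10 score is
        left for a clinician to 'concierge' as free text."""
        try:
            score = int(text.strip())
        except ValueError:
            return None
        if 0 <= score <= 10:
            response = PromResponse(today, score)
            self.responses.append(response)
            return response
        return None

    def needs_reminder(self, today: date, interval_days: int = 30) -> bool:
        """Send a reminder SMS if no score has been received this cycle."""
        if not self.responses:
            return True
        return (today - self.responses[-1].received_on).days >= interval_days

    def trend(self, window: int = 3) -> Optional[float]:
        """Rolling average of recent scores for the clinician-facing display."""
        recent = [r.score for r in self.responses[-window:]]
        return round(sum(recent) / len(recent), 1) if recent else None
```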

Remote monitoring system on a mobile device

Developing the user interface

When good enough is enough

I recall the 16th of December 2018 as a defining moment. The opportunity to develop a usable clinician interface for data collection (to replace an existing offline Access database) was in competition with other work, and it was simply not a priority. The priorities of this service hinged on meeting the needs of two user groups: patients and clinicians. The highest-value user stories – those focused on optimising patient engagement with the SMS-based UI – were prioritised for delivery first. This was the first time as a CPO that I had to say ‘no’ and really mean ‘no’.

So the remote monitoring (MVP) service went live on the 4th of January 2019 with the first 22 users. Incoming monthly data and unsolicited SMS messages were concierged for clinicians. The beta phase involved further iterative design and optimisation of features to support sustained engagement, now with 117 patients enrolled. This allowed the team to test, collect data and validate the proposition with users in real time. The challenge was keeping focus on ‘winning’ with patient engagement whilst risking ‘losing’ clinicians through the lack of interoperable data. Whilst the team were guided by ‘What does good look like?’, the real question was: ‘When is good enough enough?’ Good enough had to be enough.
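
For a sense of how that beta-phase testing could be quantified, here is a small, hypothetical calculation of monthly engagement (the share of enrolled patients returning a PROM score each month). The sample data is invented purely to show the arithmetic, not taken from the project.

```python
from collections import defaultdict
from typing import Dict, List, Set, Tuple

# Hypothetical beta-phase measure: the share of enrolled patients returning a
# PROM score each month. The sample data is invented purely to show the sums.

enrolled = ["p01", "p02", "p03"]  # in the live beta this was 117 patients

# (patient_id, month) pairs for each valid PROM reply received by SMS
replies: List[Tuple[str, str]] = [
    ("p01", "2019-02"), ("p02", "2019-02"),
    ("p01", "2019-03"), ("p02", "2019-03"), ("p03", "2019-03"),
]

def monthly_engagement(replies: List[Tuple[str, str]],
                       enrolled: List[str]) -> Dict[str, float]:
    """Proportion of enrolled patients replying at least once in each month."""
    responders: Dict[str, Set[str]] = defaultdict(set)
    for patient_id, month in replies:
        responders[month].add(patient_id)
    return {month: round(len(ids) / len(enrolled), 2)
            for month, ids in sorted(responders.items())}

print(monthly_engagement(replies, enrolled))  # {'2019-02': 0.67, '2019-03': 1.0}
```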

The post-match analysis

Rapid feedback from user groups was central to the design/test cycle.  Daily ‘stand-ups’ with the team and regular user interviews informed sprint planning and prioritisation of the backlog of features we might wish to develop (the subs bench). Show and Tell sessions with live demos provided the opportunity to re-engage clinicians when design priorities had shifted between user groups. Team retrospectives (post-match analyses) gave the team the opportunity to reflect on the ‘wins’ and ‘losses’.

We tackled many challenges in engaging users.  We drew on lessons from behavioural nudge theory around incentivising behaviours, including patient self-efficacy, collective value and social norms.  As a result we have seen sustained engagement and increasingly valuable two-way interactions with users.

Show and Tell session, December 2019

Making high value decisions (transfer deadline day)

As a CPO I have had to make many decisions, and they need to be good decisions. Good decisions are grounded in context and in the vision for the work, and they matter most when the degree of uncertainty or risk is greatest. In contrast, indecision and bad decisions waste resources and risk losing momentum, not least because they are often made slowly. No one intends to get it wrong, but it happens. In the same way that product features are prioritised, decisions need prioritising by their importance: getting the important decisions right matters more than always being right. Information gathering is the foundation of good decision-making, so deciding how much time to spend gathering information (on which to make a decision) is a decision in itself.

Consider the frenzied decision-making on transfer deadline day in English football. The decision to buy an often very expensive player is evaluated on a prediction of a positive outcome for the club from signing them. If the player scores 20+ goals in a season they are seen as invaluable: the decision is measured on impact (the team winning), not on cost per goal. A good decision, right? If that same player fails to deliver, the (lack of) impact rests with the individual, who is regarded as an ‘expensive flop’. A bad decision, right? Not all decisions are equal.

The final whistle

The truth is, in service design there is no final whistle. It is an ongoing, iterative process. In the case of the remote monitoring service we have been granted ‘extra time’: an extended ‘proof of concept’ project to roll the service out across the Integrated Care System (ICS) of South East London. Partnering with three NHS trusts across five hospital sites offers both a significant challenge and an opportunity to test the scalability and sustainability of a digital service that depends on the cultural and behavioural change of two user groups. Articulating the potential impact of the service through a ‘theory of change’ model has helped inform the service evaluation, supported by the Public Health England evaluation toolkit and the Health Innovation Network (HIN).

At the time of writing, against the background of the coronavirus pandemic, the future needs of users are ever more uncertain. We anticipate an unprecedented demand to support patients with long-term conditions remotely and to support care out of hospital. There is a pressing responsibility to share the quick wins from a minimally viable NHS remote monitoring service, and to act fast to meet the current challenges facing the NHS. I hope to share more with you in the coming weeks.