
https://dataingovernment.blog.gov.uk/2017/03/20/meeting-user-needs-how-we-receive-and-share-service-data/

Meeting user needs: how we receive and share service data

Categories: Data insights, Performance Platform

Tingting presenting findings from user research into the Cross-Government Service Data proposition

We’re user researchers on the Performance Platform team and we’ve been working on the team’s alpha - called Cross-Government Service Data.

Cross-Government Service Data aims to give people standardised high-level metrics about all government services in one place.

Our work has focused on understanding how best to give our users the data about services they need. And how best to work with the people across government who would provide that data to us.

Here’s what we’ve found:

What our users need

After our discovery research, we identified our core users as people who work with multiple service delivery organisations across government. These are people who need to know and influence where the focus should be in transforming government services.

When we carried out research with these people, we found that:

  • they need a high-level overview of services across government and how they are related: this means we need to give appropriate metrics and we need to provide data that shows the relationship between different parts of government service delivery
  • they need to find, understand and trust the data quickly: this means that not only do we have to be clear about what each metric means, but we also have to help users understand the quality of the data
  • they need to understand the progress organisations are making: this means we have to give our users easy access to consistent historical data to understand trends

What we’re doing for our users

We spent quite a lot of time looking at alternative metrics to show the effectiveness of services. The ‘user satisfaction’ metric that is currently reported on the Performance Platform has several issues.

As an alternative, we explored the idea of value and unnecessary demand, based on the concept of failure demand. This is defined by occupational psychologist John Seddon as ‘demand caused by a failure to do something or do something right for the customer’. We worked with service teams to look at ways we could express failure demand. We found that this was very challenging - as we’ll explain later in this blog post.

We also looked at different ways of defining when and how transactions are completed, because this could be a useful indicator for the quality of services. After testing with our users, we settled on three metrics to show the effectiveness of services:

  • transactions ending with the user’s intended outcome
  • the main reasons that people contact call centres
  • the main reasons applications couldn’t be processed at the first attempt
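
To compare these metrics across government, each service would need to report them in a standard shape. As a rough illustration, here’s a minimal Python sketch of what one standardised record per service and reporting period might look like - the field names are ours and purely hypothetical, not a settled schema:

    # A sketch (all field names hypothetical) of one standardised record
    # per service and reporting period, covering the three metrics above.
    from dataclasses import dataclass, field

    @dataclass
    class ServiceEffectiveness:
        service_id: str               # e.g. "example-service" (made up)
        period: str                   # reporting period, e.g. "2017-Q1"
        intended_outcome_rate: float  # share of transactions ending in the
                                      # user's intended outcome, 0.0 to 1.0
        # main reasons people contact the call centre, with counts
        contact_reasons: dict[str, int] = field(default_factory=dict)
        # main reasons applications couldn't be processed at the first attempt
        rejection_reasons: dict[str, int] = field(default_factory=dict)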

We’re looking at groupings that best show the relationships involved in government service delivery. The groupings we’re exploring are:

  • data based on service themes or user types
  • data from services and organisations that are dependent upon each other
  • data based on shared policy intent
  • data from services using common components, such as GOV.UK Verify
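
To make that concrete, here’s a small sketch of how a single service could be tagged with each of those grouping dimensions, so its data could be rolled up along any of them. The service and values are invented for illustration:

    # A made-up example of one service tagged with each grouping dimension.
    service_tags = {
        "service": "example-licence-application",
        "theme": "licensing",                      # service theme or user type
        "depends_on": ["example-identity-check"],  # dependent services/organisations
        "policy_intent": "safer-licensing",        # shared policy intent
        "common_components": ["GOV.UK Verify"],    # shared components in use
    }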

And we’re exploring how the Cross-Government Service Data interface can best support the four ways we found our users interact with data. These are:

  • surfacing: users take a criteria-driven approach to draw out patterns and insights from the data quickly
  • drilling down: users investigate the data from a high level down to lower levels
  • quick find: users need data about a specific department, agency or service
  • deep dive: users need downloadable raw data to explore it fully, combine it with other data sources and produce reports
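
As a rough sketch of what those four patterns could look like against a single dataset (the records and figures below are made up for illustration):

    import csv
    import io

    records = [
        {"department": "Department A", "service": "service-one",
         "intended_outcome_rate": 0.91},
        {"department": "Department B", "service": "service-two",
         "intended_outcome_rate": 0.97},
    ]

    # surfacing: a criteria-driven filter to draw out patterns quickly
    below_target = [r for r in records if r["intended_outcome_rate"] < 0.95]

    # drilling down: group the high-level data by department, then inspect services
    by_department = {}
    for r in records:
        by_department.setdefault(r["department"], []).append(r)

    # quick find: look up a specific department, agency or service directly
    service_one = {r["service"]: r for r in records}["service-one"]

    # deep dive: export the raw data so users can combine it with other sources
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    raw_csv = buffer.getvalue()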

We’re also working to give users the context and information they need to understand the data. This is information like how we collected the data, how complete it is and where it comes from, and information about the service.
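
One way to picture this is as a small block of metadata published alongside every dataset. Here’s a sketch, with field names that are ours rather than a finished design:

    from dataclasses import dataclass
    from datetime import date

    # A sketch of the contextual metadata that could sit alongside each
    # dataset (field names are illustrative, not a settled design).
    @dataclass
    class DatasetContext:
        source: str             # which organisation supplied the data
        collection_method: str  # e.g. "automated feed" or "manual monthly upload"
        coverage: str           # what the data does and doesn't include
        completeness: float     # share of the reporting period actually covered
        last_updated: date      # when the data was last refreshed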

What the people who give us data need

The data we will show on Cross-Government Service Data will come from people across government. The value of our product depends on the quality of the data that is supplied to it.

This means that as well as looking at the needs of the people who will use our data, we also had to look at the needs of the people who will give us that data, to make things as easy as possible for them.

Here’s what we found about what the data suppliers need:

  • they need us to understand the context they work in: they may have legacy systems, have to upload data manually or rely on others to provide the data, and the data captured can vary from service to service
  • they need to feel confident showing their data on our site: we found that some service teams were concerned they would be unfairly judged if they opened up their data, so we need to make sure we’re showing their data in a fair and comprehensive way

What we’re doing for the people who give us the data

As we previously said, we explored the concept of failure demand. But we found that it was hard for the data providers to give data that showed this.

We visited contact centres to understand the information they capture, and then we ran card-sorting exercises asking service teams to look at the reasons people contact service call centres and the reasons that casework cannot be processed at the first attempt. We asked the teams to group these into ‘valuable’ and ‘unnecessary’. We found that deciding whether something is valuable or not is subjective - the same item could be considered valuable by one service but not by another.

So we’re focusing on collecting and displaying data showing the reasons people contact a service and the reasons that casework cannot be processed at the first attempt, based on what we found in these card-sorting exercises.

Services are already tracking completion rate on the existing Performance Platform, but this only shows completion rates for submissions that both start and end online. From our discovery, we found it’s important to help people understand the whole journey, not just the online part. We’ve been working with service teams to look at different ways to help our users understand the whole journey.

We’re looking at two new metrics:

  • ‘transactions ending in an outcome’: this is where a transaction is finalised in one way or another - for example, if a user is paying money to a prisoner, that money would either be paid to the prisoner or the payment would be rejected; either way, the transaction has ended
  • ‘transactions ending in the user’s intended outcome’: this is where the outcome is what the user set out to achieve - for example, successfully paying money to a prisoner
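
To show how these two metrics differ, here’s a short worked example using the prisoner payment scenario, with made-up figures:

    # Made-up figures for illustration: of 1,000 attempted transactions,
    # 850 payments reach the prisoner, 100 are rejected, 50 are abandoned.
    succeeded = 850   # the user's intended outcome
    rejected = 100    # a final outcome, but not the one the user wanted
    abandoned = 50    # no final outcome recorded
    total = succeeded + rejected + abandoned

    # 'transactions ending in an outcome': finalised one way or another
    outcome_rate = (succeeded + rejected) / total  # 950 / 1,000 = 0.95

    # 'transactions ending in the user's intended outcome'
    intended_outcome_rate = succeeded / total      # 850 / 1,000 = 0.85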

We are continuing to work closely with data suppliers because we want to make it easier for them to provide data to us. Doing this will enable us to better meet the needs of our data users and ensure the success of Cross-Government Service Data.

And while we move towards that, we’re continuing to work with them to provide data to the current Performance Platform using the existing KPIs.

Follow GDS on Twitter, and don’t forget to sign up for email alerts.


1 comment

  1. Comment by Ange posted on

    It's always the customer (not the service) that defines value, which would explain why you found the failure demand exercise so subjective. And the customer isn't always the person carrying out the transaction.

    You did the right thing by going to the Contact Centres but that's where you should study demand, in its raw form. As soon as you take the details away from that context, and ask services to assess them, then you lose the meaning, as the services will (as you seem to have found) apply their own (rather than their customers') thinking and constraints to what they see - they'll try to rationalise it against what they believe to be true, which can be very different to what the customer actually wants and needs. Accepting that the Customer, and not the Service, is the expert can be a big leap for many!

    Your "transactions ending in user’s intended outcome" measure is on the right track - user's intended outcome 'should' equate to value, but you'll need to carefully define each intended outcome for each service by studying demand for each one, and by stating it in your customers' terms, not those of the service.

    So, for example, "successfully paying money to a prisoner" may not define the outcome well enough from the customers' point of view ... are they, for example, also concerned about how quickly the prisoner can access the money? If so, then getting it paid but too slowly will result in Failure Demand ("when will they get the money I've paid?") - it might help to think of the Prisoner and not the payer as the ultimate customer here.