
https://dataingovernment.blog.gov.uk/all-service-assessments-and-self-certification/hmrc/tax-credits-change-of-circumstances-beta/

Tax Credits Change of Circumstances - Beta Assessment

Tax Credits Change of Circumstances allows users to view and update the information used to calculate their tax credits.

Department / Agency:
HM Revenue & Customs (HMRC)

Assessment Date:
24 March 2015

Assessment stage:
Beta

Result of Assessment:
Not passed

Lead Assessor:
A. Lister

Service Manager:
J. Robertshaw

Digital Leader:
M. Dearnley


Assessment Report

The Tax Credits Change of Circumstances service is seeking permission to launch on a service.gov.uk domain as a Beta service.

Outcome of service assessment

After consideration we have concluded that the Tax Credits Change of Circumstances service should not be given approval to launch on the service.gov.uk domain as a Beta service.

Reasons

While the service team have worked successfully in many areas, the service is not yet ready to pass a beta assessment and be made publicly available. It is a key requirement at a beta assessment to be able to show the service, so that the assessment panel can make observations on it which will be made public. The team were not able to show the complete end-to-end service at the assessment.

The HMRC team demonstrated a thorough understanding of the current service’s business processes and policy. However, the focus on operational requirements has meant that user needs have not been at the forefront of the service’s design, and the service is therefore not yet meeting user needs.

The points against which the service did not pass are:

User needs (points 1 and 20)

It’s important to be able to describe user needs in language that users themselves would understand. The service’s design focuses on current process and policy constraints, which have not been challenged in order to develop a service that can demonstrably meet user needs.

The panel were concerned that there has been a degree of ‘testing for success’ in the user research. Lab research has been scenario-based, with users assuming fictitious identities to complete pre-determined tasks, rather than allowing users to engage with the service as if using it ‘for real’. Findings have been captured in terms of user preference rather than objective analysis. This means the research is more likely to validate preconceptions than to produce objective, actionable insight which can be used to improve the service.

Testing the end-to-end service (point 17)

The team could not show the end-to-end service. This means that no end-to-end testing with a live or live-like service has been completed. At a beta assessment, the panel needs to see the actual service that will be made public on GOV.UK.

Lab usability testing was with the prototype, not the real service. Whilst this is appropriate early in development, there are differences between prototypes and production systems, and not testing the real service with users poses a high risk.

A simple and intuitive service where users succeed first time (point 9)

The completion rates in an earlier Beta service were high; however, the elements of the service presented at assessment had significant usability issues. Examples included:

  • content which is awkward or uses jargon, such as ‘Does this child have a contract to employment?’;
  • dead-end routes from which the user cannot return to the service;
  • poor validation which often, and unnecessarily, requires the user to enter data in a highly specific format. For example, ‘eg if the total of all amounts shown on your P60 or P45 is £15,453.99, enter 15453’ - the user should be able to enter values as they are written (a sketch of more forgiving input handling follows this list).
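As an illustration of what more forgiving validation could look like (this is not the service’s code; the function name and rules are our own assumptions), a short sketch in TypeScript:

    // Illustrative sketch only - not the service's actual code. Shows one way to
    // accept an amount "as it is written" on a P60 or P45 (e.g. "£15,453.99")
    // and derive the whole-pounds figure the service needs, instead of forcing
    // the user to re-type it in a narrow format.
    function parsePoundsFromUserInput(raw: string): number | null {
      // Strip currency symbols, commas and whitespace rather than rejecting them.
      const cleaned = raw.trim().replace(/[£,\s]/g, "");
      if (!/^\d+(\.\d{1,2})?$/.test(cleaned)) {
        return null; // genuinely unreadable input still fails validation
      }
      return Math.floor(Number(cleaned)); // "£15,453.99" -> 15453
    }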

The service currently has questions which are unnecessary for some users. A significant number of fields are optional and carry hint text such as ‘Leave blank if...’. The service would benefit from using techniques such as progressive disclosure to ask only the questions that are necessary and to guide the user.
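A minimal sketch of progressive disclosure in this context is given below; the questions, field names and TypeScript form are illustrative assumptions, not taken from the service:

    // Hypothetical example of progressive disclosure: a follow-up question is
    // asked only when an earlier answer makes it relevant, rather than being
    // shown as an optional field with "Leave blank if..." hint text.
    interface Question {
      id: string;
      text: string;
      isRelevant: (answers: Record<string, string>) => boolean;
    }

    const questions: Question[] = [
      { id: "paysForChildcare", text: "Do you pay for childcare?", isRelevant: () => true },
      {
        id: "weeklyChildcareCost",
        text: "How much do you pay each week?",
        isRelevant: (answers) => answers["paysForChildcare"] === "yes",
      },
    ];

    // Only the questions relevant to this user's earlier answers are presented.
    const questionsToAsk = (answers: Record<string, string>) =>
      questions.filter((q) => q.isRelevant(answers));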

The service captures information from the user on their change of circumstances. The Beta development has been used to move away from free-text capture towards structured data, but this work is not yet complete. There are still a number of areas where the service uses free text to capture what should be structured data, which means users are required to deal with complexity that the service should be removing.
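As an invented illustration (the categories and field names are not the service’s own), structured capture might model each change as typed data, keeping free text only as a fallback:

    // Invented example of structured capture for a change of circumstances.
    // The service, rather than the user or a caseworker, absorbs the complexity.
    type ChangeOfCircumstances =
      | { kind: "childLeftEducation"; childName: string; dateLeft: string }
      | { kind: "workingHoursChanged"; newHoursPerWeek: number; effectiveFrom: string }
      | { kind: "other"; description: string }; // free text only as a fallback

    function describeChange(change: ChangeOfCircumstances): string {
      switch (change.kind) {
        case "childLeftEducation":
          return `${change.childName} left full-time education on ${change.dateLeft}`;
        case "workingHoursChanged":
          return `Working hours changed to ${change.newHoursPerWeek} a week from ${change.effectiveFrom}`;
        case "other":
          return change.description;
      }
    }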

The team had been working with their content designer to ensure the service is highly accessible. To validate that the service works for users with varied needs, the team should conduct research with users who have a range of accessibility needs, including lab usability sessions, and/or commission an external audit.

Recommendations to meet the criteria not passed

User needs must be explored objectively and in detail. With evidenced user needs, process and policy can be constructively challenged to create a service that’s right for users whilst being safe and secure.

The service, exactly as it would be if made public, must be tested end-to-end with real users working with their own identities and details. This includes the GOV.UK start and end pages. The assessment panel needs to see the complete service and understand how it has performed with actual users.

To ensure that the service is simple and intuitive, the content, design and interactions need to align with GOV.UK standards and patterns, and the user journey should be simplified as much as possible. Again, user research should take place to evaluate the effectiveness of the changes made.

Additional comments

There were a number of other points of the standard which the service passed, but on which the panel made observations; these are as follows:

The Team

The service has two fully staffed teams, each with its own product manager. The delivery methods are largely in line with those set out in the service manual.

The extracts of the service shown suggested that the features and components developed would be ‘tied together’ at a point closer to the date on which the service will be made public. This is a high-risk approach which could cause significant problems.

The team also explained that they had been working with a Technical Architect, but that this was no longer the case. The Technical Architect role needs to be continuous throughout the delivery.

Security, Privacy, Tools and Standards

The team have clearly thought about the security and privacy aspects of the service, but the assessors had some concerns about the approach to handling broken households. We recommend more exploration of how to meet user needs - and user safety - in that context, and would look for a more robust approach to be developed during the Beta period.

We discussed the security and privacy elements of the overall process, and there is more work to be done as the service is connected to the backend systems later in the year. In particular, the team will need to look more closely at how authority is delegated within households and the potential sensitivities there.

The team indicated that there is a legislative requirement to show people the full set of details on which their case would be determined, and that this would be met in the next major iteration. Those requirements will need to be very clearly understood and, if genuine, will require careful design to protect users’ privacy and security.

Improving the service

Everything appears to be in place to improve the service and the team clearly has a good understanding of their relationship with the core HMRC Digital operations team and the division of responsibilities.

Design

The service generally looks like a GOV.UK service; however, the delivery imperative meant that questions focused more on fitting the current process’s data requirements than on helping users provide the information they need to provide.

Assisted digital and channel shift

The approach to assisted digital is comprehensive and well considered. The team had responded to feedback from alpha and undertaken extensive research to identify user needs. This included speaking to assisted digital representatives across government, the contact centre lead for this service, relevant charities and focus groups. The recruitment company were unable to find assisted digital users with the lowest digital skills and access, so the user researcher took the initiative and went to places where she knew potential users would be. Based on findings from this user research, the team has developed a plan for testing a variety of proposed support in beta.

The team has a good strategy for digital take-up next year but will need to think beyond that for the Live assessment.

Analysis and benchmarking

The team has a dedicated analytics specialist and a clear understanding of analytics as a tool for measuring the success of the service. They had been measuring completion rate for some time and were aware of factors that affected this important metric on their service. They were also able to explain the value of other metrics such as user satisfaction and cost per transaction.

The team had a clear plan in place to deliver the 4 KPIs on a dashboard on the Performance Platform, and could comfortably talk through the specific details of delivering each metric.


Digital by Default Service Standard criteria

Criteria Passed Criteria Passed
1 No 2 Yes
3 Yes 4 Yes
5 Yes 6 Yes
7 Yes 8 Yes
9 No 10 Yes
11 Yes 12 Yes
13 Yes 14 Yes
15 Yes 16 Yes
17 No 18 Yes
19 Yes 20 No
21 Yes 22 Yes
23 Yes 24 Yes
25 Yes 26 Yes