https://dataingovernment.blog.gov.uk/parliamentary-questions-tracker-tool-self-certification/

Parliamentary Questions Tracker Tool - Self Certification

Parliamentary Questions (PQs) are a mechanism by which MPs can ask questions of government departments on behalf of their constituents or for their own political needs. Within the Ministry of Justice, the distribution, tracking and answering of PQs is managed by the Parliamentary Branch.

The bulk of PQs received are written questions, submitted to Parliament daily, with roughly 100 received per week. They must be answered within an allocated time (2 to 10 days), which varies depending on the type of question.

The primary users of the Parliamentary Questions Tracker service are the members of Parliamentary Branch administering the process. The service has a wider audience of “action officers” who receive the questions and author the responses. In total the service has approximately 400 users, of which 40 are frequent and active.

Department / Agency:
MOJ

Date of Original Assessment:
19/09/14

Date of Reassessment:
29/04/15

Assessment Stage:
Live

Result of Original Assessment:
Not passed

Result of Reassessment:
Pass

Lead Assessor:
G. Sheldrake (Original) / E. Fineberg (Reassessment)

Service Manager:
M. Madden (Original) / R. Waite (Reassessment)

Digital Leader:
M. Coats


Reassessment Report

29th April 2015

The service was assessed against all 26 points of the Digital by Default Service Standard.

Outcome of service assessment

After consideration, the assessment panel has concluded that the Parliamentary Questions (PQs) Tracker service has shown sufficient evidence of meeting the standard. This means the service can now remove its beta branding.

Reasons

Areas of good performance against the standard included the following.

Security, privacy, tools and standards

Since the previous assessment the team has acted on recommendations to introduce an automated test suite and continuous integration, and now performs small releases (usually daily or more frequently) rather than large, infrequent ones. Parliamentary Branch has access to the staging environment to approve new features.

The team has completely rebuilt the state machine logic internally and has refactored the code base for legibility. The code is now well structured.
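The report does not describe the rebuilt state machine itself. Purely as an illustration of the pattern, the sketch below (in Python, with hypothetical state and transition names rather than the Tracker's actual workflow) shows the kind of explicit transition table that makes this sort of logic easier to read and test.

    # Illustrative only: hypothetical PQ workflow states and transitions,
    # not the actual states used by the PQ Tracker.
    class InvalidTransition(Exception):
        pass

    # Each state maps to the set of states it may legally move to.
    TRANSITIONS = {
        "unassigned": {"with_action_officer"},
        "with_action_officer": {"draft_pending", "rejected"},
        "draft_pending": {"with_minister"},
        "with_minister": {"answered", "draft_pending"},  # a draft may be sent back
        "rejected": {"unassigned"},
        "answered": set(),  # terminal state
    }

    class PQ:
        def __init__(self, uin: str):
            self.uin = uin  # unique identification number of the question
            self.state = "unassigned"

        def transition_to(self, new_state: str) -> None:
            """Move to new_state only if the transition table allows it."""
            if new_state not in TRANSITIONS[self.state]:
                raise InvalidTransition(f"{self.state} -> {new_state}")
            self.state = new_state

    if __name__ == "__main__":
        pq = PQ("900001")
        for state in ("with_action_officer", "draft_pending", "with_minister", "answered"):
            pq.transition_to(state)
        print(pq.state)  # answered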

Test coverage has improved vastly, from no existing tests to around 80%. The team has focused on user-centric feature tests, and all new code is unit tested. The team's working practices enable continuous delivery and improvement.
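To illustrate the distinction between unit tests and user-centric feature tests, a minimal pytest sketch follows. It is not the service's actual test suite, and it assumes the hypothetical state-machine example above is saved as pq_workflow.py.

    # Illustrative only, not the service's actual test suite. Assumes the
    # hypothetical state-machine sketch above is saved as pq_workflow.py.
    import pytest
    from pq_workflow import PQ, InvalidTransition

    def test_cannot_answer_an_unassigned_question():
        # Unit test: checks a single transition rule in isolation.
        pq = PQ("900001")
        with pytest.raises(InvalidTransition):
            pq.transition_to("answered")

    def test_question_can_be_taken_from_receipt_to_answer():
        # Feature-style test: walks a question through a whole user journey.
        pq = PQ("900001")
        for state in ("with_action_officer", "draft_pending",
                      "with_minister", "answered"):
            pq.transition_to(state)
        assert pq.state == "answered"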

Design

The panel was impressed with the implementation of responsive design for an internally facing application, which makes mobile working much easier. Analytics showed that users accessed the tool via mobile browsers during the beta phase.

GDS design standards have, for the most part, been applied where appropriate.

Analysis and benchmarking

By tracking how quickly Parliamentary Questions are answered, the service continually measures Parliamentary Branch’s key performance indicator. This is a valuable addition, as it captures the business value of the tool in a clear and measurable way.

More detailed metrics are now also in place to measure where bottlenecks are occurring in the process. Focusing on these metrics will support more data-driven design decisions in future iterations.
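Neither the KPI nor the bottleneck metrics are defined in detail in the report. As a rough sketch only, assuming hypothetical per-question records of deadline, answer date and days spent at each stage, the two measures might be computed along these lines in Python.

    from datetime import date
    from statistics import mean

    # Hypothetical records: deadline, date answered and days spent at each
    # stage of the workflow. Illustrative data only.
    pqs = [
        {"uin": "900001", "deadline": date(2015, 4, 8), "answered": date(2015, 4, 7),
         "days_in_stage": {"allocation": 1, "drafting": 4, "sign_off": 1}},
        {"uin": "900002", "deadline": date(2015, 4, 7), "answered": date(2015, 4, 9),
         "days_in_stage": {"allocation": 2, "drafting": 3, "sign_off": 2}},
    ]

    # KPI: proportion of questions answered on or before their deadline.
    on_time = [pq for pq in pqs if pq["answered"] <= pq["deadline"]]
    print(f"Answered on time: {len(on_time)}/{len(pqs)} "
          f"({100 * len(on_time) / len(pqs):.0f}%)")

    # Bottleneck view: average days spent at each stage of the process.
    for stage in pqs[0]["days_in_stage"]:
        avg = mean(pq["days_in_stage"][stage] for pq in pqs)
        print(f"Average days in {stage}: {avg:.1f}")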

Recommendations

User needs

The main focus of the work since the last assessment has been to stabilise the code, with user research and subsequent action on the findings treated as a lower priority.

As such, the level of research so far can be considered minimally adequate. The assessment panel recommends that future iterations focus much more on user needs, and that only backlog features known to be of value are deployed.

User research has been adequate given the small size of the primary user base (approximately 40 users). Efforts should be made to carry out user research with people who are completely unfamiliar with the process. Doing so will help ensure that the service is part of a wider process transformation, rather than simply an automation of current processes.

The team

From now on the team will depend on ad hoc design and research support rather than a dedicated designer and user researcher. The panel therefore recommends that this support is planned and coordinated through a documented research plan, to ensure a genuinely well-considered outcome.

The panel was shown evidence that the product will be moved into the product team that supports and iterates services internal to MoJ. Efforts should be made to pair with this team so that knowledge is transferred in such a way that there are no gaps in the product management of the tool. This is especially important given the value of the features in the backlog.

Design

The design patterns remain consistent visually, functionally and at an interaction level with the version of the service previously assessed.

The panel recommends that the outstanding design, UX and interaction improvements identified at the last assessment are made. There should be particular focus on improving:

  • early bird view
  • filtering
  • bulk actions
  • layout and navigational hierarchy

There are UI/UX design inconsistencies around Trim Link upload between the dashboard and PQ details pages that should be addressed. Navigation between the following pages should be clarified:

  • PQ dashboard to PQ detail page
  • PQ dashboard to address lists
  • PQ dashboard to report pages

Analysis and benchmarking

The panel recommends that effort be put into analysing user journeys through the application using web analytics packages. The panel appreciates that this is a difficult goal given the non-linear nature of the application; however, in the absence of a high volume of users to research, this quantitative data would highlight appropriate UI or design improvements.
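The report does not name an analytics package or event format. Purely as an illustration, assuming page-view events exported as (user, timestamp, page) tuples with hypothetical page names, a first pass at journey analysis could count the most common page-to-page transitions.

    from collections import Counter
    from datetime import datetime

    # Hypothetical exported page-view events: (user id, timestamp, page name).
    events = [
        ("u1", datetime(2015, 4, 20, 9, 0), "dashboard"),
        ("u1", datetime(2015, 4, 20, 9, 1), "pq_detail"),
        ("u1", datetime(2015, 4, 20, 9, 4), "dashboard"),
        ("u2", datetime(2015, 4, 20, 9, 2), "dashboard"),
        ("u2", datetime(2015, 4, 20, 9, 3), "report"),
    ]

    # Group events into per-user journeys, ordered by time.
    journeys = {}
    for user, ts, page in sorted(events, key=lambda e: (e[0], e[1])):
        journeys.setdefault(user, []).append(page)

    # Count page-to-page transitions across all journeys to see which routes
    # through the application are most common.
    transitions = Counter()
    for pages in journeys.values():
        transitions.update(zip(pages, pages[1:]))

    for (src, dst), count in transitions.most_common():
        print(f"{src} -> {dst}: {count}")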


Summary of Original Report

19th September 2014

The service was assessed against all 26 points of the Digital by Default Service Standard.

Outcome of service assessment

After completing the assessment of the Parliamentary Questions (PQs) Tracker, we have concluded that the service does not show sufficient evidence of meeting the standard. This means the service should not remove its beta branding.

Reasons

The assessment panel felt that the product team had not had the opportunity to go through alpha and beta assessments, which would have helped them prepare for the ‘live’ assessment. Because of this, planning assumptions were made outside the team, which meant the product owner and team were not empowered to answer many of the assessment points sufficiently.

Some successes of the project included:

  • Developing engagement with stakeholders by co-building the product: the team succeeded in getting a previously disengaged team to feed into weekly sprints.
  • The team delivered design changes in an agile way, feeding research into design iteratively.
  • The nature of the service meant the team had to resolve design challenges beyond those covered by GDS, and they developed new patterns and elements as a result.
  • The team demonstrated a good understanding of the service journey and its challenges, and had mapped the service end to end.

Recommendations

Point 1

The team has suffered from not having a researcher. They were not able to demonstrate sufficient understanding of their secondary users or enough usability testing to back up their design decisions.

Points 2, 14, 19, 20

The product owner was not empowered to answer any questions on the future state of the product or the team. This affected the team's ability to answer many questions on these points sufficiently during the service assessment.

Point 13

Design developments on elements and patterns were not sufficiently backed up by research, nor demonstrated as being actively shared, discussed or worked on with other MOJ or GDS teams. Because of this, the consistency and logic of the design felt underdeveloped, which could be an issue when handing over to a new team.

Points 21, 22

The team were not able to clearly demonstrate benchmarks for success, in terms of either completion or satisfaction.

  • The product owner was not empowered to get access to a full team or to run full phases of the process; discovery and alpha appear to have been merged. Throughout the project various team members have been missing, including a researcher, content designer and delivery manager. Going forward, ensure the product owner can demonstrate that a full team will be available.
  • The product owner was not empowered to answer how the service will be developed and supported in the future, as they have been told they are coming off the project. Ensure the product owner is empowered to answer how product developments can be enabled on an ongoing basis.
  • Further research is required to build a more complete understanding of users, in particular secondary users' needs. More user testing is needed to build this understanding, and the team will need a user researcher.
  • Plan to test the service with new primary user recruits prior to their onboarding, to see if the service is intuitive to use.
  • Demonstrate design consistency, with a logic for new elements backed up by usability testing on the product.
  • Demonstrate how the team has developed new patterns and elements. Show that they have worked with other similar MOJ / GDS products, shared learnings, and are active on the hackpad and other cross-government tools.
  • Plan a baseline means of measuring the success of the service. Monitor and show whether PQs are answered on time (completion, progress rates, commissioning within the service), with a view on how this should be made visible.
  • Map the service change so that the benefit of the change can be better understood and shared, and demonstrate the benefits that have been delivered as a result.
  • Ensure the product owner is empowered to articulate and demonstrate a vision for the future of the product. What features are in the backlog? What user and business benefits do these address? How will future service developments and proposals for Minimum Viable Products (MVPs) be made possible?

Summary

The panel were impressed by the service, the complexity of the challenge, and the quality with which it has been delivered in a short time. However, because many of the standard points could not be answered sufficiently, the product is not ready to move from beta to a ‘live’ service.


Digital by Default Service Standard criteria

Criterion   Passed      Criterion   Passed
1           Yes         2           Yes
3           Yes         4           Yes
5           Yes         6           Yes
7           Yes         8           Yes
9           Yes         10          Yes
11          Yes         12          Yes
13          Yes         14          Yes
15          Yes         16          Yes
17          Yes         18          Yes
19          Yes         20          Yes
21          Yes         22          Yes
23          Yes         24          Yes
25          Yes         26          Yes