Our ‘effective delivery’ principle in action: Updating a scientific e-licensing service


Blog posted by: , 29 September 2022 – Categories: DDaT Strategy, Delivery.

[Image: two colleagues in front of a whiteboard having a discussion]

‘Effective delivery’ is a principle in our Digital, Data and Technology (DDaT) Strategy 2024. To be effective at delivery, teams need to be led by user research and use commonly agreed standards.

To deliver effectively, teams follow a process of iteration, collaborating closely with stakeholders and regularly sharing progress with the rest of the organisation. Teams whose leaders trust them to deliver are empowered to work with autonomy and are more effective as a result.

We want to share an example of effective delivery in practice - replacing one of our scientific e-licensing services.

Taking a service design approach to our e-licensing service

Scientists often need to do sensitive research, for example to help them find vaccines and develop treatments for cancer.

To do research, scientists need a licence for themselves and their project, and they need to carry out the work at a licensed establishment. Licence applications are rigorously assessed to ensure research programmes are compliant with the law and that the benefits outweigh the harm caused to the research subjects.

When applying for a licence, scientists need to draft, save, amend, view and, in some circumstances, get their applications endorsed.

In 2017, our product team was tasked with replacing one of our services.

Faced with a poorly performing service, a frustrated user base and a hard deadline, our team chose to take a service design approach. We spent time understanding the needs of users rather than building a direct replacement based on programme requirements.

Learning from user research to improve service design

To understand the needs of users, our team did comprehensive user research during discovery. We used the discovery findings to identify the service’s biggest problems and start solving them.

We then designed, tested and iterated different elements of the service to fix these problems, using patterns and components from the GOV.UK Design System and the Home Office Design System, and following the Technology Code of Practice. This included a redesigned application form, a single user account to access multiple establishments, and the ability to share licences with colleagues.

In total we held over 500 research sessions, engaging with 61% of establishments across the UK.

Communicating realistic timelines to stakeholders

Our team communicated regularly with stakeholders ahead of the service launch to manage expectations.

To ensure deadlines were met, our team estimated the amount of work we needed to do and came up with a detailed project plan. This helped us manage the scope of the project to meet the most important needs of our users ahead of our delivery deadline.

We were open with our stakeholders - discussing our roadmap, sharing progress at regular show and tells and engaging users in design workshops.

Leading up to our delivery deadline we were realistic about what we could achieve. We kept our stakeholders informed of the risks of including too much too soon, which could have affected the integrity and sustainability of the whole service.

Instead, we stayed focused on building a basic, working service that we could continuously develop.

Having the trust of senior leaders was critical. We were clear about the impact any demands might have on our delivery and openly discussed technical constraints.

Scaling the team for different stages of the project

Our multi-disciplinary team continuously adapted its size depending on the stage of the project.

During discovery, our team was deliberately small with a product lead and a user researcher. Designers, developers, two researchers, a technical lead and a delivery manager then joined the team as we began to undertake intensive research and design sprints.

Iterating based on feedback and research

All the organisations were moved on to the new service in August 2019.

After the initial service release, our team focused on fixing bugs. In the first few months, we released changes on average 30 times per month.

Our team then started adding more functionality, prioritising based on what would deliver the most value. We continue to iterate and improve based on feedback and data.

Our backlog is formed from this ongoing feedback, research and data.

The service is measured against several key performance indicators, which show the progress we’ve made since it was updated. In a recent survey, 79% of users said they were satisfied with the service, up from 47% in 2018.

Continuous testing is essential for our user experience

We can rapidly release frequent updates to the service due to our use of automated testing and fast deployment pipelines.

For each release, we run around 400 small integration tests, where different components of the service are tested as a group to make sure they work together as expected. This is followed by testing end-to-end user journeys.
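To give a flavour of what one of these small integration tests looks like, here is a minimal sketch. It assumes a Node.js service tested with Jest and supertest; the routes, module path and field names are illustrative only and are not the real service’s API.

```typescript
// Illustrative integration test: two parts of the service (the application
// routes and the data store behind them) are exercised together.
import request from 'supertest';
import { app } from '../src/app'; // hypothetical application entry point

describe('licence application drafts', () => {
  it('lets an applicant save a draft and read it back', async () => {
    // Create a draft application
    const created = await request(app)
      .post('/applications')
      .send({ title: 'Draft project licence' })
      .expect(201);

    // The same draft should be retrievable straight away
    await request(app)
      .get(`/applications/${created.body.id}`)
      .expect(200)
      .expect(res => {
        if (res.body.title !== 'Draft project licence') {
          throw new Error('draft was not persisted');
        }
      });
  });
});
```

Hundreds of small, fast tests like this run on every release, before the slower end-to-end journey tests.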

Automated system alerts tell us if issues occur in the live service, and regression tests run overnight to alert us to any breakages.
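As an illustration only, the kind of lightweight scheduled check that can feed those alerts might look like the sketch below; the health-check path and alert webhook are hypothetical, not the service’s real endpoints.

```typescript
// Illustrative overnight check: call the service and raise an alert if it
// is not responding as expected. Assumes Node 18+ (global fetch) and that
// a scheduler (e.g. cron or a CI schedule) runs this script each night.
const SERVICE_URL = process.env.SERVICE_URL ?? 'https://licensing.example.gov.uk';
const ALERT_WEBHOOK = process.env.ALERT_WEBHOOK ?? '';

async function checkService(): Promise<void> {
  const response = await fetch(`${SERVICE_URL}/healthcheck`);
  if (!response.ok) {
    // Notify the team that the live service is failing its health check
    await fetch(ALERT_WEBHOOK, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: `Health check failed with status ${response.status}` }),
    });
    process.exitCode = 1;
  }
}

checkService();
```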

Redesigning, not just replacing, a service

By redesigning the service, rather than simply replacing it, we’ve delivered an improved service on time and to budget. It is now in public beta and meets all 14 points of the Service Standard.

Read more about how we're implementing the principles of the DDaT Strategy in the links below.
