The 5-year beta phase: how we prepared for live assessment
Blog posted by: Sharondeep Shergill, Alex Saunders, Jim Warren and Uchenna Ndikom, 02 February 2021 – Categories: live assessment, Our services.
We’re part of the Office of the Public Guardian (OPG) Digital team. Find out how we recently took one of OPG’s services through a live service assessment.
When someone loses the mental capacity to make their own decisions, the court can appoint a ‘deputy’ to make key decisions about finances, property, or healthcare on that person’s behalf.
To help OPG ensure deputies make decisions in the person’s best interests, we ask all deputies to complete a report each year, detailing the decisions they’ve made. They do this using the Complete the deputy report service.
Let us rewind…
The service went through a beta assessment back in February 2016. From there we launched the service to our first user group: ‘lay deputies’. They’re people who are friends and family members of the person who has lost mental capacity. Since then the service has been rolled out to two further user groups, ‘public authority deputies’, such as local councils, and ‘professional deputies’, such as solicitors. We’ve also extended our service so OPG staff can use it to process deputy reports once they’re received.
How do you know you're ready for live?
Whilst there are still more features and functionality to add to the service, we knew we were ready to move to live, because we felt the service had achieved what it had originally set out to do. We found referring back to our product vision to be helpful here. If your service has realised its original vision, then it’s likely you are ready to move on. For us, that meant we’d built a service that was “an easy-to-use online platform for deputies to find guidance, submit annual reports and provide further evidence to OPG,” and that “provides data to OPG in a format that is efficient to analyse to better support deputies and safeguard clients”.
Thinking ahead to the future of the service in live, we knew there would be more focus than in previous assessments on:
- how the service’s ongoing value is measured and what performance metrics are captured
- how the team will be sustainably funded going forwards
- the onboarding process for new joiners and how it supports team continuity
Preparation, preparation, preparation!
In order to start thinking about the live assessment process, we had a virtual team kick-off. As part of this session, we went through each service standard.
We discussed the evidence we could provide to support it, assigned an owner who would be responsible for collating the evidence and identified any outstanding work required to support the standard. This step was invaluable as it allowed us to treat the assessment like any other epic, including scoping out and pointing tickets so we had a rough idea of when we would be ready.
Reach out to your local lead assessor if you have one, or to the GDS service assessment team, for a rough guide to the technical questions asked.
Preparing for the tech pre-call involved revisiting our beta assessment so we could document the progress we’d made over the years and revisiting the reasons behind technology choices. We discussed with our local lead assessor likely questions that would be included in the call and collaborated on talking points around the subjects in a shared document.
We had some preconceived notions about the level of detail the pre-call would go into regarding specific implementation details of our service - mainly that there would be specific questions on pieces of code - but these were unfounded. The focus at live assessment was on ensuring our service was:
- straightforward to deploy
- well tested
- recoverable from a disaster
We proved we met the above points by talking through our ways of working, showing various diagrams of the steps involved in making a release and providing anecdotes about times we've had to recover from disaster (the sweat-inducing time we dropped the live database, for example!).
We also had to prove that our service would be supported in the long term after the assessment was completed. This involved talking through our long-term road map and the future funding that was in place for our team and hosting costs.
It’s worth looking at any areas that require work well in advance of the tech call and trying to either fix, mitigate or detail future features required to plug any gaps.

Our top tips for any assessment:
- provide a glossary of commonly used terms that may be well known in your domain area but less clear to the assessment panel
- have good examples and anecdotes to share that show real-world examples of what happens with your service
- have maps illustrating how things work, both at a high service level and as a detailed path to live
And for remote assessments specifically:
- have an agreed order of team intros - it saves you from the inevitable awkwardness of who goes next!
- ask for breaks as and when you need them so you can stretch your legs, get a break from the screen, or grab a cuppa!
- give whoever is sharing slides the slide number you are referring to
If we were going through this process again for a different service, we would definitely keep a living document that evidences how we’re meeting the service standards as we go. Doing this means decisions can be documented while they’re still fresh in everyone’s minds. It will also reduce the amount of preparation work needed to put the assessment slide deck together.
We’d also say to make sure that you’re building to the service standards as part of your routine planning. For example, when you’re next creating epics or features as part of an epic kick-off, ask yourself how your approach links back to the service standard.
If you have any questions or would like to know more, leave us a comment below.