Government Digital Service (GDS)

How the Ministry of Justice explored using a chatbot

How the Ministry of Justice (MoJ) explored a chatbot and compared users’ experience of it with flat website content.

Summary

MoJ:

  • ran their project from June to August 2018 with an iteration extending into September
  • had a team consisting of a service designer, content designer, user researcher and developer
  • trialled the Landbot tool for their chatbot

Objectives

The goal was to run a proof of concept project on the Child Arrangements Information Tool (CAIT), to understand how users would interact with content delivered through a chatbot. The team wanted to know whether separated parents are more likely to engage with content delivered by a chatbot than with flat content. They also wanted to know which method was easiest to use and provided the most flexibility. This proof of concept formed part of a larger project around CAIT.

The department

MoJ is responsible for the courts, prisons, probation services and attendance centres. The User Centred Policy Design team ran the chatbot project, alongside other proofs of concept with video and audio formats. They work with policy and operational teams to:

  • be more user focused
  • test ideas early
  • make greater use of digital and design methods

Experimenting to understand user needs

The team started this project because of research which found that:

  • most of their users only interact with the justice system in a moment of crisis and might behave differently in stressful situations
  • users in a moment of crisis tend to struggle to read and understand text-heavy flat content
  • users look for help elsewhere, for example on forums or websites where the interaction is more dynamic and conversational

The team also looked at service data, and at evidence from other departments of users accessing similar data.

Testing assumptions

The team found there was a lot of content available online, but it was not engaging enough for some users. So the team set up a proof of concept to test their assumptions and to better understand what users need.

“Our hypothesis was that if parents ‘in crisis’ receive information in a different format, they would be more likely to engage with the content and find relevant information to solve their issues.” - Service Designer

The team wrote 5 statements to test about user interaction with their content:

  1. The chatbot will receive initial interaction from CAIT users.
  2. The chatbot will receive significant interaction from CAIT users (after initial interaction).
  3. The chatbot will have higher referral rates to support services when compared to static page content.
  4. Interacting with the chatbot will increase users’ awareness of alternatives to court.
  5. Users will rate the chatbot as helpful or very helpful.

The team then assigned a metric to each statement. For example, they looked at retention and completion rates to measure whether users had significant interaction with the chatbot. Each metric had a target, which the team selected based on a benchmark and research. They used these targets to measure the success of the results.
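As a rough sketch of that approach, each statement can be paired with a measure and a target, and the results checked against those targets. The measure descriptions and target values below are hypothetical placeholders, not the team’s actual benchmarks.

```python
# Illustrative sketch only: the measures and target values below are hypothetical
# placeholders, not the team's actual benchmarks.

METRICS = {
    "initial interaction": {"measure": "conversations started / visitors", "target": 0.20},
    "significant interaction": {"measure": "conversations continued after first reply", "target": 0.10},
    "referrals": {"measure": "click-throughs to support services", "target": 50},
    "awareness of alternatives": {"measure": "users answering 'yes' to the follow-up question", "target": 0.50},
    "helpfulness": {"measure": "users rating the bot helpful or very helpful", "target": 0.60},
}

def assess(results: dict) -> dict:
    """Compare each measured result against its target to judge success."""
    return {name: results.get(name, 0) >= spec["target"] for name, spec in METRICS.items()}

# Example: assess({"initial interaction": 0.26, "significant interaction": 0.15})
```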

Setting up the chatbot

The service team ran the project in 3 phases: planning, implementation and evaluation. They worked closely with the policy team to make sure the chatbot met policy and legal requirements.

As this was a quick, proof of concept project, the team chose to use Landbot rather than developing their own tool. The team did not need to invest in training as they could learn about the tool by using it.

The chatbot they created used a closed script, which included a number of user journeys depending on which path users selected. The content designer wrote the child arrangement guidance based on current website information. They worked with their provider to make the chatbot look similar to the GOV.UK Design System.
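A closed script of this kind is essentially a small decision tree: every message offers a fixed set of options, and each option leads to the next step of the journey. The sketch below illustrates the idea in Python. It is not the team’s Landbot configuration, and the branch names and wording are invented for the example.

```python
# Minimal sketch of a closed-script chatbot as a decision tree.
# Illustrative only: not the team's Landbot configuration, and the
# node names and wording are invented for this example.

SCRIPT = {
    "start": {
        "message": "Hello, I can help with child arrangements. What would you like to do?",
        "options": {
            "Agree arrangements without going to court": "mediation",
            "Find out about going to court": "court",
        },
    },
    "mediation": {
        "message": "Mediation can help you agree arrangements together. Would you like help finding a mediator?",
        "options": {"Yes": "end", "No": "end"},
    },
    "court": {
        "message": "Court is usually a last resort. Would you like to read about the court process?",
        "options": {"Yes": "end", "No": "end"},
    },
    "end": {
        "message": "Thanks for chatting. Was this conversation useful?",
        "options": {},
    },
}

def run():
    node = "start"
    while True:
        step = SCRIPT[node]
        print(step["message"])
        if not step["options"]:
            break
        # The user can only choose from fixed options: this is what makes the bot 'closed'.
        labels = list(step["options"])
        for i, label in enumerate(labels, 1):
            print(f"  {i}. {label}")
        choice = labels[int(input("> ")) - 1]
        node = step["options"][choice]

if __name__ == "__main__":
    run()
```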

Carrying out the project

During the main sprint, the team embedded the chatbot on the homepage of the service. It was available to the general public for 12 days. During that time, 1,121 users visited the website (125 unique visitors per day), 26% of these users initiated a conversation with the bot, and 15% of this group continued their conversation after their initial interactions.

The team also ran 5 further face-to-face usability sessions in a lab environment to get qualitative feedback on the experiment. The team considered doing A/B testing, but the service did not have enough users visiting the site at the time the experiments were carried out.

During the first iteration, the team collected feedback from users with a short questionnaire and 2 simple follow-up questions in the chatbot asking them to rate their experience. The questions were “was this conversation useful?” and “do you know more about your options now?”.

Challenges and obstacles

The first version was a closed bot and, during user testing sessions, users quickly understood that they were in a closed loop of information. In the second iteration, the team added more content and allowed users greater input.

During the first iteration, some users did not get as far as the feedback questions if they were given a link to another website as part of their journey. To resolve this in the second iteration, the team embedded the feedback in the conversation flow.
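Building on the illustrative sketch above, the fix amounts to asking the feedback questions inside the conversation flow before handing the user off to an external link. The hand_off helper and its url parameter are invented for this example.

```python
# Continuing the illustrative sketch above: ask the feedback questions inside the
# conversation flow, before the user is sent to an external link. The hand_off
# helper and its url parameter are invented for this example.

FEEDBACK_QUESTIONS = [
    "Was this conversation useful?",
    "Do you know more about your options now?",
]

def hand_off(url: str) -> list[str]:
    """Collect feedback before linking out, so users who leave the site still answer."""
    answers = [input(f"{question} (yes/no) > ") for question in FEEDBACK_QUESTIONS]
    print(f"Thanks. You can find more help here: {url}")
    return answers
```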

Users needed tailored content, which an automated chatbot could not provide. The scripted chatbot felt restrictive and not intelligent enough to give users the content they needed. A potential solution was to offer a webchat or a more intelligent chatbot using machine learning.

With the second iteration of their chatbot, the team found flaws in the analysis provided by Landbot and ended up manually analysing the data.
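As a rough indication of what that manual analysis might involve, a short script can recompute the funnel metrics from an exported conversation log. The file name and column names below are assumptions, not Landbot’s actual export format.

```python
# Illustrative sketch of the kind of manual analysis the team fell back on.
# The file name and column names are assumptions about an exported conversation
# log, not Landbot's actual export format.
import pandas as pd

logs = pd.read_csv("conversations.csv")   # hypothetical export: one row per conversation

visitors = 1121                           # site visitors during the 12-day trial, from analytics
started = len(logs)                       # conversations that were initiated
continued = (logs["messages_after_first_reply"] > 0).sum()

print(f"Initiation rate:   {started / visitors:.0%}")
print(f"Continuation rate: {continued / started:.0%}")
```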

Due to a resource issue, there was no research testing for the second iteration of the chatbot, and the team could not directly compare the first and second iterations.

Results

The participating users gave positive feedback, and were pleasantly surprised to see the government try something new. The project allowed the team to sense check their theories in an agile way to better understand and help users. They found that users are willing to engage with content delivered through chatbots, although the team needs to do more testing to make sure the users find the support they are looking for.

At the end of this project the team found:

  • the users of this service wanted tailored content
  • users were more likely to find and engage with pages accessed through the chatbot than through the normal website
  • the number of referrals improved when using the chatbot compared to referrals going through the usual process, for example referrals to the 3 external websites used by the service increased from 3 to 55 during the first iteration

One of the major benefits of the project is that the policy team is now more open to alternative solutions and recognises that testing solutions before using them is essential.

Future plans

The team would like to try a minimum viable product of a webchat, possibly using Landbot which has a webchat feature. The team would also like a subject matter expert to provide content and answer questions.

The team is also considering using a structured menu so that users can navigate through the content and on to the webchat. With a structured menu, the team can test a closed bot, a semi-open bot and a webchat.

 

Channel website: https://gds.blog.gov.uk/

Original article link: https://www.gov.uk/government/case-studies/how-the-ministry-of-justice-explored-using-a-chatbot
