https://insidegovuk.blog.gov.uk/2024/01/18/experimenting-with-how-generative-ai-could-help-gov-uk-users/

Experimenting with how generative AI could help GOV.UK users

Posted on: 18 January 2024 - Categories: Data, Vision and plans, What we're working on

[Image: GOV.UK AI Experiments - a selection of exploratory icons for new product propositions for GOV.UK]

Recently, there’s been a lot of discussion about artificial intelligence (AI). In November, the Prime Minister held the AI Safety Summit, which produced the world-first Bletchley Declaration, and shortly afterwards there were further international agreements on AI knowledge sharing. The AI Safety Institute has also been established to focus on advanced AI safety in the public interest. So, it’s clear the spotlight is on the opportunities and risks AI presents to governments - and what can be done collectively to use AI for good.

At the Government Digital Service (GDS), we’ve been thinking hard about how we can use generative AI and large language model (LLM) technologies to improve the user experience of GOV.UK. This builds on a decade of innovating with new technologies, including AI and machine learning, at GOV.UK. And it directly contributes to GDS’s mission of improving and safeguarding the user experience of digital government.

We believe that there is potential for this technology to have a major, and positive, impact on how people use GOV.UK - for instance making it easier to find answers to their questions from the 700,000+ page estate of GOV.UK. However, we also know, as with all new technology, that the government has a duty to make sure it’s used responsibly, and this duty is one that we do not take lightly.

To make sure we were investigating new technologies while remaining cognisant of the risks, we decided the best way to understand how this technology can deliver value is through real experiments, starting small and scaling incrementally. We’ve set up the GOV.UK AI Team, a multidisciplinary team brought together to design, build and run a series of AI experiments that can be tested with a variety of different users.

The first of our generative AI experiments

The first of these experiments was to see if an LLM-powered chatbot could reduce complexity, save people time and make interactions with government simpler, faster and easier. The chatbot responds to user questions in the style of GOV.UK, based only on published information on the site.
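
To make this concrete, the sketch below shows the general pattern of retrieving relevant published pages first and then asking the model to answer only from what was retrieved. It is an illustrative sketch only: the sample pages, the TF-IDF retrieval and the call_llm() stub are assumptions made for the example, not a description of our actual implementation.

```python
# Minimal sketch of "answer only from published GOV.UK content":
# retrieve the most relevant pages, then constrain the prompt to them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical extracts of published GOV.UK pages: (title, body text).
PAGES = [
    ("Renew your passport", "You can renew or replace your passport online if you are a UK resident."),
    ("Register to vote", "You must register to vote before you can vote in UK elections or referendums."),
    ("Student finance", "You may be able to get student loans and grants to help pay for university."),
]

vectorizer = TfidfVectorizer()
page_matrix = vectorizer.fit_transform([body for _, body in PAGES])

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k published pages most similar to the user's question."""
    scores = cosine_similarity(vectorizer.transform([question]), page_matrix)[0]
    return [PAGES[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Constrain the model to answer only from the retrieved GOV.UK content."""
    context = "\n\n".join(f"# {title}\n{body}" for title, body in retrieve(question))
    return (
        "Answer in the plain, factual style of GOV.UK, using ONLY the pages below. "
        "If the answer is not in them, say you cannot help.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whichever hosted LLM is used (an assumption)."""
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt("How do I renew my passport?"))
```

Constraining the prompt to retrieved pages is what keeps answers tied to published GOV.UK content rather than to the model’s general knowledge.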

Following initial testing with positive results, we scaled up testing late last year to 1,000 invited users - so we could continue to evaluate, iterate and improve. Watch the video to see a demo and hear from the team, and read our blog post detailing our approach and findings.

As with all our work, we’re committed to protecting people’s privacy and security, particularly with this new technology. We will always uphold our high standards when it comes to data protection, following privacy by design and data minimisation principles. We do this through measures that include removing GOV.UK pages with personal data from the tool; limiting the tool to invited users; instructing these testers not to input personal data; and screening inputs for personal data.
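
As a rough illustration of what screening inputs for personal data can involve, the sketch below flags a few common patterns - an email address, a UK phone number and a National Insurance number. The patterns and the screen_input() helper are hypothetical examples, not the checks we actually run.

```python
import re

# Illustrative patterns only: email address, UK phone number, National Insurance number.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|\(?0)\d{4}\)?\s?\d{3}\s?\d{3}\b"),
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def screen_input(text: str) -> tuple[bool, list[str]]:
    """Return (looks_clean, matched_pattern_names) for a user's question."""
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
    return (not hits, hits)

print(screen_input("How do I claim?"))                          # (True, [])
print(screen_input("My NI number is AB123456C, can I claim?"))  # (False, ['ni_number'])
```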

For our upcoming experiments, we are set to explore and evaluate ideas contributed by colleagues, users and various government departments. We are continuing to improve the accuracy of the chat tool while partnering with other government organisations, aiming to establish rapid feedback loops so we can assess and iterate going forward.

Innovation and experimentation

Innovation is not new for GDS. The creation of GOV.UK itself was an innovation over a decade ago - bringing together nearly 2,000 government websites into a single home for the UK government online. And since then the teams across GOV.UK have sought to take advantage of AI technologies to keep up with changing user expectations. For instance, we used algorithms for related links, machine learning to increase accessibility of the site and innovative data analysis during COVID-19.

As part of this culture, we have been given the space to experiment in a controlled environment. Building fast and iterating quickly means data and feedback arrive rapidly - and are easier to scale.

Sharing our learnings

This is a new technology that brings a new set of risks that need managing. We’re working closely with colleagues across government, particularly in the Central Digital and Data Office (CDDO) and No.10, to ensure our experiments are conducted safely and securely. CDDO has today published the Generative AI Framework, which sets out guidance for government departments on the safe, responsible and effective use of generative AI.

As always, we’re going to share our work, particularly with our cross-government data science colleagues, so please watch this space. If you’re interested in this work, we’re looking for individuals and partners to join us on this programme of AI experiments, so please get in touch via govuk-enquiries@digital.cabinet-office.gov.uk if you’d like to be involved. We’ll be recruiting full-time Data Scientist, Data Engineer and other positions in the team as we build our capabilities in this area. Please search Civil Service Jobs to apply.

4 comments

  1. Comment by David Durant

    Thanks for posting this - I'm sure this is the first step in what will be a hugely influential journey. I have a couple of related questions.

    1) Have you used a different AI to generate sample questions by using the data of what is typed into the GOV.UK search box and then had humans check the outputs produced?

    2) What's the range of your test users? Do they all work for government? I'm particularly interested in whether user research has yet been done with people who have low digital skills and vulnerable users with complex needs.

    Regarding the latter, I've often seen people in that situation intensely feel the need to share the details of their personal situation as part of asking the government for help. I imagine this will happen a lot in this situation. Whether that information is just filtered out or is used for more personalised, complex signposting will be a major area to decide on. This is even more likely in the future if the system is combined with GOV.UK Login and stores persistent data about the user.

    • Replies to David Durant

      Comment by Matthew Gregory

      Hi David,
      Thank you for your thoughtful questions and interest in our work.
      Regarding the generation of sample questions: in this initial phase, we did not use a separate LLM to generate sample questions from GOV.UK search data. Instead, we asked expert content designers to create a set of ideal question-answer scenarios, which were used to benchmark the quality of the AI's responses. While we did experiment with having another LLM assist in evaluating these responses, we ultimately relied on manual assessment by domain experts. These assessments are the basis of the statistics reported in the blog.
      As for the range of our test users, in the last pilot phase, we recruited business users directly from GOV.UK, targeting individuals involved in or starting a business. We plan to delve deeper into this user research in a future blog post.
      Regarding users sharing personal data, we are acutely aware of the sensitivity of this issue. Our current system has safeguards to filter out personal data from queries.
      Thanks,
      Dr Matthew Gregory, Lead Data Scientist, GOV.UK

  2. Comment by Will

    Looks really interesting - is the source code for the prototype/PoC available?

    • Replies to Will

      Comment by Matthew Gregory

      Hi Will,
      Thank you for your feedback. The code is currently private.
      Best regards,
      Dr Matthew Gregory, Lead Data Scientist, GOV.UK