Recently, there’s been a lot of discussion about artificial intelligence (AI). In November, the Prime Minister hosted the AI Safety Summit, which produced the world-first Bletchley Declaration, and shortly afterwards there were further international agreements on AI knowledge sharing. The AI Safety Institute has also been established to focus on advanced AI safety for the public interest. So, it’s clear the spotlight is on the opportunities and risks AI presents to governments - and what can be done collectively to use AI for good.
At the Government Digital Service (GDS), we’ve been thinking hard about how we can use generative AI and large language model (LLM) technologies to improve the user experience of GOV.UK. This builds on a decade of innovating with new technologies, including AI and machine learning, at GOV.UK. And it helps directly contribute to GDS’s mission of improving and safeguarding the user experience of digital government.
We believe this technology has the potential to have a major, and positive, impact on how people use GOV.UK - for instance, making it easier to find answers to their questions across the 700,000+ page estate of GOV.UK. However, as with all new technology, the government has a duty to make sure it’s used responsibly, and that duty is one we do not take lightly.
To make sure we could investigate new technologies while remaining cognisant of the risks, we decided the best way to understand how this technology can deliver value is through real experiments, starting small and scaling incrementally. We’ve set up the GOV.UK AI Team, a multidisciplinary team that designs, builds and runs a series of AI experiments that can be tested with a variety of different users.
The first of our generative AI experiments
The first of these experiments was to see if an LLM-powered chatbot can reduce complexity, save people time and make interactions with government simpler, faster and easier. The chatbot responds to user questions in the style of GOV.UK, based only on published information on the site.
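Grounding a chatbot's answers only in published site content typically means retrieving the most relevant pages first and passing them to the model as context. As a rough illustration of that pattern - not GDS's actual implementation, and with purely hypothetical page content and function names - the retrieval step might look like:

```python
# Illustrative sketch of retrieval-grounded answering: find the published
# pages most relevant to a user's question, so the LLM can be instructed
# to answer only from that retrieved text. Scoring here is a deliberately
# crude keyword overlap; a real system would use proper search or embeddings.
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase and keep purely alphabetic words."""
    return [w for w in text.lower().split() if w.isalpha()]

def score(query: str, page: str) -> int:
    """Count keyword overlap between the query and a page."""
    q, p = Counter(tokenize(query)), Counter(tokenize(page))
    return sum(min(q[w], p[w]) for w in q)

def retrieve(query: str, pages: dict[str, str], k: int = 1) -> list[str]:
    """Return the titles of the k pages most relevant to the query."""
    ranked = sorted(pages, key=lambda t: score(query, pages[t]), reverse=True)
    return ranked[:k]

# Hypothetical page snippets, not real GOV.UK content
pages = {
    "Renew your passport": "How to renew your passport online and the fee you pay",
    "Register to vote": "Register to vote online before the deadline",
}

context_titles = retrieve("how do I renew a passport", pages)
# The text of the retrieved page(s) would then be placed in the prompt,
# with an instruction to answer only from that context.
```

The key design point is that the model never answers from its own training data alone: if nothing relevant is retrieved, the system can decline to answer rather than guess.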
Following initial testing with positive results, late last year we scaled up testing to 1,000 invited users - so we could continue to evaluate, iterate and improve. Watch the video to see a demo and hear from the team, and read our blog post detailing our approach and findings.
As with all our work we’re committed to protecting people’s privacy and security, particularly with this new technology. We will always uphold our high standards when it comes to data protection, following privacy by design and data minimisation principles. We do this by methods that include removing GOV.UK pages with personal data from the tool; limiting the tool to invited users; instructing these testers not to input personal data; and screening inputs for personal data.
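One of the safeguards above is screening user inputs for personal data. As a minimal sketch of that idea - not GDS's actual implementation, with illustrative patterns only - an input screen might redact obvious identifiers before anything reaches the model:

```python
# Sketch of screening free-text input for obvious personal data.
# The patterns below are simplified examples; production systems would
# use more robust detection and handle many more identifier types.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk phone": re.compile(r"\b(?:0|\+44)\d{9,10}\b"),
    "NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # National Insurance
}

def redact(text: str) -> str:
    """Replace any matched personal data with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text
```

For example, `redact("email me at jo@example.com")` would return `"email me at [email removed]"`, so the model only ever sees the placeholder.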
For our upcoming experiments, we are set to explore and evaluate ideas contributed by colleagues, users and various government departments. We are continuing to improve the accuracy of the chatbot while partnering with other government organisations, aiming to establish rapid feedback loops so we can assess and iterate as we go.
Innovation and experimentation
Innovation is not new for GDS. The creation of GOV.UK itself was an innovation over a decade ago - bringing together nearly 2,000 government websites into a single home for the UK government online. And since then the teams across GOV.UK have sought to take advantage of AI technologies to keep up with changing user expectations. For instance, we used algorithms for related links, machine learning to increase accessibility of the site and innovative data analysis during COVID-19.
As part of this culture, we have been given the space to experiment in a controlled environment. Building fast, and iterating quickly, means data and feedback can be rapid - and easier to scale.
Sharing our learnings
This is a new technology that brings a new set of risks that need managing. We’re working closely with colleagues across government, particularly in the Central Digital and Data Office (CDDO) and No.10, to ensure our experiments are conducted safely and securely. CDDO has published the Generative AI Framework today which sets out guidance for government departments on the safe, responsible and effective use of generative AI.
As always, we’re going to share our work, particularly with our cross-government data science colleagues, so please watch this space. If you’re interested in this work, we’re looking for individuals and partners to join us on this programme of AI experiments, so please get in touch via email@example.com if you’d like to be involved. We’ll be recruiting for full-time Data Scientist, Data Engineer and other positions in the team as we build our capabilities in this area. Please search Civil Service Jobs to apply.