July 2024
The last 12 months have seen Artificial Intelligence (AI) and the discourse around it continue to rapidly evolve.
Following up on last year’s research, we ran four extensive new nationally representative polls of adults across the US and the UK, asking the public their views on a range of AI issues: their feelings towards it, how they use AI today, how they expect it to evolve, and what they want the Government to do in response. This report explores the findings from our UK survey work.
We asked for their views on everything from AI agents to misinformation, whether an AI could pass the Turing Test, and how important it was for the UK to maintain a technological lead ahead of China.
Here are some of the more interesting things that we found:
Artificial intelligence (AI) has rapidly evolved from a futuristic concept to an everyday reality for millions of people in the United Kingdom (UK).
There are many new applications that promise to transform how people work, learn, communicate, and navigate the world around them, and even more on the horizon that offer enormous potential.
In the meantime, there are policy questions to consider. The UK set the tone for AI governance when it convened the AI Safety Summit at Bletchley Park last year, bringing together key stakeholders from across the globe to discuss how best to manage risks from recent advances in AI. UK policymakers quickly followed up by creating the first-of-its-kind AI Safety Institute to conduct research on how to test and evaluate advanced AI to ensure its safety. Since then, UK policymakers have been eager to deploy AI to transform public services like the National Health Service, where they hope to capitalise on the benefits AI offers for both productivity and healthcare outcomes.
Crucially, widespread AI adoption will require broad public acceptance of the technology. Technological advancements do not happen in a vacuum, but rather take place within a broader social and political context. The public’s perceptions, concerns, and priorities around AI will be a key driving force in shaping how the UK and other countries develop, deploy, and govern this technology.
This survey provides valuable insights into the current state of public opinion about AI in the UK. It is promising that a majority of adults remain optimistic about the impact of the technology, and it is understandable that the survey reveals a population that is curious and interested in AI, but also concerned about its impact. Some see AI as a force for good that will improve productivity, education, healthcare, and research, while others view it as a threat to jobs, privacy, and even democracy. This duality is reflected in how the UK has approached AI governance, with its focus on balancing concerns with promoting innovation.
Government and industry leaders interested in maintaining the UK’s status as a global leader in AI should be closely attuned to public sentiment about the technology because political support for forward-thinking AI policies will ultimately hinge on public acceptance of the technology. The UK has positioned itself as a proponent of responsible AI innovation, but this survey shows that amongst the UK public there is disagreement on whether to focus on responsible AI development, even if that means letting countries like China take the lead with a less restrained approach, or prioritise staying at the frontier of AI development. The UK will have to decide which path forward it will choose.
What is certain, however, is that more people are choosing to engage with the technology, laying the groundwork for the UK public being ready and willing to benefit from AI. As government and industry work together to address public concerns and ensure that AI development aligns with the values and aspirations of the British people, they will open the doors to widespread AI deployment and the opportunities that will come with it.
In 1950, mathematician and computer scientist Alan Turing proposed the imitation game: a test for intelligent behaviour in a machine whereby a human evaluator communicates with an entity in a text chat and has to decide whether they are talking to another human or an AI.
We have not reached the point where AIs can reliably pass a Turing Test. But we are getting closer: 45% of UK adults told us that they would not be confident they could tell within a minute whether a chat was with an AI or not. Roughly a third said they would not be confident they could tell within 10 minutes.
Overall, 54% of UK adults said that AI was developing faster than they expected. That’s up by over a third compared to when we asked the same question last year.
When asked about the nearest potential historical comparator to AI as a technology, UK adults point to the computer or the Internet. They don’t see it as being as transformative as the printing press or electricity – but they also expect it to be significantly more important than, say, social media on its own.
This rapid development has led to mixed feelings. As with last year, when asked to describe how they feel about AI, the most commonly chosen emotion by UK residents was curiosity – with a mix of positive and negative emotions after that.
What was noticeable was that negative emotions have ticked up slightly compared to last year.
53% of UK adults reported being optimistic about the impact of technology on the economy and society in the future, with only 16% saying they felt pessimistic.
This correlated with their feelings when we asked specifically about artificial intelligence. Brits were moderately more likely to have positive expectations than negative ones, although a significant proportion were unsure or simply felt it would have no effect on them personally.
In some ways, AI is more intuitive than other technologies: often the best way to interact with it is to talk to it as you would another human. In other ways, it is very complex, and even the world’s leading AI experts today do not fully understand why a transformer model works the way it does.
In our polling, only around a third (31%) of UK adults said that they were confident they could explain how modern AI models work. When we pushed on this further by asking about a range of terms related to AI, we saw even lower levels of awareness.
Interestingly, there was very mixed awareness of the relative strengths and weaknesses of today’s models. Many people in our poll seemed to think of AI models as having the traditional strengths and weaknesses of a computer: good at maths and with a perfect memory, but weak at common sense reasoning and sounding empathetic. In practice, this is almost the opposite of the strengths and weaknesses of today’s LLM based models.
Putting aside abstract impressions, how aware are UK adults of the AI tools available today – and how much are they actually using them?
In our polling, the highest awareness was for existing AI tools that have been around for a long time: Amazon Alexa, Google Assistant and Apple’s Siri.
That said, ChatGPT was not far behind the big three – and compared to last year’s poll, awareness of OpenAI’s tool had nearly doubled.
For ChatGPT, we can also compare usage year on year – with the proportion who say they have used it multiple times increasing from 19% to 43%.
Although awareness may be high, this has not yet necessarily turned into regular usage for everyone. In our polling, just 13% of UK adults said they were using one of the LLM based chatbots regularly, with a considerable gradient across both age and gender.
In our polling, we saw evidence that usage may continue to grow reasonably fast. On average, over 40% of users of the tools said they had only started using them in the last 3 months.
Those who are using these tools find them overwhelmingly helpful, if not yet essential, to their day-to-day life. Of UK adults using LLM based chatbots:
say they have become an essential tool they use regularly
say they use them from time to time, but would not miss them if they didn’t exist
When we asked what use cases people had tried, the most common was to help explain something, with around two thirds of users saying they had done this. After that, around half of users said that they used them to help brainstorm ideas or write text.
AI is likely to be one of the most significant economic drivers in the next twenty years. The IMF this year estimated that AI could boost productivity in an advanced economy like the UK by 1.5%,[1] similar to predictions last year by Goldman Sachs for the US.[2]
In our polling, when we asked about the potential benefits from AI we saw an interesting dichotomy: while the most widely recognised benefits were accelerating scientific advancement and increasing productivity across the economy, respondents were much less likely to believe that this would translate into increased wages for workers, with this being the least popular choice.
When it came to personal use cases, however, we saw a widespread interest in at least giving AI a try in a variety of roles: from basic research to giving early warning of a new medical condition.
2024 is likely to be a year of increased focus on the creation of agents. Agents are designed not just to answer questions, but to actually carry out basic tasks for you. Both OpenAI[3] and Google[4] have been explicit that this is the next leap forward.
In our polling, we asked about a range of potential AI use cases, from acting as a personal assistant to acting as a virtual workout coach. Overall, we saw more caution here than for the more generic AI use cases above, although younger adults were more prepared to give AI agents a go. A majority of UK residents under 35 said they would at least try an AI personal tutor or personal assistant.
The current wave of AI hype was largely driven by the arrival of ChatGPT – but to what extent are people actually using LLM based chatbots like it at work?
In our poll, just over a third of UK workers told us that they had used a chatbot at work – but over two-thirds of those who had used them said they found them helpful or very helpful.
of UK workers have used an LLM chatbot tool at work
of UK workers using LLM based chatbots say that they find them helpful or very helpful
of UK workers using LLM based chatbots say that they have become an essential tool they use regularly
Overall, only around 13% of workers said they were using these tools regularly, although the majority of this group are using them at least multiple times a week.
Those workers who are already using AI tools seem to be classic early adopters: around half of them said, respectively, that they had decided to use these tools on their own, that they had worked out how to use them themselves, and that they learn best from exploring and experimenting.
of UK workers using LLM based chatbots say that they worked out how to use those AI tools themselves
of UK workers using LLM based chatbots say that they decided to use those AI tools themselves
of UK workers using LLM based chatbots say that they learn best from exploring and experimenting with AI tools themselves
Alongside the economy, one of the most significant opportunities from AI is to speed up the diagnosis and treatment of health conditions.
Given the many sensitivities in this space, UK adults are understandably unsure about using AI to diagnose illnesses. When first asked, opinions are fairly evenly split.
of UK adults support using AI to diagnose patients
of UK adults say that they oppose using AI to diagnose patients
of UK adults say they are unsure
Reliability is the most significant concern here, with 77% saying they worried the AI system would give incorrect diagnoses. 59% also worried that an AI would not treat patients in a sympathetic and caring way.
However, with basic protections in place, we saw that it was possible to overcome concerns about AI diagnosis.
For example, 73% of UK residents say they would support AI diagnosis if it was double-checked by a human doctor. Giving people the choice of whether to use it increased support to 65%.
The only situation where we saw strong opposition was in a scenario where patients would be outright forced to use the system, which over two-thirds of UK adults opposed.
Ever since science fiction writers first conceived the idea of artificial intelligence, we have been inundated with stories about the many ways they can go wrong. It is therefore perhaps no surprise that we saw a reasonably high level of self-reported familiarity across a range of risks, with the most common being the potential for unemployment.
Across the range of harms we presented, from hurting someone’s reputation with embarrassing videos through to human extinction, UK adults do seem to believe that AI represents a significant increase in risk.
This survey work was carried out before the Prime Minister called the General Election. Nevertheless, even before the campaign got underway, 53% of UK adults were worried about the potential impact of misinformation on the UK General Election.
Perhaps unsurprisingly, voters were more likely to think that the “other” side would benefit most from misinformation: Conservative supporters thought Keir Starmer would be helped most, and Labour supporters thought Rishi Sunak would.
While 72% were worried AI generated content would be used to manipulate an election, this was just one of their worries. There were also concerns around the potential for AI to con people out of their money or create sexually explicit deep fakes without consent.
It was also clear that respondents were more worried about criminals, terrorists, and foreign governments than any domestic political party.
When asked whether AI generated content would exacerbate the spread of misinformation, 68% said they thought it would make the problem significantly worse.
This concern likely stems, in part, from the fact that over half of UK adults (53%) were not confident that they could detect fake AI generated content on the Internet – with confidence falling significantly as respondents’ age increased.
Overall there was strong support for better labelling, with 66% of UK adults saying that Governments and companies need to do much more to better label and restrict misleading AI generated content.
When we asked indirectly, it was the elderly and children that were seen to be at the highest risk of being misled, with 53% and 45% pointing to each group respectively. Only 1% of respondents claimed not to be worried about anyone at all.
Could AI tools be part of the solution to misinformation, helping to spot and counter it? At present, it seems UK adults need more convincing on this point. Just 35% of UK adults said that they think it is likely that new AI tools could help reduce misinformation.
New technologies have always changed the structure of the economy – but one of the more unusual things about AI is that there is significantly more uncertainty about who it is likely to affect and how.
We asked people to give a score out of 10 regarding how likely they thought an AI could do their job as well as them in the next 20 years – with an average score of 4.3.
This score does not vary much by income level or education – although those with a Bachelor’s or Master’s degree were slightly more likely to believe that AI could do their job than those with only a secondary education.
However, there is a significant difference between younger and older respondents with the average score declining as age increases. Those aged 18-24 have an average score of 5.22, which drops to 2.99 for individuals aged 65 and above.
When we asked our poll respondents to rate what jobs they thought might get automated, computer programming, routine manufacturing jobs and customer services agents were at the top. By contrast, UK adults were less convinced that AIs would be able to take on the roles of scientists, musicians, actors or doctors.
Corresponding with this, UK adults thought that AI was likely to reduce the relative importance of data analysis, coding and graphic design skills – while raising the importance of persuading other humans.
Overall, over half of those we polled thought both that AI would likely increase unemployment and that Governments should actively seek to counter this.
of UK adults say that they think it likely AI will increase unemployment
of UK adults say that governments should try to prevent human jobs from being taken over by AI or robots
of UK adults say that the Government and companies should offer formal retraining and skills programs to people like me to help them to transition to different careers
This all being said, people remained relatively optimistic about their personal outlook: only around a quarter (24%) expected their job to disappear entirely, while 28% thought they would take on other responsibilities, 29% thought they would oversee the AI, and 29% thought they would work fewer hours.
Given the speed of advances in AI, how long is it until AIs reach a capability level equivalent to that of a human?
As with last year’s poll, we asked our respondents in which decade they thought a human-level AGI was most likely to be developed. We saw remarkable consistency – 47% believe it will happen by the end of the 2030s, compared to 49% last year. A fifth of the population (20%) thought that this had already happened.
While the 2030s are not very far away, this would suggest that the public are roughly aligned with prediction markets, which also suggest that a date in the 2030s is most likely.
Extending the question this year, we then went on to ask the public how long they thought it would take for an AI to significantly exceed human level intelligence by at least 10x.
On this metric, a significantly smaller proportion of the public thought we had already hit this threshold, while moderately more thought it would never happen. That said, even taking this into account, around half the public thought we would see an AI significantly smarter than a human in the 2040s.
As in last year’s report, we saw that many people did not see intelligence in purely analytical terms, with 50% believing that an AI would have to be capable of feeling emotions to be as smart as a human. This is only slightly below the proportion who thought an AI would have to feel emotions to be conscious.
If a superintelligent AI was created – an AI significantly more intelligent than any human – what would this mean for the world? Such an AI could develop many new powerful technologies, but could in itself be a significant risk.
In our polling, we saw that UK adults were more wary than welcoming of the idea of a superintelligence:
of UK adults say that trying to create a superintelligence is a good idea
of UK adults say that trying to create a superintelligence is a bad idea
of UK adults say that trying to create a superintelligence is dangerous
70% of UK adults thought superintelligent AI would be used to create new weapons, whereas 38% thought that it would actively seek to destroy human civilisation. By contrast, only around 27% thought it likely to lead to a radical acceleration of economic growth, and just 16% thought it would lead to an end to war.
Given both the potential benefits and risks of a superintelligence, only a small minority of UK adults thought we should try to accelerate its development – while more than a third each thought that we should stay at today’s pace or actively slow down.
Considering the potential benefits and risks from advanced AI:
of UK adults say that we should accelerate the development of this technology
of UK adults say that we should develop it around the same pace as we are now
of UK adults say that we should look to slow its development
Almost a quarter of UK adults (24%) believe that there is a greater than 10% chance that a superintelligence will cause humans to go extinct in the next 100 years. Compared to other potential existential risks for this same risk threshold, it is seen as ten percentage points more likely than an asteroid, but ten percentage points behind climate change.
As part of our poll, we asked our respondents their views on a wide range of policies that other people have suggested: everything from clear labelling to a pause on new research.
In order to get a better view on how urgent a particular issue might be, we allowed them to indicate if they didn’t think it was necessary now, but were open to it later on.
Across the population we saw a majority of respondents supporting a wide range of policies that they believed should happen now. The top ten most popular actions were as follows.
Despite supporting this range of policies, 56% of UK adults also agreed that we needed to move cautiously before creating new laws and regulations to avoid creating unintended consequences.
The only policy which more people saw as a bad idea than thought should be implemented now was banning new research into AI (40%, compared to 17%). However, around half the population were open to it being necessary at some point down the line.
While people instinctively support more regulation overall, we wanted to understand how strong this support is in practice. Most pressingly, do they maintain this view even if it would have a material impact on AI progress in the UK – and risk other countries taking the technological lead?
When we asked people to make a forced decision between the two, we saw mixed opinions.
of UK adults said the UK should seek to stay at the technological frontier, developing new AI systems rapidly to ensure it has the world’s most powerful systems
of UK adults said the UK should develop new AI systems responsibly, even if this means slowing down and letting other countries like China take the lead
of UK adults said they didn’t know
When we gave people a list of arguments for both sides – prioritising staying in the lead, or responsible development – we saw almost equal agreement across all of them.
To wrap up the poll, we asked people to explain their views in their own words. In general, we saw that those who believed it was important that the UK remain at the technological frontier had relatively similar views on why it was dangerous to let other countries such as China get ahead – whereas those who prioritised safety had a broader range of reasons why they feared moving too fast with AI.