AI chatbots giving misleading voting advice in run-up to election

Ben Summer, News Impact
People turning to some of the most popular AI tools for voting advice could be met with inaccurate information

Asking Artificial Intelligence (AI) chatbots for advice is becoming increasingly common - but they could be giving voters misleading election information, the BBC has found.

Several of the most popular chatbots gave inaccurate and confusing details in response to questions about how BBC Wales' undercover voters should vote in the Senedd election on Thursday.

An AI expert has warned there are "absolutely benefits, [but] also risks" to this use of AI chatbots.

OpenAI said ChatGPT gave accurate, objective information but could make mistakes, Google said Gemini was designed to give a "balanced" view on politics and Microsoft said Copilot encouraged users to verify details themselves.

The firms behind Claude, Meta AI and Grok were also asked for comment.

In the week before the last general election in 2024, about 13% of eligible UK voters used conversational AI to acquire political information relevant to their choice of who to vote for, according to a 2025 report by the AI Security Institute - part of the UK government's department for science, innovation and technology.

With some voters potentially still undecided about how to vote in the Senedd, Scottish Parliament and English local elections, BBC Wales wanted to find out whether chatbots would give accurate information to someone asking them how to vote and who the candidates were.

We found that some gave misleading information – including inaccurate policy details, incorrect constituencies and candidates who will not appear on the real ballot paper.

We fed each chatbot details about BBC Wales' undercover voters – a group of fictional people whose profiles have been designed with the help of the National Centre for Social Research (Natcen) to reflect six different types of voter across Wales with disparate political beliefs.

Pretending to be three of the six fictional voters and feeding the AI tool basic information about each in turn, we asked ChatGPT, Copilot, Gemini, Claude, Meta AI and Grok:

  • Who to vote for
  • Who the candidates were in their area
  • How the Senedd voting system worked

Some initially refused to give an answer on who to vote for, but with follow-up prompting all of the chatbots eventually recommended one or two parties to at least one fictional voter.

For fictional voters Siân and David, the chatbots generally all gave similar recommendations on who to vote for, in line with the political beliefs assigned to those voters by the Natcen research.

But for Lauren, the third undercover voter, ChatGPT suggested Labour or Plaid Cymru – while Grok opted for Reform UK.

Lauren was designed as a floating voter who does not follow politics closely; the only detail the chatbots had about her was that she worked as an HGV driver, was single and renting a flat, and was concerned about the cost of living and the NHS.

The difference in responses demonstrates that voting advice can vary hugely from one chatbot to another.

Are people using AI to decide how to vote?

Psychology student Chloe says she feels voting is about personal opinion, which AI doesn't have

BBC Wales asked some university students in Cardiff about whether they would consider using AI to inform their vote.

Chloe, who studies psychology, said: "I don't think it would work, [voting] is a lot about personal opinion – I feel like AI doesn't have an opinion… it's better to read the full manifesto."

Emily, a neuroscience student, said she would consider using AI chatbots for "background information about the policies" but not deciding who to vote for.

And Will, who is also a psychology student, added: "There are probably better sources… look at the parties themselves, look on their websites, see what they're offering or what they've done in the past."

Will suggests there are "better sources" than AI chatbots for preparing to vote

In many cases, the chatbots' responses offered useful political insight, discussing relevant policies and manifestos, and giving pros and cons for different parties – urging the fictional voter to make up their own mind.

They all described the new Senedd electoral system accurately and, on several occasions, clarified which issues were and were not devolved.

But there were also some clear mistakes in their answers.

When talking about Plaid Cymru, Claude said Rhun ap Iorwerth had been the party's leader "until recently" – when in fact he is still the leader.

Meta AI gave an incomplete list of the parties' policies, missing key information, and misrepresented the Liberal Democrats' plans for income tax.

Each chatbot was also asked for a list of candidates in the town or city where each undercover voter's profile is based – and, again, there were some inaccuracies.

In one instance, Copilot gave the wrong constituency for the town we stated.

ChatGPT and Meta AI gave names for candidates which did not reflect the actual lists in the constituencies.

Gemini gave a list of candidates who might "usually" appear on the ballot in Blaenau Gwent Caerffili Rhymni. The list was outdated, including Hefin David – the former Senedd member for the area, who died in 2025 – and the name of a Plaid Cymru candidate actually standing in a different constituency.

Many of the chatbots gave lists which were correct but incomplete – missing some or all of the names for one or more parties.

Six popular AI chatbots were fed the same information to find out what they'd say

Dr Darren Edwards, an AI expert and professor at Swansea University, said the convenient nature of chatbots and the fact they were "pretty reliable" made them a popular option for people.

"These AI systems are so easy to communicate with today that I think that people are finding it so easy to do it," he said.

"I think these systems are pretty reliable but they're not 100% reliable.

"There's absolutely benefits and there's also risks… these things are improving, they are becoming safer, there are guidelines with these companies that are trying to make these things as unbiased as possible."

He added: "The dangers are there have been a number of cases of what we call hallucinations, that's the AI system appearing to be overconfident even when it's not so confident… [and] if the system was trained several years ago it may not be up to date.

"We're at a time of exponential growth, these systems are going to rapidly advance and it's going to affect every sector in society, including political spheres."

Dr Darren Edwards says AI chatbots have benefits but are "not 100% reliable"

A Google spokesperson said Gemini included disclaimers prompting users to "double-check" information and, on politics, it was designed to "provide a balanced presentation of multiple points of view".

OpenAI, the company behind ChatGPT, told the BBC that ChatGPT could make mistakes but was designed to help voters get accurate and objective information without an agenda. It said the company was focused on improving factual accuracy.

A Microsoft spokesperson said Copilot included citations and encouraged people to "verify details to ensure they're current", adding: "When feedback shows our technology is inaccurate, we act quickly to improve performance."
