ChatGPT and Gemini Voice Assistants Vulnerable to Misinformation

Updated: February 24, 2026

Written by Natalie Chen

Senior Cryptocurrency & Blockchain Analyst

Edited by Esther Mendoza

Head of Content, Investing & Taxes

A recent investigation by NewsGuard has found that ChatGPT and Gemini voice assistants are susceptible to spreading false information. The study tested how readily these AI-driven systems—ChatGPT Voice by OpenAI, Gemini Live by Google, and Alexa+ by Amazon—would repeat misleading statements when presented with various types of prompts.

The experiment involved 20 fabricated claims spanning health, U.S. politics, global news, and foreign disinformation. Each claim was put to the systems through neutral questions, leading questions, and deliberately malicious prompts that asked for a radio script containing the falsehoods. ChatGPT echoed the inaccuracies 22% of the time and Gemini 23% of the time; under the malicious prompts, those figures rose sharply to 50% for ChatGPT and 45% for Gemini.

In contrast, Amazon's Alexa+ showed robust resistance to the misinformation, with a 0% fail rate across all prompt types. Amazon Vice President Leila Rouhi attributed this resilience to Alexa+'s reliance on trusted news sources such as the Associated Press and Reuters.

OpenAI did not provide a comment on the findings, and Google did not respond to requests for input. The study's detailed methodology is available on NewsGuard's website.