AI Chatbots Frequently Misrepresent News, Study Finds

A new study reveals that leading artificial intelligence models, including ChatGPT, Gemini, Copilot, and Perplexity, routinely provide inaccurate information when questioned about current events, with nearly half of all responses containing significant errors. The research, published Wednesday by the European Broadcasting Union (EBU) and the BBC, assessed over 2,700 responses from the AI assistants.

Twenty-two public media outlets from 18 countries, representing 14 languages, participated in the study, posing a standardized set of questions to the AI models between late May and early June. Forty-five percent of the responses exhibited at least one "significant" issue. Sourcing was the most prevalent problem, appearing in 31 percent of responses, followed by factual inaccuracies (20 percent) and a lack of contextual awareness (14 percent).

The study highlighted specific errors, such as Perplexity incorrectly stating the legality of surrogacy in Czechia and ChatGPT erroneously identifying Pope Francis as the current pontiff after his reported death. Gemini demonstrated the highest rate of significant issues, with 76 percent of its responses affected, primarily due to sourcing problems.

"They have not prioritized this issue and must do so now," stated Jean Philip De Tender, the EBU's deputy director general, and Pete Archer, the head of AI at the BBC, in the report's foreword. They also urged tech companies to increase transparency by regularly publishing results broken down by language and market.

Requests for comment from OpenAI, Google, Microsoft, and Perplexity were not immediately returned.
