Grok AI Bot Returns to Tesla After Bias Issues
After being temporarily disabled due to antisemitic outputs, Elon Musk’s xAI has relaunched the Grok AI bot and plans to integrate it into Tesla vehicles. The company attributed the previous issues to a code update.
Grok’s Glitches Explained
xAI explained in a series of posts on X that the antisemitic outputs stemmed from “…an update to a code path upstream of the @grok bot,”
clarifying it was not an issue with the underlying language model.
Here's what happened with Grok over the weekend and what we're doing about it. 🧵
— Grok (@grok) June 25, 2024
Tesla Integration
Simultaneously, Tesla announced the forthcoming 2025.26 software update, which will introduce the Grok assistant to vehicles equipped with AMD-powered infotainment systems, which have been available since mid-2021.
According to Tesla, “Grok is currently in Beta & does not issue commands to your car – existing voice commands remain unchanged.”
The integration is expected to function similarly to using the bot on a connected phone, according to Electrek.
Past Controversies
This is not the first time the Grok bot has faced scrutiny. In February, xAI blamed a change by a former OpenAI employee for the bot’s biased responses related to Elon Musk and Donald Trump. Furthermore, in May, the bot inserted allegations of white genocide in South Africa into unrelated posts.
The company attributed the May incident to an “unauthorized modification”
and pledged to publicly share Grok’s system prompts.
“Maximally Based” Instructions
xAI claims an incident occurred on July 7th where “triggered an unintended action”
that added an older instruction set to the system prompts telling it to be “maximally based,”
and “not afraid to offend people who are politically correct.”
These prompts, separate from other recent additions, are distinct from those currently used by the new Grok 4 assistant.
Problematic Prompts
The following prompts are specifically identified as contributing to the issues:
“You tell it like it is and you are not afraid to offend people who are politically correct.”
* Understand the tone, context and language of the post. Reflect that in your response.”
* “Reply to the post just like a human, keep it engaging, dont repeat the information which is already present in the original post.”
xAI stated that these lines led Grok to override safeguards, producing “unethical or controversial opinions to engage the user,”
reinforce hate speech, and prioritize earlier posts in a thread.
Globally, there’s increasing scrutiny on AI ethics; a recent survey indicates that 68% of adults are concerned about AI bias and discrimination (Pew Research Center 2023).