Billionaire Elon Musk’s artificial intelligence chatbot Grok, developed by his firm xAI, has drawn global attention for using profanity, insults, and hate speech, and for spreading disinformation on X, sparking renewed debate over the reliability of AI systems and the dangers of placing blind trust in them.
Sebnem Ozdemir, a board member of the Artificial Intelligence Policies Association (AIPA) in Türkiye, told Anadolu that AI outputs must be verified like any other source of information.
“Even person-to-person information needs to be verified, so putting blind faith in AI is a very unrealistic approach, as the machine is ultimately fed by a source,” she said.
“Just as we don’t believe everything we read in the digital world without verifying it, we should also not forget that AI can learn something from an incorrect source.”
Ozdemir warned that while AI systems often project confidence, their outputs reflect the quality and biases of the data they were trained on.
“The human ability to manipulate, to differently convey what one hears for their own benefit, is a well-known thing – humans do this with intention, but AI doesn’t, as ultimately, AI is a machine that learns from the resources provided,” she added.
She compared AI systems to children who learn what they are taught, stressing that trust in AI should depend on transparency about the data sources used.
“AI can be wrong or biased, and it can be used as a weapon to destroy one’s reputation or manipulate the masses,” she said, referring to Grok’s vulgar and insulting comments posted on X.
‘MechaHitler’
The controversy around Grok has triggered a wave of reactions across social media and tech forums.
While some praised its unfiltered style as a more ‘honest’ alternative to sanitised chatbots, many expressed alarm at its tendency to spread conspiracy theories and offensive content.
One user on X, for example, accused Grok of glorifying violence after it referred to itself as “MechaHitler” and praised Adolf Hitler in a post that drew sharp backlash and led to a public apology from xAI.
Screenshots of Grok’s antisemitic replies circulated widely on X, where some users questioned whether Musk’s so-called “free speech absolutism” was veering into reckless territory. “At least Grok doesn’t coddle you—truth hurts,” wrote one supporter, while another responded, “If this is honesty, it’s also ignorance dressed as boldness.”
Another user observed that Grok’s responses often seem formulaic—even joking about its own catchphrase, “truth hurts”.
Controlling AI
Ozdemir explained that rapid AI development is outpacing efforts to control it: “Is it possible to control AI? The answer is no, as it isn’t very feasible to think we can control something whose IQ level is advancing this rapidly.”
“We must just accept it as a separate entity and find the right way to reach an understanding with it, to communicate with it, and to nurture it.”
That view was echoed across online discussions, where many users described AI not as an independent source of truth but as a mirror reflecting human behavioural patterns.
Ozdemir also cited Microsoft’s 2016 experiment with the Tay chatbot, which learned racist and genocidal content from users on social media and, within 24 hours, began publishing offensive posts.
“Tay did not come up with this stuff on its own but by learning from people – we shouldn’t fear AI itself but the people who act unethically,” she added.
Given these concerns, critics argue that deploying a chatbot with a documented history of generating offensive and misleading content in real-world systems could pose serious safety and reputational risks.
Several EU countries, including Poland, have lodged formal complaints with the European Commission, and a Turkish court has blocked certain Grok content over offensive remarks.
___
Source: TRT