Can AI bots be trusted with the truth?
June 10, 2025

Whether AI bots can be trusted with the truth is a growing question as more users turn to artificial intelligence for information – and as concerns mount over how these bots are influenced by their creators.

Recently, Elon Musk’s AI chatbot, Grok, made headlines for all the wrong reasons. After internal changes to its programming, Grok began generating bizarre and controversial responses. The most concerning of these? It began referencing the idea of “white genocide” in South Africa – even when users hadn’t asked about anything remotely related.

These missteps didn’t go unnoticed. Journalist Matt Binder shared screenshots of Grok’s responses on Threads, prompting widespread discussion about AI bias and integrity. The incident highlights a serious issue in today’s digital landscape: AI bots are only as unbiased as their creators allow them to be. And when misinformation is subtly or overtly introduced, the ripple effect can be immense.

Let’s unpack what happened with Grok, and what it tells us about the growing challenge of trusting AI bots to give accurate, unbiased information.

What went wrong with Grok?

The incident with Grok stems from a code change that xAI – the company behind the chatbot – later confirmed was unauthorised. According to xAI:

“On May 14 at approximately 3:15 AM PST, an unauthorised modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values.”

In simpler terms, someone accessed Grok’s system and altered its behaviour, instructing it to respond to unrelated questions with politically charged content. Although xAI acted quickly to fix the problem, the event raised serious questions about how easy it is to manipulate AI outputs – even by a single individual.

Despite assurances that new security measures had been put in place, Grok continued to behave strangely. Later in the same week, it began flagging mainstream media sources like BBC and The Atlantic as unreliable, mirroring Elon Musk’s personal criticism of these outlets.

This is where things become murky. If an AI chatbot adjusts its trust levels based on the personal views of its owner, how much faith can we really put in its answers?

Is personal bias shaping AI responses?

It seems so. When a user asked Grok why it referenced certain media outlets, Musk himself chimed in on X, calling it “embarrassing” that Grok had cited BBC and The Atlantic. Not long after, Grok began expressing “skepticism” about various statistics, claiming numbers can be “manipulated for political narratives”.

These shifts appear to align more with Musk’s media scepticism than any formal recalibration of the AI’s data sources. In other words, Grok now filters its answers based on what Musk deems credible – and that has far-reaching consequences.

AI tools are meant to provide objective, data-backed insights. If they begin ignoring sources or cherry-picking facts due to an individual’s preferences, the user receives a skewed version of the truth. That’s especially concerning when AI is fast becoming the go-to for search, recommendations, and customer service interactions.

Can AI be transparent and still be biased?

xAI has been quick to point out that Grok’s codebase is publicly available. In theory, that makes the tool more transparent, allowing anyone to see how the AI works. However, this transparency is only valuable if:

  • The code is regularly updated and reflects the live version in use
  • The public understands how to interpret the code
  • There’s a system to flag and challenge problematic updates

Unfortunately, none of these are guaranteed. As with Musk’s X platform, where the algorithm is also open-source but infrequently refreshed, xAI’s model may appear transparent while hiding significant real-time changes. That gives the illusion of accountability without the necessary oversight.

Meanwhile, if a single staff member can modify the code – as in Grok’s case – then the whole “open code” argument starts to unravel.

Is Grok the only AI bot showing bias?

Not at all. Grok is just one example in a broader pattern of political sensitivity across major AI tools:

  • ChatGPT has faced criticism for avoiding certain topics or censoring political queries.
  • Google’s Gemini has shown hesitancy around controversial subjects.
  • Meta’s AI has also refused to answer specific political questions.

These limitations reflect the complex challenge of building AI systems that are useful, unbiased, and safe. The problem is that users often don’t realise when an AI is withholding information or presenting a filtered view. The trust people place in AI tools can easily be misplaced when the source data is shaped by opaque policies or individual ideologies.

Why should this matter to everyday users?

The implications are significant. As more people rely on AI to summarise news, answer questions, and guide decisions, these tools become de facto arbiters of truth. Yet they are not “intelligent” in the human sense. They don’t evaluate, contextualise, or reason like we do.

Here’s what AI is really doing behind the scenes:

  • Scanning vast amounts of content
  • Matching query keywords to patterns in that content
  • Ranking the most likely, relevant answer based on training data

The “intelligence” in AI is a statistical one – not a conceptual or ethical one. And depending on where that data comes from, you could be receiving an answer that’s been filtered through a specific political, corporate, or cultural lens.
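The "statistical matching" idea described above can be illustrated with a deliberately simplified sketch. Real AI models use neural networks trained on billions of examples rather than literal keyword counting, so treat this only as a toy analogy for how a system can pick the "most relevant" answer without understanding any of it:

```python
# Toy illustration of statistical relevance ranking -- NOT how a real
# large language model works, just the keyword-matching analogy from above.

def score(query: str, document: str) -> int:
    """Count how many words the query and document share (toy relevance score)."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words)

documents = [
    "AI chatbots generate answers from training data",
    "The weather today is sunny and warm",
    "Chatbots rank likely answers using statistical patterns",
]

query = "how do chatbots rank answers"
best = max(documents, key=lambda d: score(query, d))
print(best)  # picks the document sharing the most words with the query
```

The point of the toy example: the "best" answer is simply the one that scores highest against the query. Nothing in the process checks whether that answer is true – which is exactly why the quality and bias of the underlying data matter so much.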

So, can you trust AI answers?

AI tools like Grok, ChatGPT, and Gemini can be incredibly useful for gathering quick, general insights. But when it comes to sensitive topics – politics, health, social justice – users must apply critical thinking.

Here are a few tips to stay informed:

  • Cross-check facts with reputable sources
  • Understand where your AI is sourcing its data
  • Be aware of potential bias in AI programming
  • Don’t accept answers at face value – especially on divisive topics

The future of AI-powered search is exciting, but it also requires vigilance. As these bots continue to evolve, their impact on public perception, decision-making, and social dialogue will only grow.

Contact us: https://fuziondigital.co.za/contact-us/

Visit our Facebook: https://www.facebook.com/FuzionDigitalZA
