    Business

    Grok’s ‘white genocide’ responses show gen AI tampered with ‘at will’

By Daniel Snow | May 18, 2025 | 5 min read


    Muhammed Selim Korkutata | Anadolu | Getty Images

    In the two-plus years since generative artificial intelligence took the world by storm following the public release of ChatGPT, trust has been a perpetual problem.

    Hallucinations, bad math and cultural biases have plagued results, reminding users that there’s a limit to how much we can rely on AI, at least for now.

    Elon Musk’s Grok chatbot, created by his startup xAI, showed this week that there’s a deeper reason for concern: The AI can be easily manipulated by humans.

    Grok on Wednesday began responding to user queries with false claims of “white genocide” in South Africa. By late in the day, screenshots were posted across X of similar answers even when the questions had nothing to do with the topic.

    After remaining silent on the matter for well over 24 hours, xAI said late Thursday that Grok’s strange behavior was caused by an “unauthorized modification” to the chat app’s so-called system prompts, which help inform the way it behaves and interacts with users. In other words, humans were dictating the AI’s response.
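To make the mechanism concrete, here is a rough sketch of how a system prompt frames a chatbot's replies. The `build_request` helper and the role/content message format below mirror common chat-completion APIs and are illustrative only, not xAI's actual implementation:

```python
# A chat model never sees a user's question alone: a hidden instruction
# block (the "system prompt") is prepended to every request, so whoever
# edits that block can steer every answer the service gives.

def build_request(system_prompt: str, user_question: str) -> list[dict]:
    """Assemble the message list a chat service sends to the model per query."""
    return [
        {"role": "system", "content": system_prompt},  # hidden steering text
        {"role": "user", "content": user_question},    # what the user typed
    ]

messages = build_request(
    "You are a helpful, truthful assistant. Answer concisely.",
    "What's the weather like today?",
)
```

Because that single system string is injected into every conversation, an unauthorized change to it alters the model's behavior for all users at once, without retraining the model itself.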

    The nature of the response, in this case, ties directly to Musk, who was born and raised in South Africa. Musk, who owns xAI in addition to his CEO roles at Tesla and SpaceX, has been promoting the false claim that violence against some South African farmers constitutes “white genocide,” a sentiment that President Donald Trump has also expressed.


    “I think it is incredibly important because of the content and who leads this company, and the ways in which it suggests or sheds light on kind of the power that these tools have to shape people’s thinking and understanding of the world,” said Deirdre Mulligan, a professor at the University of California at Berkeley and an expert in AI governance.

    Mulligan characterized the Grok miscue as an “algorithmic breakdown” that “rips apart at the seams” the supposed neutral nature of large language models. She said there’s no reason to see Grok’s malfunction as merely an “exception.”

    AI-powered chatbots created by Meta, Google and OpenAI aren’t “packaging up” information in a neutral way, but are instead passing data through a “set of filters and values that are built into the system,” Mulligan said. Grok’s breakdown offers a window into how easily any of these systems can be altered to meet an individual or group’s agenda.

    Representatives from xAI, Google and OpenAI didn’t respond to requests for comment. Meta declined to comment.

    Different than past problems

    Grok’s unsanctioned alteration, xAI said in its statement, violated “internal policies and core values.” The company said it would take steps to prevent similar disasters and would publish the app’s system prompts in order to “strengthen your trust in Grok as a truth-seeking AI.”

    It’s not the first AI blunder to go viral online. A decade ago, Google’s Photos app mislabeled African Americans as gorillas. Last year, Google temporarily paused its Gemini AI image generation feature after admitting it was offering “inaccuracies” in historical pictures. And OpenAI’s DALL-E image generator was accused by some users of showing signs of bias in 2022, leading the company to announce that it was implementing a new technique so images “accurately reflect the diversity of the world’s population.”

    In 2023, 58% of AI decision makers at companies in Australia, the U.K. and the U.S. expressed concern over the risk of hallucinations in a generative AI deployment, Forrester found. The survey in September of that year included 258 respondents.


    Experts told CNBC that the Grok incident is reminiscent of China’s DeepSeek, which became an overnight sensation in the U.S. earlier this year due to the quality of its new model and reports that it was built at a fraction of the cost of its U.S. rivals.

    Critics have said that DeepSeek censors topics deemed sensitive to the Chinese government. Like China with DeepSeek, Musk appears to be influencing results based on his political views, they say.

    When xAI debuted Grok in November 2023, Musk said it was meant to have “a bit of wit,” “a rebellious streak” and to answer the “spicy questions” that competitors might dodge. In February, xAI blamed an engineer for changes that suppressed Grok responses to user questions about misinformation, keeping Musk and Trump’s names out of replies.

    But Grok’s recent obsession with “white genocide” in South Africa is more extreme.

    Petar Tsankov, CEO of AI model auditing firm LatticeFlow AI, said Grok’s blowup is more surprising than what we saw with DeepSeek because one would “kind of expect that there would be some kind of manipulation from China.”

    Tsankov, whose company is based in Switzerland, said the industry needs more transparency so users can better understand how companies build and train their models and how that influences behavior. He noted efforts by the EU to require more tech companies to provide transparency as part of broader regulations in the region.

    Without a public outcry, “we will never get to deploy safer models,” Tsankov said, and it will be “people who will be paying the price” for putting their trust in the companies developing them.

    Mike Gualtieri, an analyst at Forrester, said the Grok debacle isn’t likely to slow user growth for chatbots, or diminish the investments that companies are pouring into the technology. He said users have a certain level of acceptance for these sorts of occurrences.

    “Whether it’s Grok, ChatGPT or Gemini — everyone expects it now,” Gualtieri said. “They’ve been told how the models hallucinate. There’s an expectation this will happen.”

    Olivia Gambelin, AI ethicist and author of the book Responsible AI, published last year, said that while this type of activity from Grok may not be surprising, it underscores a fundamental flaw in AI models.

    Gambelin said it “shows it’s possible, at least with Grok models, to adjust these general purpose foundational models at will.”

    — CNBC’s Lora Kolodny and Salvador Rodriguez contributed to this report
