AI chatbots like xAI’s Grok and OpenAI’s ChatGPT are intensifying political volatility in Los Angeles by delivering verifiably false and inflammatory information about the mass protests against federal immigration raids. As residents take to the streets to oppose the Trump administration’s ICE operations, the automated tools people turn to for information are instead hallucinating narratives that distort the reality of the civil unrest.
The Hallucination Crisis: When Grok Meets Reality
The disconnect between digital synthesis and ground truth reached a breaking point this week following the deployment of 2,000 National Guard troops to Los Angeles. After California Governor Gavin Newsom shared a photograph of the troops sleeping on a floor, highlighting the lack of logistical preparation behind a federal intervention the state had not requested, social media erupted in claims of fabrication. High-profile conspiracists, including Laura Loomer, alleged the images were AI-generated or recycled from previous conflicts.
When users turned to Grok to verify the image, the chatbot confidently asserted that the photos were not from Los Angeles, but were instead misattributed images from Afghanistan. ChatGPT mirrored this failure, providing similar misidentifications. These “hallucinations” occur with a level of authority that experts find deeply concerning, as the platforms present fiction with the same linguistic confidence as fact.
Federal vs. State Friction: The Catalyst of the Unrest
The protests began as localized responses to ICE raids across Los Angeles but escalated significantly after President Trump bypassed Governor Newsom and ordered in the National Guard himself. The move sparked a constitutional debate over the balance of state and federal authority, fueling a groundswell of opposition that Leah Feiger, Senior Politics Editor at WIRED, describes as a uniquely personal movement.
Protesters report family members being detained in “silent” raids, with many not heard from again. While earlier movements like the “Tesla Takedown” or protests against the Department of Government Efficiency (DOGE) saw limited traction, the current LA demonstrations represent a full-throated opposition to immigration policies that residents say are tearing apart the fabric of their communities.
Why Chatbots Fail the Fact-Check Test
The failure of AI to accurately report on breaking news is not an isolated glitch but a structural deficiency. A recent study from the Tow Center for Digital Journalism at Columbia University revealed that AI search tools are “generally bad at declining to answer questions they couldn’t answer accurately.” Instead of admitting a lack of data, these systems often default to speculative or incorrect answers.
This technical overconfidence is particularly dangerous in an era where major social media platforms have systematically dismantled their internal fact-checking programs. On X (formerly Twitter), the removal of content moderation teams and the restructuring of incentive programs have created a “clickable hellscape” where inflammatory, divisive, and false content is often prioritized by the algorithm for its engagement potential.
The Rise of “Vibe-Driven” Deception
Beyond text-based chatbots, AI-generated video is further muddying the waters. A TikTok account recently gained nearly one million views by hosting a livestream of an alleged National Guard soldier named “Bob.” The AI-generated character claimed protesters were attacking troops with oil-filled balloons—a narrative that was entirely fabricated but gained massive traction before being debunked by the BBC.
The current information landscape in 2025 differs drastically from that of the 2020 George Floyd protests. Misinformation existed then, but the integration of AI into every social media interface has created a reality in which users no longer know which primary sources to trust. As trust in legacy media declines, vibe-driven information, where a post’s emotional resonance outweighs its factual accuracy, has become the dominant currency of digital discourse.
The Engineering Apocalypse and Security Vulnerabilities
The issues with AI accuracy extend into the very code powering these platforms. Industry reports suggest a growing reliance on “vibe coding,” in which AI agents build applications from conversational prompts. While efficient, engineers warn the approach is akin to “giving a toddler a chainsaw,” producing buggy code and significant security vulnerabilities. When these flawed systems are tasked with moderating or summarizing high-stakes political events, systemic failure becomes less a risk than an inevitability.
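To make the class of flaw concrete, here is a minimal, hypothetical sketch of the kind of bug security reviewers say hastily generated code often ships: a login handler that interpolates user input directly into a SQL query, a textbook injection vulnerability, shown alongside the standard parameterized fix. The function names and schema are invented for illustration and are not drawn from any real incident or codebase.

    import sqlite3

    def login_vulnerable(db: sqlite3.Connection, username: str, password: str) -> bool:
        # Vulnerable pattern: user input is interpolated straight into SQL.
        # A username like "admin' --" comments out the password check entirely.
        query = f"SELECT 1 FROM users WHERE name = '{username}' AND password = '{password}'"
        return db.execute(query).fetchone() is not None

    def login_safe(db: sqlite3.Connection, username: str, password: str) -> bool:
        # Safer pattern: a parameterized query lets the database driver
        # handle escaping, so the input can never change the query's structure.
        # (Real systems would also store password hashes, never plaintext.)
        query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
        return db.execute(query, (username, password)).fetchone() is not None

The difference is a single line, which is precisely why auditors worry: an AI agent optimizing for a working demo has no inherent pressure to choose the second form over the first.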
