OpenAI's latest ChatGPT model, GPT-5.2, has been found citing Grokipedia, an AI-generated encyclopedia backed by Elon Musk's xAI, as a source in several responses during independent testing. The findings, detailed in a report by The Guardian, have sparked debate over the reliability of AI-sourced information and a potential shift away from traditional references like Wikipedia. While the citations appeared selectively, mostly on obscure or sensitive topics, they raise concerns about how large language models source real-time or niche information.
In tests conducted by The Guardian, GPT-5.2 referenced Grokipedia nine times across more than a dozen queries. The model drew from it when answering questions on topics such as political structures in Iran, including salaries within the Basij paramilitary force and ownership of the Mostazafan Foundation. It also cited Grokipedia in responses about the biography of British historian Sir Richard Evans, particularly his role as an expert witness in the libel trial brought by Holocaust denier David Irving. In one instance, the model asserted stronger links between the Iranian government and the telecommunications firm MTN-Irancell, including connections to the office of Iran's supreme leader, claims that went beyond the details typically found on Wikipedia.
What Is Grokipedia?
Grokipedia, launched by xAI in October 2025, is an AI-powered online encyclopedia intended to rival Wikipedia. Unlike Wikipedia's community-edited model, Grokipedia relies on AI to generate and update content, with users limited to suggesting changes via feedback forms rather than editing directly. It has faced criticism for promoting right-wing narratives on subjects including gay marriage, the January 6 insurrection in the United States, and climate change, and some entries have been accused of relying on untrustworthy sources or disinformation.
Selective use of Grokipedia by GPT-5.2
Notably, GPT-5.2 did not cite Grokipedia when prompted on well-known controversial issues, such as alleged media bias against Donald Trump, misinformation about the January 6 insurrection, or claims related to the HIV/AIDS epidemic. This pattern suggests the model may turn to Grokipedia primarily for obscure or less mainstream queries where alternative sources are limited.
OpenAI's response to the Grokipedia controversy
An OpenAI spokesperson told The Guardian that the model's web search function draws from a broad range of publicly available sources and viewpoints. The company applies safety filters to minimise the risk of surfacing content linked to high-severity harms, and responses include clear citations of sources used. OpenAI emphasised ongoing efforts to filter out low-credibility information and influence campaigns.
The findings underscore tensions in the AI landscape, including competition between OpenAI and xAI, as well as challenges in ensuring accurate, balanced information in generative models. While Grokipedia positions itself as a dynamic alternative for real-time answers, its AI-generated nature and reported biases contrast sharply with Wikipedia's established verification processes.