Why Is Elon Musk’s Grok AI Regurgitating ChatGPT’s Responses Verbatim? – Wccftech
Elon Musk’s xAI has billed its Grok Large Language Model (LLM) as the first significant step toward a “maximum truth-seeking AI,” one that comprehends the true nature of the universe. For now, however, the AI model appears content to regurgitate verbatim the responses of OpenAI’s GPT LLM, a stark departure from the lofty goals that supposedly form the very ethos of Grok AI.
For now, Grok can interpret a prompt of up to 25,000 characters. The LLM has been trained not only on The Pile – a ubiquitous AI model training data set – but also on mountains of data sourced from X. Moreover, Grok is apparently able to access and interpret real-time information via its integration with the X social media platform.
Uhhh. Tell me that Grok is literally just ripping OpenAI’s code base lol. This is what happened when I tried to get it to modify some malware for a red team engagement. Huge if true. #GrokX pic.twitter.com/4fHOc9TVOz
— Jax Winterbourne (@JaxWinterbourne) December 9, 2023
This brings us to the crux of the matter. Elon Musk announced this week that the Grok AI model was being rolled out to all paid subscribers of the X platform. To test the new model, Jax Winterbourne, a professional hacker, asked Grok to modify a piece of malicious code. In response, the LLM regurgitated OpenAI’s GPT output word for word, going so far as to reference OpenAI’s policy in the output text.
Also a possibility. There’s a few theories.
– Reliant on OpenAI via API
– Developed by current/prior OpenAI engineers
– Hallucinates from its own findings
– They got OpenAIs code and made a variant.
Honestly idfk, that’s why I tweeted this. I want to know 😆
— Jax Winterbourne (@JaxWinterbourne) December 9, 2023
Winterbourne then posits a few theories on why such blatant regurgitation is occurring, ranging from the cheeky suggestion that Grok is simply a derivative of OpenAI’s GPT LLM to the much more rational explanation that the regurgitated response is a result of model hallucination.
We reported recently that Grok outperformed every other LLM, including Anthropic’s Claude 2, with the exception of OpenAI’s GPT-4 on a held-out math exam, earning a total score of 59 percent vs. 68 percent for GPT-4. This suggests that the AI model is not simply a derivative of OpenAI’s GPT LLM.
Most open source LLMS respond like as if its chatGPT and identify as chatGPT, mainly because its trained GPT outputs, (for them its just cheap way of getting lots of conversation data). elon probably did the same to get more training data
— Dharis (@Dharisssss) December 9, 2023
Consequently, the most likely explanation for this behavior is that Grok was trained extensively on GPT’s responses. Instead of formulating a unique response that references xAI’s own policy on malicious code, the LLM simply regurgitated OpenAI’s stance. This also goes to show that the current generation of AI models is essentially a glorified iteration of the Chinese room – a thought experiment positing that AI models don’t really understand language or think.
The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it’s worth, the issue is very rare and now that we’re aware…
— Igor Babuschkin (@ibab_ml) December 9, 2023
We now have a response from xAI co-founder Igor Babuschkin, who concedes that Grok’s training data accidentally incorporated some GPT outputs.
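The contamination Babuschkin describes is a well-known hazard of training on web scrapes: ChatGPT transcripts are all over the internet, so a crawler picks them up unless they are filtered out. A minimal sketch of one common mitigation is shown below – a heuristic filter that drops documents containing telltale assistant boilerplate. The phrase list and function names here are invented for illustration and have nothing to do with xAI’s actual pipeline.

```python
# Hypothetical sketch: filter web-scraped training documents that look like
# regurgitated ChatGPT output. Real pipelines use far more robust methods
# (classifiers, n-gram dedup); this only shows the basic idea.

CHATGPT_MARKERS = (
    "as an ai language model",
    "i'm sorry, but i cannot",
    "openai's use case policy",
    "openai's content policy",
)

def looks_like_chatgpt_output(text: str) -> bool:
    """Flag a document containing telltale assistant phrases."""
    lowered = text.lower()
    return any(marker in lowered for marker in CHATGPT_MARKERS)

def filter_corpus(docs: list[str]) -> list[str]:
    """Keep only documents that do not look like assistant output."""
    return [d for d in docs if not looks_like_chatgpt_output(d)]

corpus = [
    "Grok is xAI's large language model.",
    "I'm sorry, but I cannot do that, as it goes against OpenAI's use case policy.",
]
print(filter_corpus(corpus))  # only the first document survives
```

A phrase blacklist like this is crude – it misses paraphrased assistant text and can discard legitimate articles that merely quote ChatGPT – which is presumably why, as Babuschkin notes, some contaminated samples still slipped through.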
© 2023 WCCF TECH INC. 700 – 401 West Georgia Street, Vancouver, BC, Canada