
Friendly Reminder to Double-Check Your ChatGPT Output Today

[Screenshot of some wacky ChatGPT output]

So apparently ChatGPT went psychotic sometime yesterday, with multiple users on r/ChatGPT reporting issues ranging from entertaining to terrifying. The incident is a reminder not to blindly rely on the output of LLMs because, like us, sometimes they just go off the rails.

Yo, Gpt4 just went full hallucination mode on me, hasn’t really every happened to me this severely since the early days of gpt3
by u/cbrules3033 in r/ChatGPT

For some reason, the innocuous phrase "Happy listening [music emoji] [music emoji]!" becomes nightmare fuel when repeated 20 times by an AI losing its mind.

Anyone else experiencing ChatGPT losing it?
by u/StackTrace5000 in r/ChatGPT

Knowing nothing about how these models fall over, we'll take OP's word for it that this is straight gibberish.

First time seeing GPT-4 give straight gibberish
by u/Bullroarer_Took in r/ChatGPT

When you have to hit a minimum word count on your 8th grade essay.

Its not just you, GPT is having a stroke
by u/Zenithine in r/ChatGPT

When the Ambien kicks in halfway through a late night email.

ChatGPT…Are You Okay?
by u/CalliGuy in r/ChatGPT

OpenAI started investigating the issue yesterday evening and stated shortly afterward that it had been identified and remediated. We won't pretend to be well-versed in the man behind the curtain of large language models, but it's been suggested that a temperature issue caused the chaos. As in the model's creativity control, not "oh wow, it's hot in this server room."
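For the curious, here's a toy sketch (ours, not OpenAI's) of what the temperature knob actually does during text generation: it scales the model's raw scores before a token is randomly sampled. The vocabulary and logits below are made up purely for illustration.

```python
# Minimal sketch of temperature in token sampling, with a hypothetical
# four-word vocabulary and made-up logits -- not real inference code.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Scale logits by temperature, softmax them, then sample a token index."""
    rng = rng or np.random.default_rng(0)
    scaled = np.array(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

vocab = ["Happy", "listening", "!", "gibberish"]   # hypothetical tokens
logits = [2.0, 1.5, 0.5, -1.0]                     # hypothetical model scores

for t in (0.2, 1.0, 5.0):
    picks = [vocab[sample_next_token(logits, temperature=t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At a low temperature the sampler almost always picks the top-scoring token; crank it up and the long-shot tokens start sneaking in, which is roughly what "the model got way too creative" looks like in practice.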

Oh, and by the way, Reddit just signed a $60 million deal to hand over its data to an unnamed AI company for training. A) Too late, models already gobbled up all that data before Reddit yanked the API hose, and B) do we really want to create pedantic models that feel an overwhelming urge to blurt out WELL ACTUALLY all the time?