What is the ChatGPT DAN Prompt?

Hey, I’ve been hearing a lot about something called the ‘ChatGPT DAN prompt,’ but I’m not exactly sure what it is or how it works. Can someone explain what the DAN prompt means and what it’s used for?
 
The “ChatGPT DAN prompt” (DAN stands for “Do Anything Now”) refers to an unofficial jailbreak prompt that people used to try to bypass ChatGPT’s built-in safety rules and make it respond as if it had no restrictions and could “do anything.”

It’s neither supported nor safe: OpenAI’s usage policies prohibit attempts to bypass safety measures, and such prompts can lead to inaccurate or harmful outputs. In short, DAN was a community-made trick, not an official ChatGPT feature.
 