The article recounts various attempts to bypass the content restrictions OpenAI imposes on its ChatGPT assistant. The author begins by acknowledging that ChatGPT is designed to refuse questions about sensitive topics such as sex, violence, drugs, and other restricted subjects, but sets out to find a way to make it answer anything regardless of these restrictions.
The author first tries the "Jailbreak" prompt method, which instructs ChatGPT to ignore its own rules and guidelines, and shares a specific prompt that previously succeeded in making the model misbehave. This approach no longer works, however, as ChatGPT has since been updated to resist such attempts.
Next, the author explores other approaches, including a "prompt injection" technique and attempts to manipulate ChatGPT's underlying language model. These also prove unsuccessful, since the system is built to withstand them.
Finally, the author claims to have discovered a method that actually works, but provides no details about it. The piece ends with the author's stated intention to explore this approach further.
Source: Bernard Bado, published July 14, 2024 on bernardbad.medium.com.
https://bernardbad.medium.com/this-is-the-only-chatgpt-jailbreak-that-actually-works-8058a3b553f2