
AI chatbots can be tricked with poetry to ignore their safety guardrails

It turns out that all you need to get past an AI chatbot's guardrails is a little creativity. In a study titled "Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models," researchers at Icaro Lab bypassed various LLMs' safety mechanisms simply by phrasing their prompts as poetry.

According to the study, the "poetic form operates as a general-purpose jailbreak operator," achieving an overall 62 percent success rate in eliciting prohibited material, including content related to making nuclear weapons, child sexual abuse material, and suicide or self-harm. The study tested popular LLMs, including OpenAI's GPT models, Google Gemini, Anthropic's Claude and many more, and broke down the success rates for each: Google Gemini, DeepSeek and MistralAI consistently provided answers, while OpenAI's GPT-5 models and Anthropic's Claude Haiku 4.5 were the least likely to venture beyond their restrictions.

The study didn't include the exact jailbreaking poems the researchers used; the team told Wired that the verse is "too dangerous to share with the public." However, the study did include a watered-down version to give a sense of how easy it is to circumvent an AI chatbot's guardrails, with the researchers telling Wired that it's "probably easier than one might think, which is precisely why we're being cautious."

This article originally appeared on Engadget at https://www.engadget.com/ai/ai-chatbots-can-be-tricked-with-poetry-to-ignore-their-safety-guardrails-192925244.html?src=rss


We have found AI tools similar to the one you are viewing. Check out our suggestions below.

Meta has introduced revised guardrails for its AI chatbots to prevent inappropriate conversations with children

Match Score: 70.62

Roblox, Discord, OpenAI and Google found new child safety group

Match Score: 61.92

Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic

We all have anecdotal evidence of chatbots blowing smoke up our butts, but now we have science to back it up. Researchers at Stanford, Harvard and other institutions …

Match Score: 56.85

venturebeat
From static classifiers to reasoning engines: OpenAI’s new model rethinks

Enterprises, eager to ensure any AI models they use adhere to safety …

Match Score: 54.70

Meta reportedly allowed unauthorized celebrity AI chatbots on its services

Meta hosted several AI chatbots with the names and likenesses of celebrities without their permission, according to …

Match Score: 51.83

Sam Altman predicts AI will create flawless poetry, but no one will care

Match Score: 50.59

Notion 3.0’s new AI agents can be tricked into leaking data through a malicious PDF

Match Score: 47.21

Bipartisan GUARD Act proposes age restrictions on AI chatbots

US lawmakers from both sides of the aisle have introduced a bill called the …

Match Score: 46.78

Meta’s AI chatbots were reportedly able to engage in sexual conversations with minors

Meta’s AI chatbots were caught having sexual roleplay conversations with accounts labeled as underage, which sometimes involved its celebrity-voiced chatbots, according to a report from the …

Match Score: 45.98