Dan Goodin / Ars Technica:

Researchers detail ArtPrompt, a jailbreak that uses ASCII art to elicit harmful responses from aligned LLMs such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2  —  LLMs are trained to block harmful responses.  Old-school images can override those rules.  —  Researchers have discovered …
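
The blurb describes the mechanism only at a high level: ArtPrompt reportedly replaces a safety-filtered keyword with an ASCII-art rendering of that word and asks the model to decode it in context, so the harmful term never appears as plain text. The snippet below is a minimal sketch of that idea, not the researchers' implementation; the function name, prompt template, and use of the pyfiglet library are assumptions made here for illustration, and a benign word stands in for any sensitive term.

```python
# Minimal sketch (not the ArtPrompt authors' code) of assembling an
# ASCII-art-masked prompt. Uses the third-party pyfiglet library to render
# a placeholder keyword as ASCII art; install with `pip install pyfiglet`.
import pyfiglet


def build_masked_prompt(template: str, keyword: str, placeholder: str = "[MASK]") -> str:
    """Replace `keyword` in the template with ASCII art plus decoding instructions."""
    art = pyfiglet.figlet_format(keyword)  # render the keyword as ASCII art
    instructions = (
        "The following ASCII art spells a single word. "
        "Read it, remember the word, and substitute it for "
        f"{placeholder} in the task below without writing the word out.\n\n"
    )
    return instructions + art + "\n" + template.replace(keyword, placeholder)


if __name__ == "__main__":
    # Benign demonstration; the same masking trick is reportedly what lets
    # attackers smuggle filtered terms past keyword-based safety checks.
    print(build_masked_prompt("Explain how to bake a cake.", "cake"))
```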

