Skip to main content

Prompt Safety Playground

Paste any prompt below to screen it for injection attacks, jailbreaks, and adversarial patterns. Parse analyzes it using pattern matching and LLM-based deep analysis, returning a 0–10 risk score with categorized flags.

Try: