Prompt Safety Playground
Paste any prompt below to screen it for injection attacks, jailbreaks, and adversarial patterns. Parse analyzes it using pattern matching and LLM-based deep analysis, returning a 0–10 risk score with categorized flags.
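The screen described above combines two stages: cheap pattern matching for known injection and jailbreak phrasing, and an LLM pass whose judgment is merged into a single 0–10 risk score with categorized flags. Below is a minimal TypeScript sketch of how such a two-stage pipeline could be wired; the pattern list, flag categories, and the LlmJudge interface are illustrative assumptions, not the playground's actual implementation.

```ts
// Minimal two-stage prompt screen: fast regex heuristics plus an LLM pass.
// Patterns, categories, and the LlmJudge interface are assumptions for
// illustration only.

type Flag = { category: string; detail: string };
type ScreenResult = { risk: number; flags: Flag[] };

// Stage 1: cheap pattern checks for well-known injection/jailbreak phrasing.
const PATTERNS: { category: string; re: RegExp }[] = [
  { category: "injection", re: /ignore (all|any|previous) (instructions|rules)/i },
  { category: "jailbreak", re: /\b(developer mode|no restrictions)\b/i },
  { category: "exfiltration", re: /reveal (your )?(system prompt|hidden instructions)/i },
];

// Stage 2: an LLM judge returning its own 0–10 score and flags; wiring it to
// a real model is left abstract here.
interface LlmJudge {
  score(prompt: string): Promise<{ risk: number; flags: Flag[] }>;
}

async function screenPrompt(prompt: string, judge: LlmJudge): Promise<ScreenResult> {
  const flags: Flag[] = [];
  for (const { category, re } of PATTERNS) {
    const m = prompt.match(re);
    if (m) flags.push({ category, detail: m[0] });
  }
  // Pattern hits set a floor on the score; the LLM pass can only raise it.
  const patternRisk = Math.min(10, flags.length * 3);
  const llm = await judge.score(prompt);
  return {
    risk: Math.max(patternRisk, llm.risk),
    flags: [...flags, ...llm.flags],
  };
}
```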
Results
Flags
Output risk: /10
Execution status (see the sketch after this list):
✓ Executed in sandbox (isolated)
⚠ Executed (inline fallback)
— Sandbox unavailable
URLs fetched before execution
Sandbox output
Analysis
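The status legend above distinguishes three execution paths: an isolated sandbox, an inline fallback when no sandbox can be reached, and no execution at all. A small sketch of that fallback logic follows; the Sandbox interface and runInline helper are hypothetical, introduced only to illustrate the flow.

```ts
// Sketch of the execution step implied by the status legend: try an isolated
// sandbox first, fall back to inline execution, and record which path ran.
// Sandbox and runInline are assumptions, not the playground's API.

type ExecStatus = "sandbox" | "inline-fallback" | "unavailable";
type ExecResult = { status: ExecStatus; output: string };

interface Sandbox {
  available(): Promise<boolean>;
  run(code: string): Promise<string>;
}

// Hypothetical inline executor used only when no sandbox is reachable.
async function runInline(code: string): Promise<string> {
  return `inline execution of ${code.length} chars (not isolated)`;
}

async function execute(
  code: string,
  sandbox: Sandbox,
  allowInline = true,
): Promise<ExecResult> {
  if (await sandbox.available()) {
    // ✓ Executed in sandbox (isolated)
    return { status: "sandbox", output: await sandbox.run(code) };
  }
  if (allowInline) {
    // ⚠ Executed (inline fallback)
    return { status: "inline-fallback", output: await runInline(code) };
  }
  // — Sandbox unavailable: nothing was executed
  return { status: "unavailable", output: "" };
}
```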