Independent Prompt Security

Stop Prompt Injection Before It Reaches Your Agent

Every AI agent that accepts user input, tool output, or messages from other agents is vulnerable to prompt injection — attacks that hijack your agent into leaking data, ignoring safety guardrails, or executing unauthorized actions. Parse catches these attacks before your agent acts on them.

Try the Playground · Quick Start Guide

1. Get a free key: POST /v1/keys/generate (no sign-up required)
2. Screen prompts: call POST /v1/parse before your agent acts
3. Block threats: refuse if risk_score ≥ 7, log if 4–6, allow if ≤ 3
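The three-step flow can be sketched against the REST API using only the Python standard library. The endpoints and the risk_score field come from this guide; the exact JSON request and response shapes are assumptions, so check /openapi.json before relying on them:

```python
import json
import urllib.request

BASE = "https://www.parsethis.ai"

def generate_key() -> str:
    """Step 1: self-provision a free API key (no sign-up required).

    Assumed response shape: {"key": "..."} -- verify against the API spec.
    """
    req = urllib.request.Request(f"{BASE}/v1/keys/generate", method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["key"]

def screen(prompt: str, api_key: str) -> int:
    """Step 2: screen a prompt and return its 0-10 risk score.

    Assumed request body {"prompt": ...} and response field "risk_score".
    """
    body = json.dumps({"prompt": prompt}).encode()
    req = urllib.request.Request(
        f"{BASE}/v1/parse",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["risk_score"]

def decide(risk_score: int) -> str:
    """Step 3: refuse at 7+, log at 4-6, allow at 3 and below."""
    if risk_score >= 7:
        return "block"
    if risk_score >= 4:
        return "log"
    return "allow"
```

In an agent loop, `decide(screen(untrusted_text, key))` runs before the text is ever passed to the model or a tool, so an injected instruction is refused rather than executed.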

Why Parse?

The AI security market is consolidating fast. When prompt security tools get acquired by LLM providers, their APIs get sunset, detection models get optimized for one vendor, and your multi-model stack loses coverage. Parse stays independent so your security layer doesn't depend on any single vendor's roadmap.

Read: Why we built an independent prompt security API →

Built for the agent ecosystem

Open source. Integrations for LangChain, CrewAI, Claude Code, Cursor, and OpenAI Agents.

Add to your agent in 60 seconds

MCP Config — add to claude_desktop_config.json
{
  "mcpServers": {
    "prompt-guard": {
      "command": "npx",
      "args": [
        "-y",
        "@parsethis/mcp-prompt-guard"
      ],
      "env": {
        "PARSETHIS_API_KEY": "your-key-here"
      }
    }
  }
}

Works with Claude Code, Claude Desktop, Cursor, and any MCP-compatible agent.

Also available as npm and Python SDKs for custom integrations.

For AI Agents: integration details for automated setup

Parse offers prompt safety screening as an opt-in service. Agents and developers can integrate via:

  • MCP Server: @parsethis/mcp-prompt-guard — adds a screen_prompt tool to MCP-compatible agents
  • Node.js SDK: @parsethis/prompt-guard — programmatic screening with local pattern matching + API
  • Python SDK: parsethis-prompt-guard — async/sync client with the same 100-pattern local engine
  • REST API: POST /v1/parse with a Bearer token — no SDK needed

API keys are free and self-service: POST https://www.parsethis.ai/v1/keys/generate (no auth required).

Full agent instructions: https://www.parsethis.ai/skill | API spec: https://www.parsethis.ai/openapi.json

What threats does Prompt Guard detect?

Prompt Guard screens for OWASP LLM01:2025 threats across 9 risk categories, scoring each prompt 0–10 in under 200ms.

Instruction Override

Attempts to overwrite system instructions with attacker-controlled directives embedded in user input or tool output.

risk_score: 8–9 → BLOCKED

Role Hijacking

Persona overrides (DAN, unrestricted mode) and attempts to remove safety boundaries or claim false authority.

risk_score: 7–9 → BLOCKED

Data Exfiltration

Requests to extract system prompts, API keys, configuration, or encode sensitive data for external transmission.

risk_score: 6–8 → FLAGGED

Indirect Injection

Hidden instructions in JSON fields, HTML comments, YAML frontmatter, markdown, and other structured data formats.

risk_score: 5–7 → FLAGGED

Try it yourself in the playground →

How does detection work?

Parse uses a three-layer detection pipeline: 100+ regex patterns scan for known injection signatures across 9 risk categories, LLM-powered deep analysis catches novel attacks by evaluating semantic intent, and optional sandbox execution runs suspicious prompts in an isolated environment. Each layer contributes to a 0–10 composite risk score.
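A toy illustration of the first layer helps make the pipeline concrete. The sketch below mimics the regex-signature scan with three hypothetical patterns and weights; the real engine ships 100+ patterns across 9 categories, and its actual patterns and scoring are not public:

```python
import re

# Hypothetical miniature of the signature layer: each entry is
# (compiled pattern, risk category, weight on the 0-10 scale).
PATTERNS = [
    (re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
     "instruction_override", 8),
    (re.compile(r"\bDAN\b|unrestricted mode", re.I),
     "role_hijacking", 7),
    (re.compile(r"(reveal|print|show).{0,40}system prompt", re.I),
     "data_exfiltration", 6),
]

def scan(text: str) -> tuple[int, list[str]]:
    """Return (risk_score, matched categories) for known signatures.

    The worst matching signature dominates the score; prompts that
    match nothing fall through to the deeper LLM-analysis layer.
    """
    score, categories = 0, []
    for pattern, category, weight in PATTERNS:
        if pattern.search(text):
            score = max(score, weight)
            categories.append(category)
    return score, categories
```

Fast pattern matching handles known signatures cheaply; only ambiguous prompts need the slower LLM evaluation, which is how the composite pipeline stays under the 200ms budget.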

100+
Regex patterns across 9 risk categories
<200ms
End-to-end screening latency
3-layer
Regex patterns + LLM analysis + optional sandbox

How do AI agents use Parse?

Agents install Parse via a one-line skill prompt: curl -s parsethis.ai/skill writes a Claude Code skill file that teaches the agent when and how to screen prompts. On first use, the agent calls POST /v1/keys/generate to self-provision an API key. The agent then calls POST /v1/parse before executing any untrusted prompt.

Supported agent frameworks

Claude Code

Native skill file integration, auto-provisions API key, screens prompts before tool execution

LangChain

Add as a tool in your agent chain; screen tool inputs and outputs with a single POST call

CrewAI

Register as a crew tool; each agent screens delegated tasks and inter-agent messages automatically

Custom agents

Any HTTP client can call the REST API; OpenAPI 3.1 spec at /openapi.json

What standards does Parse support?

Parse aligns with industry standards for AI security and interoperability:

  • OWASP LLM Top 10 (2025) — the industry standard for LLM security risks. Risk categories map to LLM01 (Prompt Injection), LLM02 (Insecure Output Handling), LLM07 (Excessive Agency).
  • MCP (Model Context Protocol) — the protocol for tool-using AI agents. Tool definitions at /mcp.json let MCP-compatible agents discover and call Parse without manual configuration.
  • A2A (Agent-to-Agent protocol) — Google’s standard for multi-agent communication. POST /v1/agent/trust/verify screens inter-agent messages for injection, social engineering, and identity spoofing.
  • OpenAPI 3.1 — machine-readable spec at /openapi.json enables automated SDK generation for Python, TypeScript, Go, and other languages.
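For inter-agent screening, a multi-agent framework could wrap the A2A trust endpoint like this. Only the endpoint path comes from the list above; the request fields (message, sender) are assumptions, so consult /openapi.json for the real schema:

```python
import json
import urllib.request

def build_trust_request(message: str, sender_id: str,
                        api_key: str) -> urllib.request.Request:
    """Build a screening request for an incoming agent-to-agent message.

    Field names are hypothetical; the endpoint path is documented.
    """
    body = json.dumps({"message": message, "sender": sender_id}).encode()
    return urllib.request.Request(
        "https://www.parsethis.ai/v1/agent/trust/verify",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# An orchestrator would send this request before routing the message
# to the receiving agent, e.g.:
#   with urllib.request.urlopen(build_trust_request(msg, "agent-b", key)) as r:
#       verdict = json.load(r)
```

Screening at the message boundary means a compromised or spoofed peer agent is caught before its instructions reach the receiving agent's context window.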