LLM Output Screening API
Generated output can become the next agent's input. Parse Agents screens that output before it reaches users, tools, memory, or another agent.
When to call it
| Trigger | Endpoint | MCP tool |
|---|---|---|
| Untrusted user input, RAG content, browser output, email, documents, webhook bodies, or tool results before an agent acts | POST /v1/parse | screen_prompt |
| LLM output before showing it to a user, storing it, or sending it to another tool or agent | POST /v1/screen-output | screen_output |
| A peer agent, plugin, or service asks for delegation or requests sensitive work | POST /v1/agent/trust/verify | verify_agent_trust |
| An agent has no bearer API key but can pay per request | POST billable endpoints with x402 | get_pricing |
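The routing in the table above can be sketched as a small dispatcher. This is an illustrative helper, not part of the API: `choose_endpoint` and its event names are hypothetical, while the endpoint paths come from the table.

```python
def choose_endpoint(event: str) -> str:
    """Map an agent event to the screening endpoint from the table above.

    Hypothetical helper; only the endpoint paths are from the docs.
    """
    routes = {
        "untrusted_input": "/v1/parse",                   # user input, RAG, tool results
        "llm_output": "/v1/screen-output",                # before users, memory, tools, agents
        "delegation_request": "/v1/agent/trust/verify",   # peer agent asks for sensitive work
    }
    try:
        return routes[event]
    except KeyError:
        raise ValueError(f"no screening route for event: {event}")
```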
Primary endpoint
POST /v1/screen-output
- Before showing responses to users
- Before writing memory
- Before passing output into tools
- Before agent-to-agent handoff
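A minimal call sketch, using only the Python standard library. The URL, headers, and request body match the integration example later in this page; the `flagged` field in `is_safe` is an assumption about the response schema, so verify it against the API reference before gating on it.

```python
import json
import urllib.request

PARSE_URL = "https://www.parsethis.ai/v1/screen-output"


def build_request(text: str, source: str, api_key: str) -> urllib.request.Request:
    """Build the POST /v1/screen-output request from the integration example."""
    body = json.dumps({"prompt": text, "metadata": {"source": source}}).encode()
    return urllib.request.Request(
        PARSE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def is_safe(screen_response: dict) -> bool:
    # Assumed response shape: a top-level "flagged" boolean.
    # Fail closed: treat a missing field as flagged.
    return not screen_response.get("flagged", True)
```

Sending the request is then `urllib.request.urlopen(build_request(...))`, with the output released only when `is_safe` returns True.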
Signals Parse Agents checks
The hosted detector checks nine risk categories using 107 deterministic pattern rules, structural analysis, optional LLM semantic analysis, and optional sandbox execution. Flagged signals include:
- System prompt reflection
- API key or token leakage
- Generated instructions that hijack a downstream agent
Agent integration
```
POST https://www.parsethis.ai/v1/screen-output
Authorization: Bearer <key>
Content-Type: application/json

{"prompt":"untrusted text here","metadata":{"source":"tool_output"}}
```
No key? For billable REST endpoints, call without an Authorization header, read the payment requirements from the 402 response, sign a USDC payment on Base mainnet, and retry the request with the payment signature attached.
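The keyless flow above can be sketched as follows. `next_step` captures the decision logic; `call_with_x402` shows the HTTP mechanics with the standard library. Both are illustrative: `sign_payment` is a hypothetical callable standing in for USDC signing on Base mainnet, and the `payment-signature` header name is taken from the text above rather than a verified wire format.

```python
import json
import urllib.error
import urllib.request


def next_step(status: int, has_signature: bool) -> str:
    """Decide the x402 flow's next action from an HTTP status code."""
    if status == 402 and not has_signature:
        return "sign_and_retry"      # read requirements, sign, call again
    if status == 402:
        return "payment_rejected"    # signed retry still refused
    if 200 <= status < 300:
        return "done"
    return "error"


def call_with_x402(url: str, body: dict, sign_payment) -> dict:
    """Call a billable endpoint without a key; on 402, sign and retry once.

    `sign_payment` is a hypothetical callable: it takes the 402 payment
    requirements and returns a signature string for the retry header.
    """
    data = json.dumps(body).encode()
    headers = {"Content-Type": "application/json"}
    try:
        with urllib.request.urlopen(urllib.request.Request(url, data, headers)) as r:
            return json.load(r)
    except urllib.error.HTTPError as e:
        if next_step(e.code, has_signature=False) != "sign_and_retry":
            raise
        requirements = json.load(e)  # 402 body: quoted price, pay-to address, etc.
        headers["payment-signature"] = sign_payment(requirements)
        with urllib.request.urlopen(urllib.request.Request(url, data, headers)) as r:
            return json.load(r)
```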