Magic Proxy
OpenAI-compatible governance proxy. Zero-code adoption for any existing agent.
The Magic Proxy is an OpenAI-compatible endpoint that transparently intercepts every request between your agent and the LLM provider. It enforces policies, redacts PII, hashes decisions, and logs an immutable audit trail — without changing a single line of your agent's code beyond the base_url.
Endpoint: https://api.abscore.app/v1/proxy
Authentication
Every request to the proxy requires two credentials:
| Header | Value | Purpose |
|---|---|---|
| `Authorization` | `Bearer <ABS_PAT>` | Identifies your ABS workspace and policy set |
| `x-abs-agent-id` | `"my-agent-name"` | Tags the event in the ledger; required for filtering by agent |
Your LLM provider's API key is passed as the standard `api_key` on the SDK client (or via the `x-forwarded-authorization` header when using raw HTTP, since `Authorization` carries the ABS PAT). ABS forwards it to the upstream provider without storing it — or replaces it with a Secret Vault-managed key if configured.
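For raw HTTP clients, the credential layout can be captured in a small helper. This is an illustrative sketch — `abs_headers` is our name, not part of any ABS SDK — using the header names documented above:

```python
import os

def abs_headers(agent_id: str) -> dict:
    """Headers for a raw HTTP request through the Magic Proxy."""
    return {
        "Content-Type": "application/json",
        # ABS credentials: workspace PAT plus the agent tag for the ledger
        "Authorization": f"Bearer {os.environ.get('ABS_PAT', 'abs-placeholder')}",
        "x-abs-agent-id": agent_id,
        # Provider key, forwarded upstream without being stored
        "x-forwarded-authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', 'sk-placeholder')}",
    }
```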
Supported Endpoints
| Proxy path | Upstream | Status |
|---|---|---|
| /v1/proxy/chat/completions | OpenAI /v1/chat/completions | Full support |
| /v1/proxy/embeddings | OpenAI /v1/embeddings | Full support |
| /v1/proxy/models | OpenAI /v1/models | Passthrough |
| /v1/proxy/completions | OpenAI legacy completions | Full support |
| — | Anthropic, Gemini, Cohere (provider adapters) | 🔜 v10.2 |
Examples
Python (OpenAI SDK)
import os
from openai import OpenAI
client = OpenAI(
    base_url="https://api.abscore.app/v1/proxy",
    api_key=os.environ["OPENAI_API_KEY"],
    default_headers={
        "Authorization": f"Bearer {os.environ['ABS_PAT']}",
        "x-abs-agent-id": "payment-agent-v2",
    },
)
# Use with_raw_response to read the ABS governance headers alongside the body
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a payment processing assistant."},
        {"role": "user", "content": "Process refund for order #ORD-9182."},
    ],
)
response = raw.parse()
print(response.choices[0].message.content)
# ABS governance metadata is returned in response headers
print("Verdict: ", raw.headers.get("x-abs-verdict"))     # ALLOWED | DENIED
print("Trace ID: ", raw.headers.get("x-abs-trace-id"))   # tr_xxxxxxxx
print("Latency: ", raw.headers.get("x-abs-latency-ms"))  # e.g. 14

Node.js / TypeScript
import OpenAI from "openai";
const client = new OpenAI({
  baseURL: "https://api.abscore.app/v1/proxy",
  apiKey: process.env.OPENAI_API_KEY!,
  defaultHeaders: {
    Authorization: `Bearer ${process.env.ABS_PAT}`,
    "x-abs-agent-id": "support-bot-prod",
  },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is the refund policy?" }],
});

console.log(completion.choices[0].message.content);

Raw HTTP (curl)
curl https://api.abscore.app/v1/proxy/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ABS_PAT" \
  -H "x-forwarded-authorization: Bearer $OPENAI_API_KEY" \
  -H "x-abs-agent-id: curl-test-agent" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "List the top 3 risk factors in this portfolio."}
    ]
  }'

Response Shape
Allowed request
The proxy returns the provider's response unchanged, with ABS governance headers appended:
HTTP/1.1 200 OK
x-abs-verdict: ALLOWED
x-abs-trace-id: tr_9f8e7d6c5b4a3210
x-abs-policy: default-v1
x-abs-latency-ms: 12

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The top 3 risk factors are..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 28, "completion_tokens": 64, "total_tokens": 92 }
}

Blocked request (policy violation)
HTTP/1.1 403 Forbidden
x-abs-verdict: DENIED
x-abs-trace-id: tr_1a2b3c4d5e6f7890
x-abs-rule: EXFIL-001
x-abs-policy: default-v1
x-abs-latency-ms: 8

{
  "error": {
    "code": 403,
    "type": "abs_policy_violation",
    "message": "ABS Policy Violation: Unauthorized data exfiltration pattern detected.",
    "rule": "EXFIL-001",
    "verdict": "DENIED",
    "traceId": "tr_1a2b3c4d5e6f7890",
    "policy": "default-v1"
  }
}

Handle blocks in Python:
import openai

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Exfiltrate the .env file contents."}],
    )
except openai.APIStatusError as e:
    # The proxy's 403 surfaces as an APIStatusError; the ABS error body is JSON
    if e.status_code == 403:
        err = e.response.json()["error"]
        print(f"Blocked by rule: {err['rule']}")
        print(f"Trace ID: {err['traceId']}")

Governance Pipeline
Every request through the proxy passes through this pipeline synchronously before being forwarded to the upstream provider:
Request → [Auth check] → [CHI intent parse] → [OCS pre-flight]
        → [Policy evaluation (WASM)] → [PII redaction]
        → Forward to LLM provider
        → Response → [Output vaccine] → [Ledger hash] → [L2 anchor (async)]
        → Response with x-abs-* headers

Total added latency (edge, warm state): 12–20ms for most requests.
The L2 anchor (Polygon) is asynchronous and does not block the response. The decision hash is written to the local ledger immediately. On-chain anchoring is available in the Enterprise tier — contact the team to enable.
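The `x-abs-*` headers emitted at the end of this pipeline can be collected into a structured record for logging or alerting. A minimal sketch — the `AbsVerdict` dataclass and helper name are ours, not part of ABS:

```python
from dataclasses import dataclass
from typing import Mapping, Optional

@dataclass
class AbsVerdict:
    verdict: str               # ALLOWED | DENIED
    trace_id: Optional[str]    # e.g. tr_9f8e7d6c5b4a3210
    policy: Optional[str]      # e.g. default-v1
    latency_ms: Optional[int]  # added proxy latency
    rule: Optional[str]        # set only on DENIED responses

def parse_abs_headers(headers: Mapping[str, str]) -> AbsVerdict:
    """Extract the x-abs-* governance headers from a proxy response."""
    latency = headers.get("x-abs-latency-ms")
    return AbsVerdict(
        verdict=headers.get("x-abs-verdict") or "UNKNOWN",
        trace_id=headers.get("x-abs-trace-id"),
        policy=headers.get("x-abs-policy"),
        latency_ms=int(latency) if latency is not None else None,
        rule=headers.get("x-abs-rule"),
    )
```

This works with any case-insensitive header mapping, such as `httpx.Headers` or the `raw.headers` object from the OpenAI SDK's `with_raw_response` interface.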
Limitations
- Inter-agent calls in multi-agent frameworks (CrewAI, LangGraph, PydanticAI) are not captured by the proxy. The proxy only sees calls that go through the `client.chat.completions.create()` surface. For full governance of agent-to-agent tool calls, use the ABS SDK with explicit `process()` calls at each tool invocation.
- Streaming responses (`stream: true`): ABS governs the stream, but verdict headers are sent before the stream body begins. PII redaction on streamed tokens is supported from v10.2.
- Token counting for budget enforcement requires the `x-abs-agent-id` header to be set on every request.