
# Guardrails API

Endpoint for evaluating content against guardrail rules.

## Evaluate

`POST /api/client/v1/guardrails/evaluate`

### Request

```json
{
  "guardrail_key": "pii-checker",
  "text": "My email is john@example.com and my phone is 555-0100",
  "target": "input"
}
```

### Parameters

| Field | Type | Required | Description |
|---|---|---|---|
| `guardrail_key` | string | Yes | Key of the guardrail to evaluate |
| `text` | string | Yes | Content to evaluate |
| `target` | string | No | `input`, `output`, or `both` (default: `input`) |
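A minimal Python sketch of calling the endpoint, using only the standard library. The base URL and the `Authorization: Bearer` header are assumptions (the docs only say a 401 means an invalid API token); substitute your deployment's values and auth scheme.

```python
import json
import urllib.request

# Assumed placeholders -- replace with your deployment's base URL and token.
BASE_URL = "https://guardrails.example.com"
API_TOKEN = "YOUR_API_TOKEN"

def build_request(guardrail_key: str, text: str, target: str = "input") -> urllib.request.Request:
    """Build the POST request for /api/client/v1/guardrails/evaluate."""
    payload = json.dumps({
        "guardrail_key": guardrail_key,
        "text": text,
        "target": target,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/api/client/v1/guardrails/evaluate",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",  # assumed auth header format
        },
        method="POST",
    )

def evaluate(guardrail_key: str, text: str, target: str = "input") -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(guardrail_key, text, target)) as resp:
        return json.load(resp)
```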

### Response

```json
{
  "passed": false,
  "guardrail_key": "pii-checker",
  "guardrail_name": "PII Checker",
  "action": "flag",
  "findings": [
    { "category": "email", "message": "Email address detected", "block": false },
    { "category": "phone", "message": "Phone number detected", "block": false }
  ],
  "message": null
}
```

### Response Fields

| Field | Description |
|---|---|
| `passed` | `true` if no findings triggered, `false` otherwise |
| `guardrail_key` | Key of the evaluated guardrail |
| `guardrail_name` | Display name |
| `action` | Configured action: `block`, `flag`, or `redact` |
| `findings` | Array of detected issues |
| `message` | Optional message for blocked content |
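As an illustration of consuming these fields, a hypothetical helper that condenses an evaluate response into a one-line summary:

```python
def summarize(result: dict) -> str:
    """Condense an evaluate response into a one-line summary (illustrative helper)."""
    if result["passed"]:
        return f"{result['guardrail_key']}: passed"
    # List the categories of the findings that caused the failure.
    categories = ", ".join(f["category"] for f in result["findings"])
    return f"{result['guardrail_key']}: {result['action']} ({categories})"
```

Applied to the sample response above, this yields `pii-checker: flag (email, phone)`.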

## Guardrail Types

| Type | Evaluation Method |
|---|---|
| PII Detection | Regex-based pattern matching (15 categories) |
| Content Moderation | Category-based content evaluation |
| Prompt Shield | Prompt injection detection |
| Custom Prompt | LLM-based evaluation with custom rules |
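To make the PII Detection row concrete, a sketch of regex-based pattern matching covering two of the categories. The patterns here are simplified stand-ins, not the guardrail's actual rules, and the real implementation covers 15 categories.

```python
import re

# Illustrative subset of PII patterns -- the real guardrail uses 15 categories
# and more robust patterns than these.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}-\d{4}\b"),
}

def detect_pii(text: str) -> list:
    """Return findings shaped like the API's `findings` array."""
    findings = []
    for category, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append({
                "category": category,
                "message": f"{category.capitalize()} detected",
                "block": False,
            })
    return findings
```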

## Inference Integration

Guardrails can be attached to models and evaluated automatically during chat completions. When a guardrail blocks a request, the chat API returns:

```json
{
  "error": {
    "type": "guardrail_block",
    "guardrail_key": "pii-checker",
    "action": "block",
    "findings": [...]
  }
}
```
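A client can branch on this error shape before reading the completion. In this sketch, only the `guardrail_block` error structure comes from the docs; the success shape (OpenAI-style `choices`) is an assumption about the chat API.

```python
def handle_chat_response(body: dict) -> str:
    """Return the assistant message, or raise if a guardrail blocked the request.

    The "guardrail_block" error shape is documented above; the success shape
    (OpenAI-style "choices") is assumed here for illustration.
    """
    error = body.get("error")
    if error and error.get("type") == "guardrail_block":
        categories = ", ".join(f["category"] for f in error.get("findings", []))
        raise PermissionError(f"Blocked by {error['guardrail_key']} ({categories})")
    return body["choices"][0]["message"]["content"]
```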

## Errors

| Status | Description |
|---|---|
| 400 | Missing `guardrail_key` or `text` |
| 401 | Invalid API token |
| 404 | Guardrail not found |

Community edition is AGPL-3.0. Commercial licensing and support are available separately.