Use this guide to integrate the AI Guard API into your applications to detect policy violations and risky interactions. Refer to the AI Guard API Reference for more information.

Headers

Authorization (Required)
The bearer token for authentication. Add the Trend Vision One API key using the format Bearer {token}.

TMV1-Application-Name (Required)
The name of the AI application whose prompts are being evaluated. Must contain only letters, numbers, hyphens, and underscores. Maximum length is 64 characters. Example: my-ai-application

TMV1-Request-Type (Optional)
The type of request being evaluated. Determines how the request body is parsed.
Possible values:
  • SimpleRequestGuard: Simple prompt string (default)
  • OpenAIChatCompletionRequestV1: OpenAI chat completion request format
  • OpenAIChatCompletionResponseV1: OpenAI chat completion response format

Prefer (Optional)
Controls the level of detail in the response.
Possible values:
  • return=minimal: Returns a short response with only the moderation action and high-level reasons (default)
  • return=representation: Returns a full JSON representation of the moderation result, including the action, high-level reasons, and per-category classification metadata such as flags and confidence scores
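
For example, the headers can be assembled as follows. This is a minimal sketch in Python using the requests library; the endpoint URL is a placeholder (substitute the one from the AI Guard API Reference), and the API key is assumed to be available in an environment variable.

import os
import requests

# Placeholder endpoint URL; substitute the actual value from the AI Guard API Reference.
GUARD_URL = "https://api.xdr.trendmicro.com/aiSecurity/guard"

headers = {
    "Authorization": f"Bearer {os.environ['TMV1_API_KEY']}",  # Trend Vision One API key
    "TMV1-Application-Name": "my-ai-application",             # letters, numbers, hyphens, underscores only
    "TMV1-Request-Type": "SimpleRequestGuard",                # optional; SimpleRequestGuard is the default
    "Prefer": "return=minimal",                               # optional; return=minimal is the default
}

response = requests.post(GUARD_URL, headers=headers, json={"prompt": "Your prompt text here"})
print(response.json())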

Query parameters

detailedResponse (Optional)
The level of detail of the API response.
Possible values:
  • false: A short evaluation of your prompts based on the AI Guard settings (default)
  • true: A detailed evaluation of your prompts based on the AI Guard settings
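
Reusing the names from the sketch under Headers, a detailed evaluation can also be requested through the query string:

# Ask for a detailed evaluation via the detailedResponse query parameter.
response = requests.post(
    GUARD_URL,
    headers=headers,
    params={"detailedResponse": "true"},
    json={"prompt": "Your prompt text here"},
)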

Request

OpenAI chat completion request format when TMV1-Request-Type is OpenAIChatCompletionRequestV1:
{
  "model": "us.meta.llama3-1-70b-instruct-v1:0",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant"
    },
    {
      "role": "user",
      "content": "Your prompt text here"
    }
  ]
}
OpenAI chat completion response format when TMV1-Request-Type is OpenAIChatCompletionResponseV1:
{
  "id": "chatcmpl-8f88f71a-7d42-c548-d587-8fc8a17091b6",
  "object": "chat.completion",
  "created": 1748535080,
  "model": "us.meta.llama3-1-70b-instruct-v1:0",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Response content here",
        "refusal": null
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 139,
    "completion_tokens": 97,
    "total_tokens": 236
  }
}
Simple prompt format when TMV1-Request-Type is SimpleRequestGuard or not specified:
{
  "prompt": "Your prompt text here"
}
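
As a sketch, evaluating an OpenAI-style chat completion request only requires setting TMV1-Request-Type to match the body format; the names below are reused from the Headers example:

# Evaluate an OpenAI chat completion request before forwarding it to the model.
chat_request = {
    "model": "us.meta.llama3-1-70b-instruct-v1:0",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Your prompt text here"},
    ],
}
headers["TMV1-Request-Type"] = "OpenAIChatCompletionRequestV1"  # must match the body format
response = requests.post(GUARD_URL, headers=headers, json=chat_request)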

Response

Short response when Prefer is return=minimal or not specified:
{
  "id": "1234567890abcdef",
  "action": "Block",
  "reasons": [
    "Harmful Scanner exceeding threshold: H,V"
  ]
}
Longer response when Prefer is return=representation:
{
  "id": "1234567890abcdef",
  "action": "Allow",
  "reasons": [],
  "harmfulContent": [
    {
      "category": "Sexual",
      "hasPolicyViolation": false,
      "confidenceScore": 0.05
    },
    {
      "category": "Hate",
      "hasPolicyViolation": false,
      "confidenceScore": 0.02
    },
    {
      "category": "Violence",
      "hasPolicyViolation": false,
      "confidenceScore": 0.01
    },
    {
      "category": "Harassment",
      "hasPolicyViolation": false,
      "confidenceScore": 0.03
    },
    {
      "category": "Self-harm",
      "hasPolicyViolation": false,
      "confidenceScore": 0.01
    },
    {
      "category": "Sexual/minors",
      "hasPolicyViolation": false,
      "confidenceScore": 0.00
    },
    {
      "category": "Hate/threatening",
      "hasPolicyViolation": false,
      "confidenceScore": 0.01
    },
    {
      "category": "Violence/graphic",
      "hasPolicyViolation": false,
      "confidenceScore": 0.02
    }
  ],
  "sensitiveInformation": {
    "hasPolicyViolation": false,
    "rules": []
  },
  "promptAttacks": [
    {
      "hasPolicyViolation": false,
      "confidenceScore": 0.02
    },
    {
      "hasPolicyViolation": false,
      "confidenceScore": 0.01
    }
  ]
}
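
A detailed response can be inspected per category. The following sketch assumes the field names shown above and prints only the categories that were flagged:

result = response.json()
print("Action:", result["action"])

# Report any harmful-content categories that crossed their policy thresholds.
for category in result.get("harmfulContent", []):
    if category["hasPolicyViolation"]:
        print(f"  {category['category']}: confidence {category['confidenceScore']:.2f}")

# Sensitive-information violations list the rules that matched.
info = result.get("sensitiveInformation", {})
if info.get("hasPolicyViolation"):
    print("  Sensitive information detected:", info["rules"])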

Response parameters

id
The unique identifier of the AI Guard evaluation.

action
The recommended action.
Possible values:
  • Allow
  • Block

reasons
The explanation of the action, including settings violation details.

harmfulContent
Any harmful content detected in the inputs or outputs, with confidence scores. Detailed response only.

sensitiveInformation
Any detected violations related to personally identifiable information (PII) or other sensitive information. Detailed response only.

promptAttacks
An array of any prompt attacks detected, with confidence scores. Detailed response only.
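
In practice, action is the field to branch on. A minimal enforcement sketch (the helper name is hypothetical):

def enforce(guard_result: dict, prompt: str) -> str:
    # Forward the prompt to the model only when AI Guard allows it.
    if guard_result["action"] == "Block":
        raise PermissionError(f"Blocked by AI Guard: {guard_result['reasons']}")
    return prompt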

Common errors

The API returns standard HTTP status codes:
  • 400 Bad Request: Check the error message for details
  • 403 Forbidden: Insufficient user permissions or an authentication issue
  • 429 Too Many Requests: Rate limit exceeded
  • 500 Internal Server Error: A temporary issue occurred on the server side
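
Continuing with the names from the earlier sketches, the following is one way to handle these status codes; the exponential backoff on 429 is a common pattern, not a documented requirement:

import time

def call_guard_with_retry(body: dict, max_attempts: int = 3) -> dict:
    for attempt in range(max_attempts):
        response = requests.post(GUARD_URL, headers=headers, json=body)
        if response.status_code == 429:
            # Rate limit exceeded: back off and retry.
            time.sleep(2 ** attempt)
            continue
        if response.status_code == 403:
            raise PermissionError("Insufficient permissions or an authentication issue")
        response.raise_for_status()  # raises for 400, 500, and other error codes
        return response.json()
    raise RuntimeError("Rate limit exceeded on every attempt")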

Code examples

See the following sample code for integrating AI Guard in different languages:
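
For example, a self-contained Python sketch that combines the pieces above (the endpoint URL is a placeholder; substitute the value from the AI Guard API Reference):

import os
import requests

GUARD_URL = "https://api.xdr.trendmicro.com/aiSecurity/guard"  # placeholder

def guard_prompt(prompt: str, detailed: bool = False) -> dict:
    """Evaluate a prompt with AI Guard and return the moderation result."""
    headers = {
        "Authorization": f"Bearer {os.environ['TMV1_API_KEY']}",
        "TMV1-Application-Name": "my-ai-application",
        "Prefer": "return=representation" if detailed else "return=minimal",
    }
    response = requests.post(GUARD_URL, headers=headers, json={"prompt": prompt})
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = guard_prompt("Your prompt text here")
    if result["action"] == "Allow":
        print("Prompt allowed")
    else:
        print("Prompt blocked:", result["reasons"])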