Guardrails

Validate AI-generated content with JSON checks, regex patterns, hallucination detection, and PII filtering.

The Guardrails block validates content produced by earlier blocks in a workflow. It supports four validation modes: JSON structure checks, regex pattern matching, LLM-based hallucination detection against a knowledge base, and PII (Personally Identifiable Information) detection with optional masking.

Chain this block after an AI or any data-generating block to gate downstream actions on output quality.

Setup

No external API credentials are required for JSON or regex validation.

For hallucination detection, you need:

  • A knowledge base created in AACFlow (Settings → Knowledge Bases).
  • An LLM provider credential (any supported provider) — the block uses the model to score groundedness.

For PII detection, no external credentials are needed; detection runs locally via the built-in entity recognizer.

Operations

  • guardrails_validate: Validate the input content according to the chosen validation type and return passed (boolean), plus type-specific outputs such as score, reasoning, detectedEntities, or maskedText.
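The output fields named above can be sketched as a TypeScript shape. Only the field names (passed, score, reasoning, detectedEntities, maskedText) come from the operation description; the interface itself and its optional-field layout are assumptions for illustration:

```typescript
// Hypothetical shape of the guardrails_validate output. Which optional
// fields appear depends on the validation type chosen.
interface GuardrailsResult {
  passed: boolean;              // present for every validation type
  score?: number;               // hallucination: groundedness, 0-10
  reasoning?: string;           // hallucination: model explanation
  detectedEntities?: string[];  // pii: entity types found
  maskedText?: string;          // pii in mask mode: redacted content
}

// Gate downstream logic on the shared `passed` flag.
function shouldContinue(result: GuardrailsResult): boolean {
  return result.passed;
}
```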

Validation types

  • json: Check that the input is valid JSON.
  • regex: Test the input against a regular expression pattern.
  • hallucination: Score how well the input is grounded in a knowledge base (0–10). Scores below the confidence threshold fail.
  • pii: Detect PII entities (names, emails, phone numbers, IDs, etc.) and either block the request or mask the entities.
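To make these semantics concrete, here is a minimal local sketch of the json, regex, and threshold checks. This is illustrative only: the block's actual implementation lives in guardrails.ts, and the function names and threshold parameter are hypothetical:

```typescript
// json: passes iff the input parses as JSON.
function validateJson(input: string): boolean {
  try {
    JSON.parse(input);
    return true;
  } catch {
    return false;
  }
}

// regex: passes iff the pattern matches somewhere in the input.
function validateRegex(input: string, pattern: string): boolean {
  return new RegExp(pattern).test(input);
}

// hallucination: passes iff the groundedness score (0-10) meets the
// configured confidence threshold.
function passesThreshold(score: number, threshold: number): boolean {
  return score >= threshold;
}
```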

Example workflow

After an AI block generates a customer response, pass it through Guardrails with pii detection in mask mode before sending via email. Use a Condition block on <guardrails.passed> to stop the workflow if validation fails.
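The mask-mode step in this workflow can be sketched locally. The patterns below are naive stand-ins for the block's built-in entity recognizer (covering only emails and phone numbers), and the function name is hypothetical:

```typescript
// Simplified sketch of pii detection in mask mode: find entities, record
// their types, and replace each match with a bracketed label.
function maskPii(text: string): { detectedEntities: string[]; maskedText: string } {
  const patterns: Array<[string, RegExp]> = [
    ["EMAIL", /[\w.+-]+@[\w-]+\.[\w.]+/g],
    ["PHONE", /\+?\d[\d\s-]{7,}\d/g],
  ];
  const detectedEntities: string[] = [];
  let maskedText = text;
  for (const [label, re] of patterns) {
    if (maskedText.match(re)) {
      detectedEntities.push(label);
      maskedText = maskedText.replace(re, `[${label}]`);
    }
  }
  return { detectedEntities, maskedText };
}
```

A failed or masked result would then flow into the Condition block on <guardrails.passed> as described above.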

  • Block source: apps/aacflow/blocks/blocks/guardrails.ts
