Guardrails
Validate AI-generated content with JSON checks, regex patterns, hallucination detection, and PII filtering.
The Guardrails block validates content produced by earlier blocks in a workflow. It supports four validation modes: JSON structure checks, regex pattern matching, LLM-based hallucination detection against a knowledge base, and PII (Personally Identifiable Information) detection with optional masking.
Chain this block after an AI or any data-generating block to gate downstream actions on output quality.
Setup
No external API credentials are required for JSON or regex validation.
For hallucination detection, you need:
- A knowledge base created in AACFlow (Settings → Knowledge Bases).
- An LLM provider credential (any supported provider) — the block uses the model to score groundedness.
For PII detection, no external credentials are needed; detection runs locally via the built-in entity recognizer.
Operations
| Operation | Description |
|---|---|
| `guardrails_validate` | Validates the input content according to the chosen validation type and returns `passed` (boolean), plus type-specific outputs such as `score`, `reasoning`, `detectedEntities`, or `maskedText`. |
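As an illustration of the type-specific outputs listed above, the results for the hallucination and PII modes might look like the following. These are hypothetical values for demonstration only; field names are taken from the description above, and exact shapes may vary.

```typescript
// Hypothetical hallucination-mode result: a groundedness score below the
// confidence threshold fails validation.
const hallucinationResult = {
  passed: false,
  score: 4, // groundedness score on a 0-10 scale
  reasoning: "The response makes a claim not supported by the knowledge base.",
};

// Hypothetical pii-mode result in mask mode: entities are detected and
// replaced in the masked text.
const piiResult = {
  passed: true,
  detectedEntities: ["EMAIL", "PHONE"],
  maskedText: "Contact [EMAIL] or [PHONE] for support.",
};
```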
Validation Types
| Type | Description |
|---|---|
| `json` | Checks that the input is valid JSON. |
| `regex` | Tests the input against a regular expression pattern. |
| `hallucination` | Scores how well the input is grounded in a knowledge base (0–10). Scores below the confidence threshold fail. |
| `pii` | Detects PII entities (names, emails, phone numbers, IDs, etc.) and either blocks the request or masks the entities. |
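To make the validation modes concrete, here is a simplified sketch of the kind of checks the `json`, `regex`, and `pii` types perform. This is not the actual `guardrails.ts` implementation: the real PII mode uses a full entity recognizer, while this sketch only masks email addresses via a regex for illustration.

```typescript
// Simplified illustration of three validation modes (not the block's source).
type GuardrailResult = {
  passed: boolean;
  detectedEntities?: string[];
  maskedText?: string;
};

// json mode: input must parse as valid JSON.
function validateJson(input: string): GuardrailResult {
  try {
    JSON.parse(input);
    return { passed: true };
  } catch {
    return { passed: false };
  }
}

// regex mode: input must match the given pattern.
function validateRegex(input: string, pattern: string): GuardrailResult {
  return { passed: new RegExp(pattern).test(input) };
}

// pii mode (mask): detect entities and replace them in the output text.
// Only emails here; the real recognizer covers names, phone numbers, IDs, etc.
function maskPii(input: string): GuardrailResult {
  const emailRe = /[\w.+-]+@[\w-]+\.[\w.]+/g;
  const detected = input.match(emailRe) ?? [];
  return {
    passed: detected.length === 0,
    detectedEntities: detected,
    maskedText: input.replace(emailRe, "[EMAIL]"),
  };
}
```

In block-mode terms: a failed `pii` check in block mode would stop the workflow, whereas mask mode passes the `maskedText` downstream instead.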
Example Workflow
After an AI block generates a customer response, pass it through Guardrails with `pii` detection in mask mode before sending via email. Use a Condition block on `<guardrails.passed>` to stop the workflow if validation fails.
Links
- Block source: apps/aacflow/blocks/blocks/guardrails.ts

