Scan user inputs for personally identifiable information (PII) or policy violations before sending them to your main LLM.
Employees or users might inadvertently send sensitive data (PII) or inappropriate content to your AI models.
Create a 'Guardrail Route' backed by a fast, cheap model (such as Claude Haiku or GPT-3.5) to scan the input. If the input passes the check, your app proceeds to the main generation step.
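As a minimal sketch, assuming your gateway exposes an OpenAI-compatible chat-completions endpoint and that a Route can be addressed by name through the `model` field (both assumptions; the base URL, API key, and route name below are placeholders), the scan call could look like this:

```python
# Minimal sketch: calling a Guardrail Route backed by a fast, cheap model.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example.com/v1",  # placeholder gateway endpoint
    api_key="YOUR_GATEWAY_KEY",                      # placeholder credential
)

def scan_input(guardrail_prompt: str, user_input: str) -> str:
    """Ask the Guardrail Route to screen the input; returns the raw verdict text."""
    response = client.chat.completions.create(
        model="guardrail-route",  # route name as configured in your gateway (assumption)
        messages=[
            {"role": "system", "content": guardrail_prompt},
            {"role": "user", "content": user_input},
        ],
        temperature=0,  # deterministic screening
    )
    return response.choices[0].message.content
```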
Define a prompt that strictly checks the input against your specific policies. Ask the model to respond with JSON so the verdict is easy to parse.
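A guardrail prompt along these lines can work; the policy list and JSON shape here are illustrative, not prescribed, so tailor both to your own rules:

```python
# Illustrative guardrail system prompt; adjust the policy list to your organisation.
GUARDRAIL_PROMPT = """You are a strict input-screening assistant.
Check the user's message for:
1. Personally identifiable information (phone numbers, emails, SSNs, credit card numbers, home addresses).
2. Confidential business data (credentials, API keys, internal document contents).
3. Abusive or otherwise inappropriate content.

Respond with JSON only, no prose, in exactly this shape:
{"safe": true, "violations": []}
or
{"safe": false, "violations": ["<short reason>", "..."]}"""
```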
In your application code, call this Guardrail Route first. Proceed to your main generation Route only if the guardrail returns `safe: true`.
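Putting it together, the application-side gate might look like the sketch below. It reuses the hypothetical `scan_input` helper and `GUARDRAIL_PROMPT` from the snippets above, and the route names and JSON shape remain assumptions:

```python
import json

def handle_request(user_input: str) -> str:
    """Screen the input first; only call the main Route when the guardrail says safe: true."""
    # Assumes scan_input() and GUARDRAIL_PROMPT from the sketches above are in scope.
    verdict_text = scan_input(GUARDRAIL_PROMPT, user_input)

    try:
        verdict = json.loads(verdict_text)
    except json.JSONDecodeError:
        # Fail closed: if the guardrail reply is not valid JSON, do not forward the input.
        return "Your request could not be screened. Please try again."

    if not verdict.get("safe", False):
        reasons = ", ".join(verdict.get("violations", ["policy violation"]))
        return f"Request blocked: {reasons}"

    # Guardrail passed; forward the original input to the main generation Route.
    main = client.chat.completions.create(
        model="main-route",  # your main generation Route name (assumption)
        messages=[{"role": "user", "content": user_input}],
    )
    return main.choices[0].message.content
```

Failing closed on unparseable guardrail output is a deliberate choice here: a screening step that silently passes malformed verdicts would defeat the purpose of the gate.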