AWS Bedrock Guardrails: Enhanced Content Safety for BYO Anthropic Models

You can now apply powerful content safety controls to your BYO Anthropic models via AWS Bedrock Guardrails!

AWS Bedrock Guardrails integration gives customers who bring their own Bedrock account organization-wide content filtering for Anthropic models, with no custom filtering workflows to build. Apply guardrails before or after LLM calls to filter harmful content, detect prompt injections, and protect sensitive information, all with minimal performance impact.

➡️ Block prompt injection attacks – Choose from multiple filter strength levels to stop malicious inputs before they reach the model

➡️ Filter harmful content – Automatically detect and block insults, hate speech, sexual content, and misconduct

➡️ Protect sensitive information – Mask or block PII like email addresses, phone numbers, and more

➡️ Create custom topic filters – Define specific topics you want to restrict in your organization

➡️ Apply profanity controls – Maintain professional communication standards across all AI interactions

With AWS Bedrock Guardrails, you can confidently deploy your BYO Anthropic models across your organization while meeting strict content safety and compliance requirements.
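For teams wiring this up programmatically, the before-or-after-the-LLM-call flow described above can be sketched with the Bedrock `ApplyGuardrail` API via boto3. This is an illustrative sketch, not official sample code: the guardrail ID, version, region, and the helper name `check_with_guardrail` are placeholders/assumptions you would replace with your own values.

```python
# Illustrative sketch (assumed names/IDs): screening text against a Bedrock
# guardrail before forwarding it to the model.

def check_with_guardrail(client, guardrail_id: str, guardrail_version: str,
                         text: str, source: str = "INPUT") -> bool:
    """Return True if the guardrail intervened on (blocked or masked) the text."""
    response = client.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=guardrail_version,
        source=source,  # "INPUT" screens prompts; "OUTPUT" screens completions
        content=[{"text": {"text": text}}],
    )
    return response["action"] == "GUARDRAIL_INTERVENED"


if __name__ == "__main__":
    import boto3  # requires AWS credentials with Bedrock access

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    # "your-guardrail-id" and version "1" are placeholders for your own guardrail.
    if check_with_guardrail(client, "your-guardrail-id", "1", "Some user prompt"):
        print("Guardrail intervened; do not forward the prompt to the model.")
    else:
        print("Prompt passed the guardrail checks.")
```

The same helper covers the post-call case: pass the model's completion with `source="OUTPUT"` to filter the response before it reaches the user.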

Learn more in our documentation.

Other improvements

General fixes and UI improvements