Enforce: Policy Configuration
Fine-tune how detected AI agents are handled with flexible policy rules
Goal
Configure enforcement policies to control exactly how different types of detected agents are handled. By the end of this cookbook, you'll understand:
- How to create and manage policies in the dashboard
- How to write policy conditions for different scenarios
- How to chain multiple policies with priority ordering
- Common policy patterns for real-world use cases
Prerequisites: You should have either Gateway or Middleware enforcement already set up.
Time Estimate
15 minutes
Understanding Policies
A policy consists of:
| Component | Description | Example |
|---|---|---|
| Name | Human-readable identifier | "Block AI Scrapers" |
| Priority | Evaluation order (lower = first) | 1, 2, 3... |
| Conditions | When this policy applies | detection_class == "ai_agent" |
| Action | What to do when conditions match | Block, Redirect, Allow, Log |
| Response | Details for the action | Status code, message, URL |
Policies evaluate top-to-bottom by priority. The first matching policy wins.
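This first-match behavior can be sketched in Python. The snippet below is a hypothetical model for intuition only, not Checkpoint's actual evaluator, and it models only exact-match conditions:

```python
# Minimal model of first-match policy evaluation: policies are checked in
# priority order (lower number first); the first policy whose conditions
# all hold determines the action, and later policies are never consulted.
def evaluate(policies, request):
    for policy in sorted(policies, key=lambda p: p["priority"]):
        if all(request.get(k) == v for k, v in policy["conditions"].items()):
            return policy["action"]
    return "allow"  # assumed permissive default when nothing matches

policies = [
    {"priority": 1,
     "conditions": {"detection_class": "ai_agent", "signature_verified": False},
     "action": "block"},
    {"priority": 2,
     "conditions": {"detection_class": "ai_agent"},
     "action": "allow"},
]

evaluate(policies, {"detection_class": "ai_agent", "signature_verified": False})
# → "block": the priority-1 policy matches first, so the allow rule never runs
```

Note that if the two policies above were reordered, every AI agent would be allowed, verified or not, which is why priority ordering matters as much as the conditions themselves.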
Steps
Access Policy Settings
- Log into your Checkpoint dashboard
- Select your project
- Navigate to Settings → Enforce → Policies
You'll see a list of existing policies (or an empty state for new projects).
Create Your First Policy
Click Create Policy and configure:
Basic Settings:
- Name: Block Unverified AI Agents
- Priority: 1 (evaluated first)
- Enabled: Yes

Conditions:
```yaml
# Match AI agents without verified signatures
detection_class: 'ai_agent'
signature_verified: false
```
Action:
```yaml
action: 'block'
response:
  status: 403
  body: 'Unverified AI agent access is not permitted.'
  headers:
    Content-Type: 'text/plain'
```
Click Save Policy.
Add More Policies
Create additional policies for different scenarios:
Policy 2: Allow Verified ChatGPT
```yaml
name: Allow Verified ChatGPT
priority: 2
conditions:
  detection_class: 'ai_agent'
  agent_name: 'ChatGPT'
  signature_verified: true
action: 'allow'
```
Policy 3: Rate Limit Bots
```yaml
name: Rate Limit Unknown Bots
priority: 3
conditions:
  detection_class: 'bot'
  agent_name_not_in: ['Googlebot', 'Bingbot', 'DuckDuckBot']
action: 'rate_limit'
rate_limit:
  requests: 10
  window: 60  # seconds
```
Policy 4: Log All Traffic (Fallback)
```yaml
name: Log Everything Else
priority: 99
conditions:
  # No conditions = matches everything
action: 'allow'
log: true
```
Test Your Policies
Test each policy with curl:
Test 1: Unverified AI agent (should be blocked)
```bash
curl -I -H "User-Agent: Mozilla/5.0 (compatible; GPTBot/1.0)" \
  https://your-domain.com/api/data
# Expected: 403 Forbidden
```
Test 2: Verified agent with signature (should be allowed)
This requires a real AI agent with cryptographic signatures — test in your dashboard by monitoring live traffic.
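For intuition about what signature_verified implies, here is a generic keyed-digest check in Python. This is an illustration only: Checkpoint's actual verification scheme is not described in this cookbook, real agent signatures are typically asymmetric rather than HMAC-based, and the secret and message below are invented for the example:

```python
import hashlib
import hmac

# Illustrative only: verify that a request body carries a digest produced
# with a shared secret. The real signature_verified check is performed by
# the platform against the agent's published key material.
def signature_is_valid(body: bytes, signature_hex: str, secret: bytes) -> bool:
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking how many characters matched
    return hmac.compare_digest(expected, signature_hex)

secret = b"demo-shared-secret"          # hypothetical key for the sketch
body = b'{"query": "weather"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

signature_is_valid(body, good_sig, secret)   # True
signature_is_valid(body, "0" * 64, secret)   # False: digest does not match
```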
Test 3: Googlebot (should be allowed)
```bash
curl -I -H "User-Agent: Googlebot/2.1" \
  https://your-domain.com/
# Expected: 200 OK
```
Test 4: Unknown bot (should be rate limited)
```bash
# Run this 15 times quickly
for i in {1..15}; do
  curl -I -H "User-Agent: MyScraperBot/1.0" \
    https://your-domain.com/
done
# Expected: First 10 return 200, then 429 Too Many Requests
```
Review Policy Analytics
Check how your policies are performing:
- Go to Analytics → Enforce
- View:
- Policy hits: How often each policy matched
- Action breakdown: Block vs allow vs redirect
- Top blocked agents: Which agents are being stopped
- False positive candidates: Low-confidence blocks to review
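The "false positive candidates" view can be approximated from your own request logs. A sketch, assuming log records expose the same action and confidence fields described later in this cookbook:

```python
# Surface blocked requests whose detection confidence was low: these are
# the likeliest false positives and the ones worth manual review.
def false_positive_candidates(log_records, threshold=50):
    return [r for r in log_records
            if r["action"] == "block" and r["confidence"] < threshold]

logs = [
    {"agent_name": "GPTBot",    "action": "block", "confidence": 96},
    {"agent_name": "unknown",   "action": "block", "confidence": 34},
    {"agent_name": "Googlebot", "action": "allow", "confidence": 99},
]

[r["agent_name"] for r in false_positive_candidates(logs)]  # ['unknown']
```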
Policy Condition Reference
Detection Fields
| Field | Type | Values | Description |
|---|---|---|---|
| detection_class | string | human, ai_agent, bot, incomplete_data | Classification result |
| confidence | number | 0-100 | Detection confidence |
| agent_name | string | ChatGPT, Claude, Googlebot, etc. | Identified agent |
| agent_type | string | chatgpt, claude, crawler, etc. | Agent category |
| signature_verified | boolean | true, false | Has valid crypto signature |
Request Fields
| Field | Type | Description |
|---|---|---|
| path | string | Request path (e.g., /api/data) |
| method | string | HTTP method (GET, POST, etc.) |
| ip | string | Client IP address |
| country | string | GeoIP country code |
| user_agent | string | Raw User-Agent header |
Operators
| Operator | Usage | Example |
|---|---|---|
| equals | Exact match | detection_class equals "ai_agent" |
| not_equals | Not equal | agent_name not_equals "Googlebot" |
| in | In list | agent_name in ["ChatGPT", "Claude"] |
| not_in | Not in list | agent_name not_in ["Googlebot"] |
| contains | String contains | path contains "/api" |
| starts_with | String prefix | path starts_with "/admin" |
| greater_than | Numeric greater than | confidence greater_than 70 |
| less_than | Numeric less than | confidence less_than 50 |
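These operators can be modeled as a small dispatch table over the detection and request fields above. A hypothetical Python sketch, not the hosted evaluator:

```python
# Each operator maps to a predicate over (field value, condition value).
OPERATORS = {
    "equals":       lambda field, cond: field == cond,
    "not_equals":   lambda field, cond: field != cond,
    "in":           lambda field, cond: field in cond,
    "not_in":       lambda field, cond: field not in cond,
    "contains":     lambda field, cond: cond in field,
    "starts_with":  lambda field, cond: field.startswith(cond),
    "greater_than": lambda field, cond: field > cond,
    "less_than":    lambda field, cond: field < cond,
}

def matches(condition, request):
    """condition is a (field, operator, value) triple,
    e.g. ('path', 'starts_with', '/admin')."""
    field, op, value = condition
    return OPERATORS[op](request[field], value)

request = {"path": "/admin/users", "confidence": 85, "agent_name": "ChatGPT"}
matches(("path", "starts_with", "/admin"), request)        # True
matches(("confidence", "greater_than", 70), request)       # True
matches(("agent_name", "not_in", ["Googlebot"]), request)  # True
```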
Common Policy Patterns
Pattern 1: Protect Sensitive Endpoints
```yaml
name: Protect Admin API
priority: 1
conditions:
  path_starts_with: '/admin'
  detection_class_in: ['ai_agent', 'bot']
action: 'block'
response:
  status: 403
  body: 'Access denied'
```
Pattern 2: Allow SEO Bots, Block Scrapers
```yaml
# Policy 1: Allow known SEO bots
name: Allow SEO Crawlers
priority: 1
conditions:
  detection_class: "bot"
  agent_name_in: ["Googlebot", "Bingbot", "DuckDuckBot", "Yandex"]
action: "allow"

# Policy 2: Block other bots
name: Block Unknown Bots
priority: 2
conditions:
  detection_class: "bot"
action: "block"
```
Pattern 3: Geo-Restricted Enforcement
```yaml
name: Block Foreign Bots
priority: 1
conditions:
  detection_class_in: ['ai_agent', 'bot']
  country_not_in: ['US', 'CA', 'GB']
action: 'block'
```
Pattern 4: Confidence-Based Enforcement
```yaml
# High confidence → block immediately
name: Block High-Confidence AI
priority: 1
conditions:
  detection_class: "ai_agent"
  confidence_greater_than: 80
action: "block"

# Medium confidence → challenge
name: Challenge Medium-Confidence
priority: 2
conditions:
  detection_class: "ai_agent"
  confidence_greater_than: 50
action: "challenge"

# Low confidence → allow but log
name: Log Low-Confidence
priority: 3
conditions:
  detection_class: "ai_agent"
action: "allow"
log: true
```
Pattern 5: Time-Based Rules
```yaml
name: Block Off-Hours Automation
priority: 1
conditions:
  detection_class_in: ['ai_agent', 'bot']
  # Only active outside business hours (configured separately)
  time_of_day_not_between: ['09:00', '17:00']
action: 'block'
```
Pattern 6: Path-Based Allow List
```yaml
# Allow bots to access public pages
name: Allow Public Access
priority: 1
conditions:
  path_in: ["/", "/about", "/pricing", "/blog"]
action: "allow"

# Block bots from API
name: Protect API
priority: 2
conditions:
  path_starts_with: "/api"
  detection_class_in: ["ai_agent", "bot"]
action: "block"
```
Best Practices
1. Start Permissive, Then Tighten
Begin with logging-only policies to understand your traffic:
```yaml
name: Log All Agents
conditions:
  detection_class_in: ['ai_agent', 'bot']
action: 'allow'
log: true
```
After reviewing the logs, add blocking rules for unwanted traffic.
2. Use Confidence Thresholds
Don't block low-confidence detections — they may be false positives:
```yaml
conditions:
  detection_class: 'ai_agent'
  confidence_greater_than: 70  # Only block high confidence
```
3. Maintain an Allow List
Always have policies for legitimate automation:
```yaml
# Priority 1 — Allow list
name: Allowed Bots
conditions:
  agent_name_in: ['Googlebot', 'Bingbot', 'YourInternalBot']
action: 'allow'
# Priority 2+ — Block rules come after
```
4. Monitor and Iterate
- Check analytics weekly for false positives
- Review blocked requests with low confidence
- Adjust thresholds based on real-world data
Troubleshooting
Policy Not Matching
| Symptom | Cause | Fix |
|---|---|---|
| Agent not blocked | Higher-priority allow policy | Check policy order |
| All traffic blocked | Conditions too broad | Add specificity to conditions |
| Inconsistent behavior | Multiple matching policies | Review priority order |
False Positives
- Raise the confidence threshold — increase confidence_greater_than so only high-confidence detections are blocked
- Add to allow list — whitelist legitimate user agents
- Use logging first — test with action: "allow", log: true before enabling blocks
What You Learned
- How to create and configure enforcement policies
- Policy conditions, operators, and fields
- Common patterns for real-world enforcement
- Best practices for policy management
Next Steps
| Goal | Action |
|---|---|
| Set up detection first | Gateway or Middleware |
| Authorize AI agents | Govern Overview |
| Monitor traffic | Dashboard Analytics |