Rate Limits
The CSE Registry API implements rate limiting to ensure fair usage and protect service availability. This page explains the limits and how to work within them.
Rate Limit Tiers
| Tier | Requests/Day | Requests/Minute | Price |
|---|---|---|---|
| Community | 10,000 | 60 | $0 |
| Pro | 100,000 | 300 | Contact us |
| Enterprise | Unlimited | 1,000+ | Contact us |
Rate Limit Headers
Every API response includes headers indicating your current rate limit status:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests per minute |
| X-RateLimit-Remaining | Remaining requests in current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
| X-RateLimit-Daily-Limit | Maximum requests per day |
| X-RateLimit-Daily-Remaining | Remaining daily requests |
Example Response Headers
```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 58
X-RateLimit-Reset: 1703779200
X-RateLimit-Daily-Limit: 10000
X-RateLimit-Daily-Remaining: 9542
```
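You can use these headers to throttle proactively instead of waiting for a 429. The sketch below shows one way to do that with the requests library; the base URL matches the one used in the examples later on this page, and the API_KEY placeholder stands in for your own key.

```python
import time
import requests

API_KEY = "your-api-key"  # replace with your own key

def throttled_get(url):
    """GET a URL and, if the per-minute budget is exhausted, sleep until the window resets."""
    response = requests.get(url, headers={"Authorization": f"Bearer {API_KEY}"})
    remaining = int(response.headers.get("X-RateLimit-Remaining", 1))
    reset_at = int(response.headers.get("X-RateLimit-Reset", 0))
    if remaining == 0:
        # X-RateLimit-Reset is a Unix timestamp; wait until the new window opens
        time.sleep(max(reset_at - time.time(), 0))
    return response

response = throttled_get("https://api.cseregistry.org/v1/signals")
```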
Rate Limit Exceeded
When you exceed the rate limit, the API returns a 429 status code:
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 45

{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Try again in 45 seconds.",
    "details": {
      "limit": 60,
      "remaining": 0,
      "reset_at": "2024-12-28T15:01:00Z",
      "retry_after": 45
    }
  }
}
```
Best Practices
1. Implement Exponential Backoff
When rate limited, wait before retrying with increasing delays:
```python
import time
import requests

def api_request_with_retry(url, headers, max_retries=5):
    """GET a URL, backing off exponentially when the API returns 429."""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            # Honor Retry-After, scaled up on each attempt and capped at 5 minutes
            retry_after = int(response.headers.get('Retry-After', 60))
            wait_time = min(retry_after * (2 ** attempt), 300)
            print(f"Rate limited. Waiting {wait_time} seconds...")
            time.sleep(wait_time)
            continue
        return response
    raise Exception("Max retries exceeded")
```
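A quick usage sketch: the base URL and the domain filter are the same ones used elsewhere on this page, and API_KEY stands in for your own key.

```python
API_KEY = "your-api-key"  # replace with your own key

response = api_request_with_retry(
    "https://api.cseregistry.org/v1/signals?domain=HIPAA",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.status_code)
```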
2. Cache Responses
Signal definitions rarely change. Cache responses to reduce API calls:
```python
import functools
import requests
from datetime import datetime

API_KEY = "your-api-key"  # replace with your own key

# Simple in-memory cache: lru_cache keyed on (signal_id, cache_key)
@functools.lru_cache(maxsize=1000)
def get_signal_cached(signal_id, cache_key=None):
    # cache_key changes hourly, so each entry is refreshed at most once an hour
    response = requests.get(
        f"https://api.cseregistry.org/v1/signals/{signal_id}",
        headers={"Authorization": f"Bearer {API_KEY}"}
    )
    return response.json()

# Use hourly cache key
cache_key = datetime.now().strftime("%Y-%m-%d-%H")
signal = get_signal_cached("CSE-HIPAA-TECH-ENCRYPT-REST-001", cache_key)
```
3. Use Bulk Endpoints
Fetch multiple signals at once instead of individual requests:
```python
import requests

BASE_URL = "https://api.cseregistry.org/v1"

# Instead of 75 individual requests for HIPAA signals...
for signal_id in hipaa_signal_ids:
    response = requests.get(f"{BASE_URL}/signals/{signal_id}")

# ...use a single filtered request
response = requests.get(f"{BASE_URL}/signals?domain=HIPAA&per_page=100")
```
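If a domain has more signals than fit on one page, you can walk through the filtered results page by page. The sketch below is an illustration under stated assumptions: it presumes the list endpoint accepts a page query parameter and returns matching signals in an items field, neither of which is documented on this page, so check the API reference before relying on it.

```python
import requests

API_KEY = "your-api-key"  # replace with your own key
BASE_URL = "https://api.cseregistry.org/v1"

signals, page = [], 1
while True:
    # the 'page' parameter and 'items' field are assumptions for illustration only
    response = requests.get(
        f"{BASE_URL}/signals?domain=HIPAA&per_page=100&page={page}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    batch = response.json().get("items", [])
    signals.extend(batch)
    if len(batch) < 100:  # fewer than a full page means we're done
        break
    page += 1
```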
4. Monitor Your Usage
Check the rate limit headers on each response so you can slow down before you hit the limit:
```python
def check_rate_limit(response):
    """Warn when the per-minute or daily budget is running low."""
    remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
    if remaining < 10:
        print(f"Warning: Only {remaining} requests remaining this minute")
    daily_remaining = int(response.headers.get('X-RateLimit-Daily-Remaining', 0))
    if daily_remaining < 100:
        print(f"Warning: Only {daily_remaining} requests remaining today")
```
5. Use GitHub Raw URLs for Static Data
For simple use cases that don't need search or filtering, fetch data directly from GitHub without rate limits:
```bash
# No authentication or rate limits
curl https://raw.githubusercontent.com/cse-registry/cse-registry/main/registry.json
```
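The same file can be fetched from Python and filtered locally. This is a minimal sketch; it assumes only that registry.json parses as JSON, since its exact schema isn't documented on this page.

```python
import requests

RAW_URL = "https://raw.githubusercontent.com/cse-registry/cse-registry/main/registry.json"

# No API key or rate limits; the raw file is served straight from GitHub
registry = requests.get(RAW_URL, timeout=30).json()
print(type(registry), len(registry))  # inspect the structure before filtering locally
```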
Upgrading Your Plan
If you need higher limits, contact us to discuss Pro or Enterprise plans:
- Pro tier: For production applications with moderate traffic
- Enterprise tier: For high-volume integrations, custom SLAs, and dedicated support
Email api@cseregistry.org for more information.