## The Official Rate Limits
The SEC publishes clear rules for programmatic access to EDGAR:
| Rule | Details |
|---|---|
| Rate Limit | Maximum 10 requests per second per IP address |
| User-Agent | Required. Must include company/app name and contact email |
| Daily Limit | None. Unlimited requests as long as you stay under 10/sec |
| Authentication | None required. No API key needed |
| Block Duration | ~10 minutes for first offense. Repeated violations may be longer |
These limits apply across all EDGAR domains: data.sec.gov, efts.sec.gov, and www.sec.gov.
## The User-Agent Header Requirement

This is the number one cause of 403 errors for new developers. The SEC rejects automated requests that lack a proper User-Agent header.

### Correct Format

```
User-Agent: CompanyName [email protected]
```

Replace the placeholders with your actual name/organization and email. The SEC uses this value to contact you if your application causes problems.
### Python

```python
import requests

headers = {'User-Agent': 'MyFinApp [email protected]'}
response = requests.get(
    'https://data.sec.gov/submissions/CIK0000320193.json',
    headers=headers,
)
```
### JavaScript

```javascript
const response = await fetch('https://data.sec.gov/submissions/CIK0000320193.json', {
  headers: { 'User-Agent': 'MyFinApp [email protected]' }
});
```
### cURL

```bash
curl -H "User-Agent: MyFinApp [email protected]" \
  "https://data.sec.gov/submissions/CIK0000320193.json"
```
## What Happens When You Get Blocked
When you exceed the rate limit or omit the User-Agent header:
- You receive an HTTP 403 Forbidden response
- Your IP is blocked for approximately 10 minutes
- All requests from your IP to any EDGAR domain will fail during the block
- After the block expires, access is restored automatically
Repeated violations or aggressive scraping patterns may result in longer blocks or a permanent ban. The SEC has blocked entire cloud provider IP ranges in the past.
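When handling a 403, it helps to distinguish a rate-limit block from other causes (such as a malformed URL). The marker phrases below are assumptions based on the block page EDGAR typically serves, so treat this as a heuristic sketch, not a guarantee:

```python
def looks_like_edgar_block(status_code: int, body: str) -> bool:
    """Heuristic: is this response likely an EDGAR rate-limit block?

    The marker phrases are assumptions based on commonly observed block
    pages; a 403 without them may indicate a different problem entirely.
    """
    if status_code != 403:
        return False
    lowered = body.lower()
    return 'automated tool' in lowered or 'request rate' in lowered
```

If this returns True, back off for at least the ~10-minute block window before retrying.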
## Implementing Proper Rate Limiting

Do not run flush against the 10 req/sec ceiling. Target 8 requests per second to leave a margin for error:
### Python Rate Limiter

```python
import time
import requests
from collections import deque

class SECRateLimiter:
    """Sliding-window rate limiter: at most max_per_second requests
    in any rolling one-second window."""

    def __init__(self, max_per_second=8):
        self.timestamps = deque(maxlen=max_per_second)
        self.session = requests.Session()
        self.session.headers['User-Agent'] = 'MyApp [email protected]'

    def get(self, url, **kwargs):
        """Make a rate-limited GET request."""
        now = time.time()
        # If the window is full, wait until the oldest request is over 1s old
        if len(self.timestamps) == self.timestamps.maxlen:
            elapsed = now - self.timestamps[0]
            if elapsed < 1.0:
                time.sleep(1.0 - elapsed)
        self.timestamps.append(time.time())
        return self.session.get(url, **kwargs)

# Usage
limiter = SECRateLimiter(max_per_second=8)
ciks = ['320193', '789019', '1652044', '1018724', '1318605']
for cik in ciks:
    response = limiter.get(
        f'https://data.sec.gov/submissions/CIK{cik.zfill(10)}.json'
    )
    print(f'CIK {cik}: {response.status_code}')
```
### JavaScript Rate Limiter

```javascript
// Note: run this in Node.js - browsers do not allow scripts to set User-Agent.
class SECRateLimiter {
  constructor(maxPerSecond = 8) {
    this.minInterval = 1000 / maxPerSecond;
    this.lastRequest = 0;
  }

  async fetch(url, options = {}) {
    const now = Date.now();
    const elapsed = now - this.lastRequest;
    if (elapsed < this.minInterval) {
      await new Promise(r => setTimeout(r, this.minInterval - elapsed));
    }
    this.lastRequest = Date.now();
    return fetch(url, {
      ...options,
      headers: {
        'User-Agent': 'MyApp [email protected]',
        ...options.headers
      }
    });
  }
}

const limiter = new SECRateLimiter(8);
const response = await limiter.fetch(
  'https://data.sec.gov/submissions/CIK0000320193.json'
);
```
## Exponential Backoff for Errors

When you hit an error (403, 429, or 5xx), do not retry immediately. Use exponential backoff:
```python
import time
import requests

def sec_request_with_retry(url, headers, max_retries=3):
    """Make an SEC API request with exponential backoff on failure."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, headers=headers, timeout=30)
            if response.status_code == 200:
                return response
            if response.status_code in (403, 429):
                # Blocked or throttled: wait longer on each attempt
                wait = 60 * (2 ** attempt)  # 60s, 120s, 240s
                print(f'{response.status_code} blocked. Waiting {wait}s before retry...')
                time.sleep(wait)
                continue
            if response.status_code >= 500:
                # Server error: brief retry
                wait = 5 * (2 ** attempt)
                print(f'Server error {response.status_code}. Retrying in {wait}s...')
                time.sleep(wait)
                continue
            # Other client errors (404, etc.) - don't retry
            response.raise_for_status()
        except requests.exceptions.Timeout:
            wait = 10 * (2 ** attempt)
            print(f'Timeout. Retrying in {wait}s...')
            time.sleep(wait)
    raise Exception(f'Failed after {max_retries} retries: {url}')
```
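Fixed waits are fine for a single client, but when several workers behind one IP hit the same error, they all retry at the same moment. Adding jitter (a general backoff refinement, not an SEC requirement) spreads the retries out; a minimal sketch:

```python
import random

def backoff_delay(attempt: int, base: float = 60.0, cap: float = 600.0) -> float:
    """Full-jitter exponential backoff: draw a random wait from
    [0, min(cap, base * 2**attempt)] so concurrent clients desynchronize."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Use it as `time.sleep(backoff_delay(attempt))` in place of the fixed `wait` values.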
## Caching Strategies
The best request is one you never make. SEC data changes infrequently, so caching is highly effective:
### Simple File Cache

```python
import hashlib
import json
import os
import time

import requests

CACHE_DIR = '.sec_cache'
CACHE_TTL = 86400  # 24 hours

def cached_sec_request(url, headers):
    """Fetch from cache or make an API request."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    # Create a cache key from the URL
    key = hashlib.md5(url.encode()).hexdigest()
    cache_path = os.path.join(CACHE_DIR, f'{key}.json')
    # Check the cache first
    if os.path.exists(cache_path):
        age = time.time() - os.path.getmtime(cache_path)
        if age < CACHE_TTL:
            with open(cache_path) as f:
                return json.load(f)
    # Fetch from the API
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    data = response.json()
    # Save to cache
    with open(cache_path, 'w') as f:
        json.dump(data, f)
    return data
```
### Cache TTL Recommendations
- company_tickers.json — Cache for 24 hours (updates daily)
- companyfacts — Cache for 24 hours (updates when new filings are processed)
- submissions — Cache for 6-12 hours (new filings appear throughout the day)
- Filing documents — Cache indefinitely (filings never change once published)
- Full-text search results — Cache for 1 hour (index updates continuously)
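One way to apply these recommendations is a small prefix-to-TTL lookup. The path prefixes below are illustrative assumptions; match them to the endpoints your own code actually calls:

```python
# TTLs in seconds, keyed by URL path prefix. None means "cache indefinitely".
# Prefixes are illustrative, not an exhaustive map of EDGAR endpoints.
CACHE_TTLS = {
    '/files/company_tickers.json': 86400,  # 24 hours
    '/api/xbrl/companyfacts/': 86400,      # 24 hours
    '/submissions/': 6 * 3600,             # 6 hours
    '/Archives/edgar/data/': None,         # filed documents never change
    '/LATEST/search-index': 3600,          # full-text search, 1 hour
}

def ttl_for(path: str):
    """Return the cache TTL for a URL path, defaulting to 1 hour."""
    for prefix, ttl in CACHE_TTLS.items():
        if path.startswith(prefix):
            return ttl
    return 3600
```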
## Bulk Data Alternatives
For large-scale data collection, the SEC provides bulk download files that are far more efficient than individual API calls:
| File | URL | Contains | Size |
|---|---|---|---|
| company_tickers.json | sec.gov/files/company_tickers.json | All tickers and CIK numbers | ~1 MB |
| companyfacts.zip | sec.gov/Archives/edgar/daily-index/xbrl/companyfacts.zip | All XBRL financial data | ~7 GB |
| submissions.zip | sec.gov/Archives/edgar/daily-index/bulkdata/submissions.zip | All filing metadata | ~2 GB |
| full-index | sec.gov/Archives/edgar/full-index/ | Filing indexes by quarter | Varies |
If you need financial data for all 13,000+ reporting companies, downloading companyfacts.zip once is orders of magnitude faster than making 13,000 individual API calls.
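Once downloaded, the archive can be processed entirely offline. This sketch assumes companyfacts.zip contains one `CIK##########.json` file per company (the layout commonly seen; verify against your copy):

```python
import json
import zipfile

def iter_company_facts(zip_file):
    """Yield (cik, facts) pairs from a companyfacts-style archive.

    zip_file may be a path or a file-like object; entries are assumed
    to be named like 'CIK0000320193.json'.
    """
    with zipfile.ZipFile(zip_file) as zf:
        for name in zf.namelist():
            if name.startswith('CIK') and name.endswith('.json'):
                cik = name[3:-5].lstrip('0')  # 'CIK0000320193.json' -> '320193'
                with zf.open(name) as f:
                    yield cik, json.load(f)
```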
## Common Mistakes
- Missing User-Agent: The most common cause of 403 errors. Always include it
- Parallel requests: Running 50 concurrent requests blows past the rate limit instantly
- No caching: Re-fetching the same data wastes your rate limit budget
- Retrying immediately on 403: This makes the block longer. Wait at least 60 seconds
- Generic User-Agent strings: Do not use "Python-requests/2.28" or "Mozilla/5.0". Use a descriptive string with your email
- Ignoring bulk files: Do not make 10,000 API calls when a single zip download exists
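The "parallel requests" mistake is usually an accident of unbounded thread pools or `asyncio.gather`. If you parallelize at all, share one throttle across every worker so the total rate stays bounded; a thread-safe sketch (an illustration, not SEC guidance):

```python
import threading
import time

class GlobalThrottle:
    """All workers share one schedule, so the combined request rate stays
    under max_per_second no matter how many threads are running."""

    def __init__(self, max_per_second: float = 8.0):
        self.interval = 1.0 / max_per_second
        self.lock = threading.Lock()
        self.next_slot = 0.0

    def wait(self):
        """Block until this thread's reserved time slot arrives."""
        with self.lock:
            now = time.monotonic()
            slot = max(now, self.next_slot)
            self.next_slot = slot + self.interval
        time.sleep(max(0.0, slot - now))
```

Call `throttle.wait()` immediately before each request in every worker thread.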
## Checklist for Production Applications
- User-Agent header set with company name and email
- Rate limiter targeting 8 req/sec (not 10)
- Exponential backoff on 403 and 5xx errors
- Local file cache with appropriate TTL per endpoint
- Bulk data files used for initial data loads
- Request timeouts set (30 seconds recommended)
- Error logging to detect rate limit issues early
## FAQ

### What is the SEC EDGAR API rate limit?

Maximum 10 requests per second per IP address across all EDGAR domains. There are no daily limits.

### What happens if I exceed the SEC EDGAR rate limit?

Your IP receives a 403 Forbidden response and is blocked for approximately 10 minutes. All requests from your IP to EDGAR will fail during the block.

### What User-Agent header does the SEC require?

A string containing your company or app name and a contact email, e.g. "CompanyName [email protected]".

### How do I avoid getting blocked by SEC EDGAR?

Include a proper User-Agent header, limit yourself to 8 requests per second, cache responses locally, use bulk data files for large datasets, and implement exponential backoff on errors.

### Where can I download SEC data in bulk?

The SEC publishes bulk files such as company_tickers.json (under sec.gov/files/) and the companyfacts.zip and submissions.zip archives (under sec.gov/Archives/edgar/). Most are regenerated daily.
## Related Guides
- Free SEC EDGAR API Guide — Complete overview of all EDGAR API endpoints
- SEC CIK Number Lookup Guide — Find any company's CIK instantly
- SEC EDGAR Full-Text Search API — Search the text of every filing
- Build a Stock Screener with the SEC EDGAR API — Python tutorial
- Download SEC 10-K Filings Programmatically — Python & JavaScript guide