Rate Limiting

Oproto APIs implement rate limiting to ensure fair usage and maintain service stability for all users.

Rate Limit Overview

| Limit Type | Default | Window |
| --- | --- | --- |
| Requests per second | 100 | 1 second |
| Requests per minute | 1,000 | 1 minute |
| Requests per day | 100,000 | 24 hours |
Note: Rate limits may vary based on your subscription tier. Contact us for higher limits.


Rate Limit Headers

Every API response includes headers to help you track your usage:

HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1703275200

| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp (in seconds) when the window resets |
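
Since X-RateLimit-Reset is a Unix timestamp in seconds, a client can compute how long to wait until the window refreshes. A minimal sketch (the function name is illustrative, not part of the API):

```javascript
// Milliseconds remaining until the rate-limit window resets.
// `resetTimestamp` is the X-RateLimit-Reset value in Unix seconds.
function msUntilReset(resetTimestamp) {
  return Math.max(0, resetTimestamp * 1000 - Date.now());
}
```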

Handling Rate Limits

When you exceed the rate limit, you'll receive a 429 Too Many Requests response:

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Please retry after 45 seconds.",
    "details": {
      "retryAfter": 45,
      "limit": 1000,
      "window": "1m"
    }
  }
}

The response includes a Retry-After header indicating how long to wait:

HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1703275200
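
Either signal can drive the wait: a hedged sketch that prefers the Retry-After header, falls back to the error body's retryAfter field, then defaults to 60 seconds (the function name and the 60-second default are assumptions, not part of the API):

```javascript
// Resolve the retry delay in seconds for a 429 response.
// `retryAfterHeader` is the Retry-After header value (or null);
// `errorBody` is the parsed JSON error payload (or null).
function getRetryDelaySeconds(retryAfterHeader, errorBody) {
  const fromHeader = parseInt(retryAfterHeader, 10);
  if (!Number.isNaN(fromHeader)) return fromHeader;
  // Fall back to the retryAfter field in the error details, then 60 s.
  return errorBody?.error?.details?.retryAfter ?? 60;
}
```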

Best Practices

1. Monitor Rate Limit Headers

Track your usage proactively to avoid hitting limits:

function checkRateLimits(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);

  if (remaining < limit * 0.1) {
    console.warn(`Rate limit warning: ${remaining}/${limit} requests remaining`);
  }
}
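
The threshold test can also be factored into a pure helper so it is easy to unit-test (the 10% default mirrors the snippet above; the name is illustrative):

```javascript
// True when fewer than `threshold` (default 10%) of the window's
// requests remain.
function isNearLimit(remaining, limit, threshold = 0.1) {
  return remaining < limit * threshold;
}
```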

2. Implement Exponential Backoff

When rate limited, use exponential backoff with jitter:

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function requestWithBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fn();

      if (response.status === 429) {
        // Honor the server's Retry-After hint; fall back to 60 seconds.
        const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;
        await sleep(retryAfter * 1000);
        continue;
      }

      return response;
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;

      const delay = Math.min(1000 * Math.pow(2, attempt), 30000);
      const jitter = Math.random() * 1000;
      await sleep(delay + jitter);
    }
  }

  throw new Error('Rate limited: retries exhausted');
}
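
The delay schedule above can be isolated as a pure function, which makes the cap and jitter easy to verify in isolation (a sketch; parameter names are illustrative):

```javascript
// Exponential backoff delay in ms for a 0-based attempt number:
// doubles from `baseMs`, capped at `capMs`, plus up to `jitterMs`
// of random jitter.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000, jitterMs = 1000) {
  const exponential = Math.min(baseMs * 2 ** attempt, capMs);
  return exponential + Math.random() * jitterMs;
}
```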

3. Use Bulk Endpoints

When available, prefer bulk operations over multiple individual requests:

// ❌ Avoid: Multiple individual requests
for (const id of companyIds) {
  await api.getCompany(id);
}

// ✅ Better: Single bulk request
await api.getCompanies({ ids: companyIds });
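
If a bulk endpoint caps how many IDs it accepts per call (check the endpoint's documentation; any specific cap here would be hypothetical), the list can be split into batches first:

```javascript
// Split an array into chunks of at most `size` items.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```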

4. Cache Responses

Reduce API calls by caching responses when appropriate:

const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedCompany(id) {
  const cached = cache.get(id);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await api.getCompany(id);
  cache.set(id, { data, timestamp: Date.now() });
  return data;
}
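
One caveat with the sketch above: entries are only replaced on read, so the map can grow without bound. A hedged cleanup pass, assuming the same `{ data, timestamp }` entry shape:

```javascript
// Evict entries older than `ttlMs` so the cache stays bounded.
// `now` is injectable for testing.
function evictExpired(cache, ttlMs, now = Date.now()) {
  for (const [key, entry] of cache) {
    if (now - entry.timestamp >= ttlMs) cache.delete(key);
  }
}
```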

5. Spread Requests Over Time

For batch operations, distribute requests evenly:

async function batchProcess(items, ratePerSecond = 10) {
  const delay = 1000 / ratePerSecond;

  for (const item of items) {
    await processItem(item);
    await sleep(delay);
  }
}
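
The spacing logic can be expressed as a pure schedule, which makes the pacing easy to check (the name is illustrative):

```javascript
// Evenly spaced start offsets (in ms) for `n` requests at `ratePerSecond`.
function scheduleOffsets(n, ratePerSecond) {
  const interval = 1000 / ratePerSecond;
  return Array.from({ length: n }, (_, i) => i * interval);
}
```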

Rate Limits by Endpoint

Some endpoints have specific rate limits:

| Endpoint | Limit | Window |
| --- | --- | --- |
| POST /oauth2/token | 20 | 1 minute |
| GET /v1/search/* | 60 | 1 minute |
| POST /v1/*/bulk | 10 | 1 minute |
| All other endpoints | Default (see above) | |

Increasing Your Limits

If you need higher rate limits:

  1. Review your implementation for optimization opportunities
  2. Consider caching and bulk operations
  3. Contact us to discuss enterprise rate limits