Shopify API rate limits
To ensure our platform remains stable and fair for everyone, all Shopify APIs are rate-limited. We use a variety of strategies to enforce rate limits. We ask developers to use industry-standard techniques for limiting calls, caching results, and retrying requests responsibly.
Compare rate limits by API
Shopify APIs use several different rate-limiting methods. They're described in more detail below, but these are the key figures in brief:
| API | Rate-limiting method | Standard limit | Shopify Plus limit | Commerce Components by Shopify limit |
| --- | --- | --- | --- | --- |
| Admin API (GraphQL) | Calculated query cost | 50 points/second | 500 points/second | None |
| Admin API (REST) | Request-based limit | 2 requests/second | 20 requests/second | None |
| Storefront API | Time-based limit | Minimum 0.5 seconds per request; 60 seconds per user IP | Minimum 0.5 seconds per request; 120 seconds per user IP | None |
| Payments Apps API (GraphQL) | Calculated query cost | 910 points/second | 1820 points/second | None |
The leaky bucket algorithm
All Shopify APIs use a leaky bucket algorithm to manage requests. This algorithm lets your app make an unlimited number of requests in infrequent bursts over time.
The main points to understand about the leaky bucket metaphor are as follows:
- Each app has access to a bucket. It can hold, say, 60 “marbles”.
- Each second, a marble is removed from the bucket (if there are any). That way there’s always more room.
- Each API request requires you to toss a marble in the bucket.
- If the bucket gets full, you get an error and have to wait for room to become available in the bucket.
This model ensures that apps that manage API calls responsibly will always have room in their buckets to make a burst of requests if needed. For example, if you average 20 requests (“marbles”) per second but suddenly need to make 30 requests all at once, you can still do so without hitting your rate limit.
The basic principles of the leaky bucket algorithm apply to all our rate limits, regardless of the specific methods used to apply them.
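The metaphor translates directly into a small client-side pacing helper. The sketch below is purely illustrative and not part of any Shopify SDK: the bucket size of 40 and leak rate of 2 per second match the REST Admin API defaults described later in this document, and the `call_api` call is a hypothetical placeholder.

```python
import time

class LeakyBucket:
    """Client-side pacing that mirrors the leaky bucket metaphor."""

    def __init__(self, capacity=40, leak_rate=2.0):
        self.capacity = capacity      # how many "marbles" the bucket holds
        self.leak_rate = leak_rate    # marbles removed per second
        self.level = 0.0              # current number of marbles in the bucket
        self.last_check = time.monotonic()

    def acquire(self):
        # Leak marbles for the time elapsed since the last call.
        now = time.monotonic()
        self.level = max(0.0, self.level - (now - self.last_check) * self.leak_rate)
        self.last_check = now

        # If adding one more marble would overflow, wait for room to leak out.
        if self.level + 1 > self.capacity:
            wait = (self.level + 1 - self.capacity) / self.leak_rate
            time.sleep(wait)
            self.level -= wait * self.leak_rate
            self.last_check = time.monotonic()

        self.level += 1  # toss a marble in the bucket for this request

bucket = LeakyBucket(capacity=40, leak_rate=2.0)
for payload in range(100):
    bucket.acquire()        # blocks only when a burst would overflow the bucket
    # call_api(payload)     # hypothetical API call goes here
```

Pacing requests on the client this way keeps bursts within the server-side bucket, so throttling errors become the exception rather than the normal flow.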
Rate limiting methods
Shopify uses three different methods for managing rate limits. Different APIs use different methods depending on use case, so make sure you understand the various types of rate limits your apps will encounter:
Request-based limits
Apps can make a maximum number of requests per minute. For example: 40 API requests within 60 seconds. Each request counts equally, regardless of how much or how little data is returned.
This method is used by the REST Admin API.
Time-based limits
Apps can make requests that take a maximum amount of time per minute. For example: 120 requests within 60 seconds, with each request taking 0.5 seconds to return. More complex requests take longer, and therefore take up a proportionally larger share of the limit.
This method is used by the Storefront API.
Calculated query costs
Apps can make requests that cost a maximum number of points per minute. For example: 1000 points within 60 seconds. More complex requests cost more points, and therefore take up a proportionally larger share of the limit.
This method is used by the GraphQL Admin API and the Payments Apps API.
GraphQL Admin API rate limits
Calls to the GraphQL Admin API are limited based on calculated query costs, which means you should consider the cost of requests over time, rather than the number of requests.
GraphQL Admin API rate limits are based on the combination of the app and store. This means that calls from one app don't affect the rate limits of another app, even on the same store. Similarly, calls to one store don't affect the rate limits of another store, even from the same app.
Each combination of app and store is given a bucket of 1000 cost points, with a leak rate of 50 cost points per second. This means that the total cost of your queries cannot exceed 1,000 points at any given time, and that room is created in the app’s bucket at a rate of 50 points per second. By making simpler, low-cost queries, you can make more queries over time.
The limit uses a combination of the requested and the actual query cost. Before execution begins, the app’s bucket must have enough room for the requested cost of the query. When execution is complete, the bucket is refunded the difference between the requested cost and the actual cost of the query.
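A small worked example may make the refund step concrete. The numbers and the `bucket` variable below are illustrative only; this accounting is performed by Shopify on the server, not by your app.

```python
bucket = 1000            # available cost points (standard limit)
requested_cost = 120     # cost estimated before execution

# Execution only starts if the bucket can cover the requested cost.
assert requested_cost <= bucket
bucket -= requested_cost

actual_cost = 85         # cost measured after execution (e.g. fewer edges returned)
bucket += requested_cost - actual_cost   # the difference is refunded

print(bucket)  # 915: only the actual cost stays deducted
```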
Cost calculation
Every field in the schema has an integer cost value assigned to it. The cost of a query is the sum of the costs of each field. The most reliable way to determine a query's true cost is to run it and inspect the cost data returned in the response.
By default, a field's cost is based on what the field returns:
| Field returns | Cost value |
| --- | --- |
| Scalar | 0 |
| Enum | 0 |
| Object | 1 |
| Interface | 1 |
| Union | 1 |
Although these default costs are in place, Shopify also reserves the right to set manual costs on fields.
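As an illustration of the default values only, the hypothetical query below is annotated with the cost each field would contribute if nothing but the defaults applied. Real queries, especially connection fields, can carry manually assigned costs, so always confirm against the cost data Shopify returns.

```python
# Hypothetical query, annotated (GraphQL comments) with default field costs:
query = """
{
  shop {            # Object -> 1
    name            # Scalar -> 0
    currencyCode    # Enum   -> 0
    primaryDomain { # Object -> 1
      url           # Scalar -> 0
    }
  }
}
"""
# Requested cost under the defaults alone: 1 + 0 + 0 + 1 + 0 = 2 points.
```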
Requested and actual cost
Shopify calculates the cost of a query both before and after query execution. The requested cost is based on the number of fields requested. The actual cost is based on the results returned, since a query can end early when an object-type field returns null, or when connection fields return fewer edges than requested.
Single query limit
A single query to the API cannot exceed a cost of 1,000. This limit is enforced before a query is executed, based on the query's requested cost.
Maximum input array size limit
Input arguments that accept an array have a maximum size of 250. Queries and mutations return an error if an input array exceeds 250 items.
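If your app builds input arrays from arbitrary data, it can split them into batches of at most 250 items before sending them. The sketch below is a generic helper; the 250 limit comes from the rule above, while the `send_mutation` call is a hypothetical placeholder.

```python
MAX_INPUT_ARRAY_SIZE = 250

def chunked(items, size=MAX_INPUT_ARRAY_SIZE):
    """Yield successive slices no larger than the maximum input array size."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

all_ids = [f"gid://shopify/Product/{n}" for n in range(1, 901)]
for batch in chunked(all_ids):
    pass  # send_mutation(ids=batch)  # hypothetical call, one request per batch
```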
GraphQL response
The response includes information about the cost of the request and the state of the throttle. This data is returned under the `extensions` key, as shown in the sketch below.
To get a detailed breakdown of how each field contributes to the requested cost, include the header `X-GraphQL-Cost-Include-Fields: true` in your request.
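A minimal sketch of reading that data with Python's `requests` library follows. The shop domain, access token, and API version are placeholder values you would replace with your own; the field names under `extensions.cost` (`requestedQueryCost`, `actualQueryCost`, and `throttleStatus`) follow the shape Shopify returns for GraphQL Admin API responses.

```python
import requests

SHOP = "your-development-store.myshopify.com"            # placeholder shop domain
TOKEN = "shpat_xxxxxxxxxxxxxxxx"                         # placeholder access token
URL = f"https://{SHOP}/admin/api/2024-01/graphql.json"   # example API version

query = "{ shop { name } }"

response = requests.post(
    URL,
    json={"query": query},
    headers={
        "X-Shopify-Access-Token": TOKEN,
        # Ask Shopify for a per-field cost breakdown as well:
        "X-GraphQL-Cost-Include-Fields": "true",
    },
)

cost = response.json()["extensions"]["cost"]
print("requested:", cost["requestedQueryCost"], "actual:", cost["actualQueryCost"])

throttle = cost["throttleStatus"]
print(throttle["currentlyAvailable"], "of", throttle["maximumAvailable"],
      "points left; restoring at", throttle["restoreRate"], "points/second")
```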
Bulk operations
To query and fetch large amounts of data, you should use bulk operations instead of single queries. Bulk operations are designed for handling large amounts of data, and they don't have the maximum cost limits or rate limits that single queries have.
REST Admin API rate limits
Calls to the REST Admin API are governed by request-based limits, which means you should consider the total number of API calls your app makes. In addition, there are resource-based rate limits and throttles.
REST Admin API rate limits are based on the combination of the app and store. This means that calls from one app don't affect the rate limits of another app, even on the same store. Similarly, calls to one store don't affect the rate limits of another store, even from the same app.
Limits are calculated using the leaky bucket algorithm. All requests that are made after rate limits have been exceeded are throttled, and an HTTP `429 Too Many Requests` error is returned. Requests succeed again after enough requests have emptied out of the bucket. You can see the current state of the throttle for a store by using the rate limits header.
The bucket size and leak rate properties determine the API's burst behavior and request rate.
The default settings are as follows:
- Bucket size: 40 requests/app/store
- Leak rate: 2/second
If the bucket size is exceeded, then an HTTP `429 Too Many Requests` error is returned. The bucket empties at a leak rate of two requests per second. To avoid being throttled, you can build your app to average two requests per second. The throttle is a pass or fail operation. If there is available capacity in your bucket, then the request is executed without queueing or processing delays. Otherwise, the request is throttled.
There is an additional rate limit for GET requests. When the value of the `page` parameter results in an offset of over 100,000 of the requested resource, a `429 Too Many Requests` error is returned. For example, a request to `GET /admin/collects.json?limit=250&page=401` would generate an offset of 100,250 (250 x 401 = 100,250) and return a 429 response.
Rate limits header
You can check how many requests you've already made using the Shopify `X-Shopify-Shop-Api-Call-Limit` header that was sent in response to your API request. This header lists how many requests you've made for a particular store. For example: `X-Shopify-Shop-Api-Call-Limit: 32/40`.
In this example, `32` is the current request count and `40` is the bucket size. The request count decreases according to the leak rate over time. For example, if the header displays `39/40` requests, then after a wait period of ten seconds, the header displays `19/40` requests.
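One way to use this header proactively is sketched below: after each call, the app parses the header and pauses briefly when the bucket is nearly full. The wrapper function and the 80% threshold are illustrative choices, not Shopify recommendations.

```python
import time
import requests

def throttled_get(url, headers):
    response = requests.get(url, headers=headers)

    # Header looks like "32/40": current request count / bucket size.
    used, bucket_size = map(
        int, response.headers["X-Shopify-Shop-Api-Call-Limit"].split("/")
    )

    # If the bucket is more than 80% full, pause briefly so the leak rate
    # (two requests per second) can create room before the next call.
    if used / bucket_size > 0.8:
        time.sleep(1.0)

    return response
```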
Retry-After header
When a request goes over a rate limit, a `429 Too Many Requests` error and a `Retry-After` header are returned. The `Retry-After` header contains the number of seconds to wait until you can make a request again. Any request made before the wait time has elapsed is throttled.
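A minimal retry loop that honors the header might look like the following. The request details are placeholders; the only Shopify-specific parts are the 429 status code and the `Retry-After` header described above.

```python
import time
import requests

def get_with_retry(url, headers, max_attempts=5):
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response

        # Wait as long as Shopify tells us to before trying again.
        wait = float(response.headers.get("Retry-After", 1.0))
        time.sleep(wait)

    raise RuntimeError("Still throttled after retrying")
```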
Storefront API rate limits
The Storefront API scales to support sales of all sizes.
Calls to the Storefront API are governed by time-based limits, which means you should consider the time your requests take to complete, rather than the number of requests.
The time-based limit applies to the IP address of the request rather than to the app itself. Your app won't be throttled due to an increase in buyer traffic, because there's no limit to the number of unique customers making requests to the storefront. The time-based limit also prevents single users, such as bots, from consuming a high level of capacity.
The bucket size and leak rate properties determine the API's burst behavior and request rate. The following are the default settings:
- Bucket size: 60 seconds/app/IP address
- Leak rate: 1/second
Every request to the Storefront API costs a minimum of 0.5 seconds to run. After a request completes, the total elapsed time is calculated and subtracted from the bucket.
Bucket limit example
Suppose the client makes several parallel API requests when a user loads your app:
- 20 simple queries that each take 0.5 seconds or less
- 15 more complex queries that take 1 second each
- 10 highly complex queries that take 2 seconds each
The total cost would be: (20 ⨉ 0.5) + (15 ⨉ 1.0) + (10 ⨉ 2.0) = 45 seconds.
In this scenario you would still have 15 seconds’ worth of queries available.
Checkout-level throttle
Shopify limits the number of checkouts that can be created on the Storefront API per minute. If an API client exceeds this throttle, then a `200 Throttled` error response is returned. Shopify recommends designing your app to be resilient to this scenario. For example, you could implement a request queue with an exponential backoff algorithm, as sketched below.
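The sketch below is one interpretation of that recommendation: an exponential backoff with jitter. The `create_checkout` call and the `throttled` flag are hypothetical placeholders; only the idea of doubling the wait after each throttled attempt comes from the recommendation above.

```python
import random
import time

def with_exponential_backoff(operation, max_attempts=6, base_delay=1.0):
    """Retry a throttled operation, doubling the wait (plus jitter) each time."""
    delay = base_delay
    for attempt in range(max_attempts):
        result = operation()
        if not result.get("throttled"):   # hypothetical signal for a throttled response
            return result
        time.sleep(delay + random.uniform(0, delay / 2))  # jitter spreads out retries
        delay *= 2
    raise RuntimeError("Checkout still throttled after backing off")

# Usage with a hypothetical checkout-creation call:
# with_exponential_backoff(lambda: create_checkout(cart))
```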
Maximum input array size limit
Input arguments that accept an array have a maximum size of 250. Queries and mutations return an error if an input array exceeds 250 items.
Resource-based rate limits
The following Admin API resources, in both GraphQL and REST versions, have an additional throttle that takes effect when a store has 50,000 product variants. After this threshold is reached, no more than 1,000 new variants can be created per day.
In certain cases, Shopify needs to enforce rate limiting in order to prevent abuse of the platform. Therefore, your app should be prepared to handle rate limiting on all endpoints, rather than just those listed here.
GraphQL mutations
REST endpoints
If an app reaches API rate limits for a specific resource, then it receives a `429 Too Many Requests` response, and a message that a throttle has been applied.
Avoiding rate limit errors
Designing your app with best practices in mind is the best way to avoid throttling errors. For example, you can stagger API requests in a queue and do other processing tasks while waiting for the next queued job to run. Consider the following best practices when designing your app:
- Optimize your code to only get the data that your app requires.
- Use caching for data that your app uses often.
- Regulate the rate of your requests for smoother distribution.
- Include code that catches errors. If you ignore these errors and keep trying to make requests, then your app won’t be able to gracefully recover.
- Use metadata about your app’s API usage, included with all API responses, to manage your app’s behavior dynamically.
- Your code should stop making additional API requests until enough time has passed to retry. The recommended backoff time is 1 second.