Rate limiting
Calls to our GraphQL API are rate limited to provide equitable access to the API for everyone and to prevent abuse. We will evolve these limits as we gather more information, and we encourage your feedback. Any changes to limits will be announced in our Slack community's API announcements channel.
We use the leaky bucket algorithm for our rate limiters, which means that your tokens are refilled at a constant rate of LIMIT_AMOUNT / LIMIT_PERIOD.
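To illustrate the refill behavior, here is a minimal Python sketch. LIMIT_AMOUNT and LIMIT_PERIOD are the placeholders from the formula, filled in here with the API-key request limit; this is an illustration of the algorithm, not Linear's implementation.

```python
import time

# Sketch of the leaky-bucket refill described above. LIMIT_AMOUNT and
# LIMIT_PERIOD are placeholders from the formula, not Linear constants.
LIMIT_AMOUNT = 1500      # tokens per window (e.g. the API-key request limit)
LIMIT_PERIOD = 3600.0    # window length in seconds (1 hour)
REFILL_RATE = LIMIT_AMOUNT / LIMIT_PERIOD  # ~0.42 tokens per second

class LeakyBucket:
    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def try_consume(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Tokens refill continuously since the last check, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = LeakyBucket(LIMIT_AMOUNT, REFILL_RATE)
print(bucket.try_consume())  # -> True: a request within budget succeeds
```

Because tokens trickle back continuously, short bursts are allowed up to the bucket's capacity, while sustained traffic is held to the average rate.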
If you temporarily require higher limits, you can request them by contacting Linear support, where we'll review requests on a case-by-case basis.

Avoiding hitting limits

There are ways of using our APIs that will, in most cases, avoid hitting our rate limits.

Polling

We especially discourage polling the API for updates. If you need to know when data changes in Linear, use our Webhook functionality instead.

Fetching unneeded data

Avoid fetching data you don't need by using our filtering functionality. This lets you drill down to specific records only and, in some cases, avoid pagination entirely.
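For example, a query that filters issues down to a single workflow state might look like the following (the field names here follow Linear's issue filter schema; verify them against the API schema for your workspace):

```graphql
query InProgressIssues {
  issues(filter: { state: { name: { eq: "In Progress" } } }) {
    nodes {
      id
      title
    }
  }
}
```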

Ordering data

In certain cases where you do need to fetch all data, we suggest sorting it by the updated timestamp instead of the created timestamp. This way you get the most recently changed data first and can avoid paginating through the entire dataset.
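As a sketch, a query along these lines returns the most recently updated issues first (check the schema for the exact orderBy values your connection accepts):

```graphql
query RecentlyUpdatedIssues {
  issues(orderBy: updatedAt, first: 50) {
    nodes {
      id
      title
      updatedAt
    }
  }
}
```

Once you reach records whose updatedAt is older than your last sync, you can stop paginating.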

Write custom, specific queries

This applies especially if you're using our SDK. If you're fetching many different entities or dependencies, or have specific data needs, we recommend writing your own custom GraphQL queries and using filters to narrow down the data as much as possible.

API request limits

We limit the number of requests you can make to our GraphQL API. To make it easier to keep track and avoid going over the limits, we send back three HTTP response headers on each request.
| Header Name | Description |
| --- | --- |
| X-RateLimit-Requests-Limit | The maximum number of API requests you're permitted to make per hour. |
| X-RateLimit-Requests-Remaining | The number of API requests remaining in the current rate limit window. |
| X-RateLimit-Requests-Reset | The time at which the current rate limit window resets, in UTC epoch seconds. |
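To illustrate, here is a minimal Python sketch that reads these headers. The headers dict stands in for your HTTP client's response headers, and its values are made up:

```python
import time

# Hypothetical header values, shaped like the table above; in real code
# they would come from your HTTP client (e.g. response.headers).
headers = {
    "X-RateLimit-Requests-Limit": "1500",
    "X-RateLimit-Requests-Remaining": "42",
    "X-RateLimit-Requests-Reset": "1700003600",  # UTC epoch seconds
}

limit = int(headers["X-RateLimit-Requests-Limit"])
remaining = int(headers["X-RateLimit-Requests-Remaining"])
reset_at = int(headers["X-RateLimit-Requests-Reset"])

# Seconds until the current window resets (clamped at zero for past times).
seconds_until_reset = max(0, reset_at - int(time.time()))
print(f"{remaining}/{limit} requests left; window resets in {seconds_until_reset}s")
```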

Limits

| Auth method | Amount | Per | Period |
| --- | --- | --- | --- |
| API key | 1,500 | User | 1 hour |
| OAuth App | TBD | User/App | 1 hour |
| Unauthenticated | 60 | IP | 1 hour |
When authenticated using an API key, you can make up to 1,500 requests per hour. Requests are associated with the authenticated user, which means all requests by the same user share the same quota even when using different API keys.
When making unauthenticated requests, you are limited to 60 requests per hour. These requests are associated with the originating IP address instead of the user making the request.
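One way to stay under these quotas is to pause proactively once the Remaining header reaches zero. A minimal sketch, assuming the headers from the table above:

```python
import time

def seconds_to_wait(headers: dict) -> float:
    """Sketch: how long to pause before the next request.

    When X-RateLimit-Requests-Remaining hits 0, wait until the reset time
    the API reported; otherwise proceed immediately.
    """
    remaining = int(headers.get("X-RateLimit-Requests-Remaining", "1"))
    if remaining > 0:
        return 0.0
    reset_at = int(headers.get("X-RateLimit-Requests-Reset", "0"))
    return max(0.0, reset_at - time.time())

# Usage with a real client: time.sleep(seconds_to_wait(response.headers))
```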

API request complexity limits

To protect our system from overly complex and resource-intensive queries, we calculate the complexity of each query based on the amount of requested data.
To make it easier to keep track and avoid going over the limits, we send back four HTTP response headers on each request.
| Header Name | Description |
| --- | --- |
| X-Complexity | The complexity of the query. |
| X-RateLimit-Complexity-Limit | The maximum number of API complexity points you're permitted to request per hour. |
| X-RateLimit-Complexity-Remaining | The number of API complexity points remaining in the current rate limit window. |
| X-RateLimit-Complexity-Reset | The time at which the current rate limit window resets, in UTC epoch seconds. |

Limits

| Auth | Amount | Per | Period |
| --- | --- | --- | --- |
| API key | 250,000 | User | 1 hour |
| OAuth app | TBD | User/App | 1 hour |
| Unauthenticated | 10,000 | IP | 1 hour |
Requests authenticated using an API key can request up to 250,000 points per hour. Requests are associated with the authenticated user, which means all requests by the same user share the same quota even when using different API keys.
Unauthenticated requests are limited to 10,000 points per hour. These requests are associated with the originating IP address instead of the user making the request.

Maximum complexity of a single query

We also cap the complexity of any single query at 10,000 points. A query that exceeds this limit is always rejected.

Understanding query complexity

To protect our systems from overly complex and resource-intensive queries, we calculate the complexity of each query. Each property costs 0.1 points, each object costs 1 point, and each connection multiplies its children's points by the given pagination argument (or the default of 50). The total score is then rounded up to the nearest integer.
Examples:
Let's fetch an object that returns only one user, and request only one property.
The calculation is 1 + 0.1 = 1.1, which rounds up to a complexity of 2.
query WhoAmI {
  user(id: "me") {
    name
  }
}
Let's now fetch the ID, title, and creation time of all the issues we created. This has a complexity of 66. Here's why:
- 1 point: getting the user
- 50 points (50 × 1): getting the issues (assume 50, the default pagination)
- 15 points (50 × 3 × 0.1): getting 3 attributes for each of the 50 issues
query MyCreatedIssues {
  user(id: "me") {
    createdIssues {
      nodes {
        id
        title
        createdAt
      }
    }
  }
}
You can use pagination parameters to specify a limit other than the default 50, letting the complexity calculator know how much data you're trying to fetch. This query, with an explicit limit of the first 10 nodes, has a complexity of 14.
1 + (10 × 1) + (10 × 3 × 0.1) = 14
query MyCreatedIssues {
  user(id: "me") {
    createdIssues(first: 10) {
      nodes {
        id
        title
        createdAt
      }
    }
  }
}
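The three examples above can be checked with a few lines of arithmetic. This sketch simply applies the scoring rules (0.1 points per property, 1 point per object, connections multiply their children by the page size) and rounds up:

```python
import math

def query_complexity(object_points: float, property_points: float) -> int:
    # Round the total up to the nearest integer, guarding against float noise.
    return math.ceil(round(object_points + property_points, 9))

# WhoAmI: 1 object (the user) + 1 property (name)
print(query_complexity(1, 0.1))                    # -> 2

# MyCreatedIssues with the default page size of 50:
# 1 user + 50 issues, plus 50 x 3 properties at 0.1 points each
print(query_complexity(1 + 50 * 1, 50 * 3 * 0.1))  # -> 66

# The same query with first: 10
print(query_complexity(1 + 10 * 1, 10 * 3 * 0.1))  # -> 14
```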