I’m running a script that gathers items from the Webflow database using the API and pushes them to another system when certain changes have occurred. The script runs every 5 minutes.
We are bumping into Rate Limits in unexpected ways.
Firstly, a minor point: the error being returned is a 400, where your docs say it should be a 429.
An example log message from the script:
ERROR 400 Bad Request {"msg":"Rate limit hit","code":400,"name":"RateLimit","path":"/collections/5c5c0728d70742dc0c904573/items","err":"RateLimit: Rate limit hit"}
Secondly, my chief concern: although the script runs every 5 minutes, the rate limit is not being reset between runs.
For each run, the script makes 1 request to gather items from a collection using the GET /collections/:collection_id/items endpoint.
The X-RateLimit-Remaining header decrements by 1 every 2 runs. That is, every 10 minutes it drops from e.g. 56 remaining to 55, then 54, and so on. When it reaches 0, I get a run of 400s and then it seems to reset.
My expectation here is that between every run, the X-RateLimit-Remaining header should reset to 60. Is this correct?
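For reference, this is roughly what each run does, with logging added so the decrement pattern is visible across runs. The token is a placeholder, and `readRateLimit` is my own helper for pulling the headers out, not part of any Webflow client library:

```javascript
const API_TOKEN = "YOUR_API_TOKEN"; // placeholder, not a real token
const COLLECTION_ID = "5c5c0728d70742dc0c904573"; // the collection from the log above

// Helper: read the rate-limit pair from a fetch-style Headers object
// (or a plain object, which makes it easy to test in isolation).
function readRateLimit(headers) {
  const get = (name) =>
    typeof headers.get === "function" ? headers.get(name) : headers[name];
  return {
    limit: Number(get("x-ratelimit-limit")),
    remaining: Number(get("x-ratelimit-remaining")),
  };
}

// One run: a single GET to the items endpoint, logging the headers.
async function fetchItems() {
  const res = await fetch(
    `https://api.webflow.com/collections/${COLLECTION_ID}/items`,
    { headers: { Authorization: `Bearer ${API_TOKEN}` } }
  );
  const { limit, remaining } = readRateLimit(res.headers);
  console.log(`status=${res.status} limit=${limit} remaining=${remaining}`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```

Logging both headers on every run is what showed me the every-other-run decrement in the first place.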
I can confirm the inconsistencies; I'm running into a similar issue.
We use bottleneck to rate-limit the requests our application makes to the Webflow API when updating CMS items from an XML source. But even though I’ve gradually made the limiting more and more restrictive, we’re still hitting Rate Limit errors frequently.
The most confusing aspect is that after the initialization, which requires a maximum of 3 requests to fetch the required items, the first write operation starts with a random-looking low number of remaining requests (I’ve seen 8 and 1 in the last few days). This value is then irregularly replenished by a varying amount every couple of requests. This happens even though our application makes less than 1 request a second:
```javascript
const limiter = new Bottleneck({
  reservoir: 9, // initial value
  reservoirRefreshAmount: 9,
  reservoirRefreshInterval: 10 * 1000, // must be divisible by 250
  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 1,
  minTime: 1100
});
```
I don’t really understand what’s happening here, and it’s causing data loss from import to import. The server is also creating duplicate content, but that’s a separate issue.