The Steam Web API implements rate limiting: if you call it too many times too quickly, it returns an HTTP 429 Too Many Requests response. According to the terms of use, the limit is 100,000 requests per day, which is pretty generous. But if you're thinking of syncing 2,000 users every 15 minutes, that's 2,000 users × 96 sync runs per day = 192,000 requests, nearly double the limit! So you need a throttling mechanism to defer processing once you reach the limit. In most scenarios like this, a public API returns useful HTTP headers that tell you your current request count or remaining quota, but the Steam API does no such thing (it's a bit dated).
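Since there are no rate-limit headers to consult, the best a client can do on its own is react to the 429 itself. Here's a minimal sketch of that in C#; the retry policy and method names are my own illustrative assumptions, not anything Steam documents:

```csharp
// Sketch: call an API endpoint and back off with exponential delays on HTTP 429.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class SteamClient
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetWithBackoffAsync(string url, int maxRetries = 3)
    {
        for (var attempt = 0; attempt <= maxRetries; attempt++)
        {
            var response = await Http.GetAsync(url);
            if (response.StatusCode != (HttpStatusCode)429)
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }

            // No Retry-After or quota headers here, so fall back to
            // exponential backoff: 1s, 2s, 4s, ...
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }

        throw new HttpRequestException("Still rate limited after retries.");
    }
}
```

Backoff alone only smooths over short bursts, though; it doesn't stop a cluster of workers from collectively burning through the daily budget.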
There are a few ways to rate limit or throttle outgoing requests to an API like this, but most approaches don't work with clustering, meaning multiple isolated clients. Approaches like SemaphoreSlim or an in-process rate limiter don't cut it because they only work in memory, within a single process. We need a backing store to coordinate the request count across the cluster. Bottleneck is one npm package that supports this, but it can only use Redis. Since I don't use Redis (and I'm using C#, not Node.js), that wasn't an option for me. Instead I turned to RavenDB for the solution, and it's been working out well!
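The core idea is a shared counter that every client in the cluster increments before making a request. Here's a minimal sketch of how that could look with the RavenDB .NET client, using a per-day counter document and optimistic concurrency so concurrent increments from other nodes conflict instead of losing counts. The document ID scheme, the DailyQuota class, and TryTrackRequestAsync are my own illustrative names, not the author's actual implementation:

```csharp
// Sketch: a per-day quota document in RavenDB shared by all cluster nodes.
using System;
using System.Threading.Tasks;
using Raven.Client.Documents;

public class DailyQuota
{
    public int Count { get; set; }
}

public static class SteamThrottle
{
    private const int DailyLimit = 100_000;

    // Returns true if we're under budget and have recorded one more request.
    public static async Task<bool> TryTrackRequestAsync(IDocumentStore store)
    {
        using var session = store.OpenAsyncSession();
        // Optimistic concurrency makes a racing increment from another client
        // fail with a ConcurrencyException (caller retries) instead of
        // silently overwriting the count.
        session.Advanced.UseOptimisticConcurrency = true;

        var id = $"quotas/steam/{DateTime.UtcNow:yyyy-MM-dd}";
        var quota = await session.LoadAsync<DailyQuota>(id) ?? new DailyQuota();
        if (quota.Count >= DailyLimit)
            return false; // over budget: defer processing until the day rolls over

        quota.Count++;
        await session.StoreAsync(quota, id);
        await session.SaveChangesAsync();
        return true;
    }
}
```

Keying the document by UTC date gives a fresh counter each day for free, and a caller that gets false back can simply requeue the work for later instead of hammering the API.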