rate-limiting: propagate back-pressure from queue as HTTP 429's #179
Conversation
Force-pushed from 0413f83 to 0dbc4a7, then from 0dbc4a7 to 1e85985.
Four resolved (outdated) review comments on src/main/java/org/logstash/plugins/inputs/http/util/ExecutionObserver.java.
Force-pushed from 7200325 to 033f1d1.
Adds a proactive handler that rejects new requests with HTTP 429's when the queue has been blocking for more than 10 consecutive seconds, allowing back-pressure to propagate in advance of filling up the connection backlog queue.
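For readers unfamiliar with the approach, a minimal, hypothetical sketch of such a handler is below, assuming a Netty 4 pipeline (which this input plugin uses) and a `longestBlockage` supplier that reports how long the current queue write has been blocking; the names and wiring are illustrative, not the PR's actual implementation:

```java
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.ReferenceCountUtil;

import java.time.Duration;
import java.util.function.Supplier;

/**
 * Hypothetical sketch: if the queue has been blocking longer than the
 * threshold, reject new requests with 429 immediately instead of letting
 * them pile up in the connection backlog.
 */
@ChannelHandler.Sharable
class QueueBlockedRejectHandler extends ChannelInboundHandlerAdapter {
    private static final Duration THRESHOLD = Duration.ofSeconds(10);

    // Answers "how long has the current queue write been blocking?"
    // (assumed interface; the PR's ExecutionObserver plays this role)
    private final Supplier<Duration> longestBlockage;

    QueueBlockedRejectHandler(Supplier<Duration> longestBlockage) {
        this.longestBlockage = longestBlockage;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest && longestBlockage.get().compareTo(THRESHOLD) >= 0) {
            ReferenceCountUtil.release(msg); // drop the request we are rejecting
            FullHttpResponse response = new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.TOO_MANY_REQUESTS);
            response.headers().set(HttpHeaderNames.CONTENT_LENGTH, 0);
            ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
            return;
        }
        ctx.fireChannelRead(msg); // queue is healthy: continue down the pipeline
    }
}
```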
Force-pushed from 033f1d1 to 6bce60d.
Rebased and targeted to the 3.x branch (for the LS 8.x series); will need a forward-port to 4.x/main.
Overall, the implementation details look good to me (I will run it locally to test)!
Some questions I would like to clarify:
- We have a thread pool with `maxPendingRequests`. Is there a possibility that `IMessageHandler#onNewMessage` takes a long time (>10s) while there are no pending requests in the thread pool? If so, involving pending requests in the logic would also make sense to me.
- I wonder if we need to make the hard-coded 10s configurable as an expert-only config. For very intensive applications, 10s may even be too long, as I understand it.
They are two different safeguards, and they do not need to be intertwined. If the queue is full and a write to the queue is blocking, even a single request made before the block is lifted should result in a 429. The prior 429 safeguard only kicked in once there was a sufficient number of pending requests, and it will still kick in at that threshold of concurrent pending requests regardless of blockage status. This is as desired.
We can make it configurable later, if necessary. I don't see an immediate reason to make it configurable now, especially as making the option intuitive for a user to configure would be very difficult. The 10s threshold is meant to make it reactive enough in a fully-blocked state without causing superfluous 429's for intermittent blockages, especially since most connecting clients (unfortunately) don't use http's …
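To illustrate the point about the two safeguards staying independent, here is a rough sketch (class and method names are illustrative, not the plugin's actual API) in which either condition alone is enough to reject with a 429:

```java
import java.time.Duration;

// Illustrative only: the two 429 safeguards, evaluated independently.
final class RejectionPolicySketch {
    private static final Duration BLOCKAGE_THRESHOLD = Duration.ofSeconds(10);

    private final int maxPendingRequests; // existing pending-request safeguard

    RejectionPolicySketch(int maxPendingRequests) {
        this.maxPendingRequests = maxPendingRequests;
    }

    /**
     * Returns true if the request should be rejected with HTTP 429.
     * Either condition is sufficient on its own; they are not intertwined.
     */
    boolean shouldReject(int currentPendingRequests, Duration longestQueueBlockage) {
        return currentPendingRequests >= maxPendingRequests             // pool saturation
            || longestQueueBlockage.compareTo(BLOCKAGE_THRESHOLD) >= 0; // sustained queue blockage
    }
}
```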
LGTM! Thank you so much for this, it is very helpful.
…tash-plugins#179) Adds a proactive handler that rejects new requests with HTTP 429's when the queue has been blocking for more than 10 consecutive seconds, allowing back-pressure to propagate in advance of filling up the connection backlog queue.
#186) * rate-limiting: propagate back-pressure from queue as HTTP 429's (#179) Adds a proactive handler that rejects new requests with HTTP 429's when the queue has been blocking for more than 10 consecutive seconds, allowing back-pressure to propagate in advance of filling up the connection backlog queue. * add link to CHANGELOG.md
While I'd like to take the time to break out the supplied `ExecutionObserver` into a separate library, I think it is better to roll with what we have than to hold back.
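For context, a rough sketch of the shape such an observer could take (a hypothetical simplification, not the actual `ExecutionObserver` shipped in this PR):

```java
import java.time.Duration;
import java.util.concurrent.ConcurrentLinkedDeque;

/**
 * Hypothetical sketch only: an ExecutionObserver-style utility that tracks
 * in-flight executions (e.g. blocking queue writes) and answers whether any
 * of them has been running longer than a given threshold.
 */
final class ExecutionObserverSketch {
    private final ConcurrentLinkedDeque<Long> inFlightStartNanos = new ConcurrentLinkedDeque<>();

    /** Record the start of an execution; run the returned callback when it completes. */
    Runnable observeStart() {
        final Long startedAt = System.nanoTime();
        inFlightStartNanos.addLast(startedAt);
        return () -> inFlightStartNanos.remove(startedAt);
    }

    /** True if the oldest still-running execution has exceeded the threshold. */
    boolean anyExecutingLongerThan(Duration threshold) {
        final Long oldest = inFlightStartNanos.peekFirst();
        return oldest != null && System.nanoTime() - oldest >= threshold.toNanos();
    }
}
```

In the handler sketch earlier in the thread, the `longestBlockage` supplier could then be backed by wrapping each blocking queue write between `observeStart()` and its completion callback.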