add middleware for request prioritization #33951
Conversation
Force-pushed from f3096c1 to 6c499a9
TBH, according to your test result, I do not see that it is really useful...
I think as a first step, just distinguishing between logged in and not is enough. And it looks like it can also be added for the API router.
We've been running this patch for a while, and while it takes a bit of tuning to get the right settings for a given server, it has alleviated a lot of outage concerns from waves of scrapers that don't respect robots.txt.
This was in the initial version we ran, but then users who are attempting to log in can still experience lower priority traffic. Since the major source of traffic for public code hosting is scrapers trying to download content, deprioritizing those routes mitigates that concern.
So the original intention is to fight crawlers/spiders/robots?
Yes, I mentioned this in my description
Sorry, missed that part. The "perform the following" part and the "result" part attracted my attention. If the intention is to "protect against malicious scrapers", I think the ... Or, like https://github.com/go-gitea/gitea/pull/33951/files#r2006668195 said, mark the routes with priority.
I think I have some ideas about how to handle these "route paths" clearly. Could I push some commits to your branch?
Could you send a PR to my branch, or sketch it out in another commit? That would prevent merge conflicts on my branch.
Sorry, I don't quite understand what that means exactly.
We can use the chi router's "RoutePattern" and make the code testable.
Just getting back to this now, as I was dealing with other priorities. I could incorporate the above changes, but the near-complete rewrite of my PR makes it difficult to isolate the changes you are asking for, and makes it incompatible with the production implementation we are running. This makes it difficult to test these changes against the live traffic we are seeing to ensure they still perform well.
This adds a middleware for overload protection, intended to help protect against malicious scrapers. It does this via [`codel`](https://github.com/bohde/codel), which will perform the following:

1. Limit the number of in-flight requests to some user defined max
2. When in-flight requests have reached their max, begin queuing requests, with logged in requests having priority above logged out requests
3. Once a request has been queued for too long, it has a percentage chance to be rejected based upon how overloaded the entire system is.

When a server experiences more traffic than it can handle, this has the effect of keeping latency low for logged in users, while rejecting just enough requests from logged out users to keep the service from being overloaded.

Below are benchmarks showing a system at 100% capacity and 200% capacity in a few different configurations. The 200% capacity is shown to highlight an extreme case. I used [hey](https://github.com/rakyll/hey) to simulate the bot traffic:

```
hey -z 1m -c 10 "http://localhost:3000/rowan/demo/issues?state=open&type=all&labels=&milestone=0&project=0&assignee=0&poster=0&q=fix"
```

The concurrency of 10 was chosen from experiments where my local server began to experience higher latency.

Results

| Method | Concurrency | p95 latency | Successful RPS | Requests Dropped |
|--------|-------------|-------------|----------------|------------------|
| QoS Off | 10 | 0.2960s | 44 rps | 0% |
| QoS Off | 20 | 0.5667s | 44 rps | 0% |
| QoS On | 20 | 0.4409s | 48 rps | 10% |
| QoS On 50% Logged In* | 10 | 0.3891s | 33 rps | 7% |
| QoS On 50% Logged Out* | 10 | 2.0388s | 13 rps | 6% |

Logged in users were given the additional parameter `-H "Cookie: i_like_gitea=<session>"`. Tests with `*` were run at the same time, representing a workload with mixed logged in & logged out users. Results are separated to show prioritization, and how logged in users experience a 100ms latency increase under load, compared to the 1.6 seconds logged out users see.
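For readers skimming the thread, a rough sketch of the middleware's overall shape. The `limiter` interface below is a stand-in, not the actual API of the `codel` library this PR uses; the queueing and overload-based rejection described in steps 1–3 are what `codel` provides.

```go
// Sketch only: the prioritized limiter is a hypothetical interface.
// In this PR that role is played by https://github.com/bohde/codel.
package qos

import (
	"context"
	"net/http"
)

const (
	priorityLow  = 0 // logged out traffic
	priorityHigh = 1 // logged in traffic
)

// limiter is a stand-in for a prioritized controlled-delay queue.
type limiter interface {
	// Acquire blocks until the request may run, or returns an error if the
	// request has queued too long and should be shed.
	Acquire(ctx context.Context, priority int) error
	// Release frees the in-flight slot once the request finishes.
	Release()
}

// QoS caps in-flight requests, queues the overflow with logged in users ahead
// of logged out users, and sheds queued requests once the system is overloaded.
func QoS(l limiter, isLoggedIn func(*http.Request) bool) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			priority := priorityLow
			if isLoggedIn(r) {
				priority = priorityHigh
			}
			if err := l.Acquire(r.Context(), priority); err != nil {
				http.Error(w, "service overloaded", http.StatusServiceUnavailable)
				return
			}
			defer l.Release()
			next.ServeHTTP(w, r)
		})
	}
}
```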
I applied review feedback and synthesized it into something that is easily backportable to 1.23 by avoiding APIs added in
This allows key flows such as login and explore to still be prioritized well, with only the repo-specific endpoints likely to be throttled. In my experience, this catches the vast majority of what scrapers are targeting.
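As an illustration of the route handling discussed above, here is a minimal sketch of how chi's `RoutePattern` could drive a testable priority mapping. The patterns and priority names are illustrative, not the exact routes or values used in this PR.

```go
// Sketch: derive a priority from chi's matched route pattern so the mapping is
// a pure function that can be unit tested. The patterns below are examples;
// the real values would come from Gitea's registered routes.
package qos

import (
	"net/http"

	"github.com/go-chi/chi/v5"
)

type Priority int

const (
	PriorityLow     Priority = iota // anonymous traffic to repo-specific endpoints
	PriorityDefault                 // everything else
	PriorityHigh                    // logged in users and key flows like login and explore
)

func priorityForPattern(pattern string, loggedIn bool) Priority {
	if loggedIn {
		return PriorityHigh
	}
	switch pattern {
	case "/user/login", "/explore":
		return PriorityHigh
	case "/{username}/{reponame}/*":
		return PriorityLow
	default:
		return PriorityDefault
	}
}

// priorityForRequest reads the matched pattern from the chi route context.
// The pattern is only populated once the router has matched the route, so
// this must run inside the routed handler chain, not before routing.
func priorityForRequest(r *http.Request, loggedIn bool) Priority {
	if rctx := chi.RouteContext(r.Context()); rctx != nil {
		return priorityForPattern(rctx.RoutePattern(), loggedIn)
	}
	return PriorityDefault
}
```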
A question in my mind (not a blocker): should Gitea report a 503 error to real anonymous end users when the instance is being crawled heavily (the service is over capacity)? Disclaimer: I don't run a public instance, so maybe public instance site admins (like gitea.com) could help.
Alternatively, a configuration item could be introduced to either return a 503 status code or redirect users to a custom URL.
The fundamental problem here is that if we send a redirect instead of a 503, we now generate more load when the user follows the redirect, instead of removing load. This can cause feedback loops if the newly added load also generates more redirects.
Just to validate this, I prototyped the redirect method locally and ran benchmarks using
The results are below, but the redirect method shows worse performance on p95 latency than not having QoS turned on at all, because of the added load redirects introduce. It does, however, show higher RPS (and mean latency), because rendering a login page is less intensive than the issue search.
I recommend using status code 429 (Too Many Requests) instead of 503 (Service Unavailable) for any rate-limited responses. The intent is clearer with that code, and it blames the client, not the server. For bonus points, one could also send the `Retry-After` header.
This isn't quite the right semantics. 429 Too Many Requests means "the user has sent too many requests in a given amount of time", which is not necessarily the case here. For example, a user's very first request to the server can be rejected by this middleware. This is not an issue with the client but with the server, in that it has determined it cannot service the request.
This would also apply to 503 Service Unavailable, but any value here is not likely to have a positive effect. In the case of load shedding like this, that field is useful when the clients are cooperative with the server, willingly backing off for that time period. However, crawlers are not cooperative (e.g., they do not respect robots.txt), so it is unlikely they would respect this field. Hypothetically, an operator of a public instance could implement this field and see how it affects crawlers, but the result would vary from crawler to crawler.
That's not true according to the feedback on the existing "REQUIRE_SIGNIN_VIEW=expensive" config option. All users said that the heavy CPU load is gone.
This test result doesn't reflect real user scenarios and can't be used to prove that "redirection can't be used". There are just two requests for ... In the real world, when your server load is high, more than 99% of the CPU is spent rendering the "expensive" pages; responding with a 503 or with "/user/login" makes almost no difference.
The context for that statement is quite different from mine, and it does not apply to how this algorithm works. This algorithm specifically tries to serve as much traffic as possible while maintaining latency targets. To use the numbers above again: if we have 10 RPS to a 500ms endpoint but only 2 cores, this algorithm would converge on 4 RPS of goodput while rejecting 6 RPS. Adding the redirect changes that math. Still using those same numbers, the 6 RPS that were formerly rejected now each generate another 100ms of work, at a default priority, which is higher than the low priority of the request that was initially turned into a redirect. That is another 600ms of prioritized CPU time per second, leaving only 1.4 CPU-seconds to handle expensive requests, or slightly less than 3 RPS of goodput. This causes a feedback loop as well, where the reduced goodput then causes more redirects, but it will eventually converge to a lower amount of goodput than if you had served a 503.

I do feel that this is turning into bikeshedding. We've been running this algorithm in production for a couple of months now, and I can confirm that it significantly helps deal with crawlers that do not respect our robots.txt, while still allowing crawlers that do to use our site. Suggestions like redirecting to login are not something I've tested in a production environment, so I can't recommend that approach. There are certainly other steps that can be done to address crawlers, and some of them, such as ...
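For anyone following along, a back-of-the-envelope version of that math; the costs are the hypothetical numbers from this comment, not measurements.

```go
// Back-of-the-envelope goodput math from the discussion above.
package main

import "fmt"

func main() {
	const (
		cpuBudget     = 2.0  // CPU-seconds available per wall-clock second (2 cores)
		expensiveCost = 0.5  // CPU-seconds per expensive request
		offeredRPS    = 10.0 // incoming expensive requests per second
		redirectCost  = 0.1  // CPU-seconds to render the login page after a redirect
	)

	// Shedding with a 503: serve what fits, reject the rest outright.
	goodput := cpuBudget / expensiveCost // 4 RPS served
	rejected := offeredRPS - goodput     // 6 RPS rejected

	// Redirecting instead: every rejected request comes back as a login page
	// render at default priority, eating into the expensive-request budget.
	redirectLoad := rejected * redirectCost                       // 0.6 CPU-seconds per second
	goodputRedirect := (cpuBudget - redirectLoad) / expensiveCost // 2.8 RPS

	fmt.Printf("503: %.1f RPS goodput; redirect: %.1f RPS goodput\n", goodput, goodputRedirect)
}
```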
I do not see any bikeshedding; I just suggested making the design complete and flexible. I didn't say "no" to your solution.
I agree, but it never answers the question above: why should a real anonymous end user see a 503 error when the site is being heavily crawled? My suggestion is to let the site admin decide: you could still use 503 if you like, and others could use redirection if they like.
To be clear: if you have no objection, we can merge your solution as-is, and I will complete it and make it configurable so that site admins can choose the behavior they need.
I think this solution is complete as is, and adding a redirect will have complex and unintended consequences for the algorithm that I detailed in #33951 (comment). In previous production systems I've worked on, including those that used the library this is based on, I've seen slight tweaks to behavior, such as the proposed redirect, cause feedback loops, which in turn cause worse outages than if the solution hadn't been in place. I always prefer to simplify the behavior in order to make it easy to reason about, and only adjust based upon production experience.
I do not know why you object to letting the site admin choose the behavior they want. But actually I am sure:
I haven't seen your proposed code, but I have prototyped a version of it by serving a redirect to the login page and testing the behavior in local benchmarks in #33951 (comment), and it is worse than the 503 method, because it now needs to also handle the login page request. These results are in line with my intuition of how it works in #33951 (comment). Ultimately, I can only see this introducing a regression in both performance and latency in a production environment, because it strictly adds work that the server needs to do. If I understand you, you think the added redirect will provide a nicer user experience, while the additional load of the login page will not meaningfully cause a regression. Because this algorithm only ever rejects traffic when the server is already overloaded, any load added at this point, even if it is small, makes this overloaded status worse and causes more traffic to be rejected. This is a similar, but slightly different, failure mode to a retry storm.
I'd certainly look at these, but I don't think the goals of those are necessarily relevant to this PR. My goal in this PR is to keep friction low for anonymous users and crawlers that respect our robots.txt, which is why it only drops traffic if it needs to.
If you have no objection, I will propose one.
I think I have explained this above: the test is just a test; production results speak.
That's what I suggested above: I will complete it and make it configurable so that site admins can choose the behavior they need. I think I could propose a "Proof-of-Work or rate-limit-token" solution, and you could try it in your production environment to see whether it is better.
If you're fine merging this one, I would look at a follow up.
I would, however, need to see the proposed code before I could determine whether I could test it in a production environment. Either way, I'm happy to take a look.
Just like I said: I have no objection to this. Public site admin maintainers could help, maybe: @techknowlogick @lunny
OK, I will propose one (maybe replacing this one while achieving the same or better result).
I would need to see it of course, but I don't think the goals described in #33966 (comment) are the same as this one's. I think they may be complementary, but in our case we have public repos that we want both indexed by search engines and accessible to users who are not logged in, and we want to keep friction as low as possible for those cases. I would prefer to merge this as is, and address any others in a follow up.
-> Rate-limit anonymous accesses to expensive pages when the server is overloaded #34167

I think this one covers all the cases. (PS: benchmarks and local tests aren't the crawlers' behavior, so I think production results speak, but I don't have a public instance.)
The algorithm there is clearly a derivative of the one I am proposing here, but it rejects more requests in practice since it does not enqueue requests. It also takes part of this PR and introduces it in a new one, while stripping my authorship. This is evident in details such as the default config setting, which is copy-pasted from this PR but really only makes sense in the context of this algorithm. I've done a lot of legwork on this PR, including testing, benchmarking, and refining it on production crawler traffic. I'm always open to feedback and ideas on how to improve an implementation, but I have found this process very frustrating, from the initial dismissive response, to the ongoing bikeshedding, and now the rewrite that strips my authorship.