Support multiprocessing metrics #179
Conversation
Pull Request Test Coverage Report for Build 947
💛 - Coveralls
Uff. Found a way to test it in a (mostly) clean way. It now fixes metrics leaking into other tests. Making the metrics local wouldn't work due to name collisions. Also, the Jaeger config is global anyway, so it needs to be reset through protected members. BTW, it would be nice to do something to avoid repeated Jaeger configuration in tests - it unnecessarily spams the log output with warnings.
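The kind of reset mentioned above could look roughly like this - a sketch that relies on jaeger-client internals (`Config._initialized` is a protected member and may change between versions):

```python
import opentracing
import pytest
from jaeger_client.config import Config

@pytest.fixture(autouse=True)
def reset_jaeger():
    yield
    # Config.initialize_tracer() is guarded by a class-level flag; flip it
    # back so the next test can configure Jaeger from scratch, and swap the
    # global tracer for the no-op default.
    Config._initialized = False
    opentracing.tracer = opentracing.Tracer()
```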
Nice, thanks! It was difficult to make those tests, even that way, so thanks for fixing them. My only question is: when using the custom registry, do we lose the metrics included by default?
PS: Btw, I'm working on migrating to opentelemetry right now. When done, it will supersede this implementation, but since I don't know how long it will take, this is still good.
Those are not supported, but not due to the registry - it's an explicit limitation of the multiprocess mode.
I'd assume that the Python GC metrics are included in …
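To illustrate the limitation (a minimal sketch, assuming `PROMETHEUS_MULTIPROC_DIR` points at a writable directory): the `python_gc_*` and `process_*` series come from collectors attached to the default `REGISTRY`, while a `MultiProcessCollector` registry only exposes what the worker processes wrote to the shared files.

```python
from prometheus_client import REGISTRY, CollectorRegistry, generate_latest
from prometheus_client.multiprocess import MultiProcessCollector

# Default registry: carries the python_gc_*, python_info and process_*
# collectors that are registered at import time.
print(generate_latest(REGISTRY).decode())

# Multiprocess registry: only metrics written by the worker processes;
# no GC or process collectors are included.
registry = CollectorRegistry()
MultiProcessCollector(registry)
print(generate_latest(registry).decode())
```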
I see. I've never used them, and it seems more important to have the metrics be multi-process, so LGTM. |
The Prometheus client does not work well in multiprocessing environments (i.e. basically all WSGI servers, including Gunicorn).
It's easy to configure it in multiprocess mode, though.
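For reference, the setup described in the prometheus_client documentation looks roughly like this; the directory path is illustrative, and the environment variable must be set before the application (and prometheus_client) starts:

```python
# Run with e.g.:  PROMETHEUS_MULTIPROC_DIR=/tmp/prom gunicorn app:app
# (older prometheus_client releases read the lowercase
# `prometheus_multiproc_dir` variable instead).
from prometheus_client import CollectorRegistry, generate_latest
from prometheus_client.multiprocess import MultiProcessCollector

def metrics_view():
    # Build a fresh registry per scrape; the collector aggregates the
    # per-process files written under PROMETHEUS_MULTIPROC_DIR.
    registry = CollectorRegistry()
    MultiProcessCollector(registry)
    return generate_latest(registry)

# In the Gunicorn config, dead workers should also be cleaned up:
#
# def child_exit(server, worker):
#     from prometheus_client import multiprocess
#     multiprocess.mark_process_dead(worker.pid)
```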
While easy to enable, it's not so easy to test. I've tried to reuse the existing metrics tests, but since metrics are initialized at module import time, I'd need to either … or reset `self._value` in each global metric (even more than that for histograms). Which one would you like more? I'd definitely prefer the second option.
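A rough sketch of what that second option would mean in practice - it pokes at prometheus_client internals, so attribute names like `_value`, `_sum`, and `_buckets` are version-dependent assumptions:

```python
# Hypothetical helper: reset module-level prometheus_client metrics between
# tests. _value, _sum and _buckets are private attributes; labelled metrics
# keep their children in the private _metrics dict and would need the same
# treatment per child.
def reset_metric(metric):
    if hasattr(metric, "_value"):        # Counter / Gauge
        metric._value.set(0)
    if hasattr(metric, "_sum"):          # Histogram: reset the running sum
        metric._sum.set(0)
    for bucket in getattr(metric, "_buckets", []):  # ...and every bucket
        bucket.set(0)
```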
Also, the existing tests are buggy - there are inter-test dependencies, so order matters. It's probably due to those global counters: the metrics tests check for output that is generated during the execution of other tests, notably the `200` response for URI `/`, which is not configured in metrics - and uses a completely different service name...
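One hedged way to make such assertions order-independent would be to compare sample values before and after the request instead of asserting absolute counts; the metric name, labels, and `client` fixture below are illustrative, not this project's real ones:

```python
from prometheus_client import REGISTRY

def sample(name, labels):
    # get_sample_value returns None until the series exists.
    return REGISTRY.get_sample_value(name, labels) or 0.0

def test_request_is_counted_once(client):      # hypothetical test client
    labels = {"code": "200", "path": "/"}      # illustrative labels
    before = sample("http_requests_total", labels)
    client.get("/")
    assert sample("http_requests_total", labels) == before + 1
```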