Describe your environment
Celery is run using "multi" in a systemd service file, which forks the workers. I'm also using the SolarWinds OpenTelemetry wrapper, which complicates things a bit more; some of this might trace back to them, but I think the issue lies mostly with OTel and celery multi (forking). I wasn't sure whether to submit this bug report to Celery, OpenTelemetry, or SolarWinds, but it seems to touch all three.
These Celery docs might help explain what I mean by "multi":
https://docs.celeryq.dev/en/stable/reference/celery.bin.multi.html
https://docs.celeryq.dev/en/stable/userguide/daemonizing.html#usage-systemd
Steps to reproduce
Run

opentelemetry-instrument celery -A tasks multi start worker --concurrency=10

or a similar setup.

What is the expected behavior?
Traces appear as normal.
What is the actual behavior?
Only a few traces appear during initial startup, but nothing about Celery tasks, database calls, etc. I suspect the newly forked processes are unaware that they were launched with opentelemetry-instrument (i.e. opentelemetry.instrumentation.auto_instrumentation).
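One way to check that suspicion is a minimal diagnostic sketch like the one below (assuming the standard opentelemetry-api and Celery packages; the handler name is made up):

```python
# Hypothetical diagnostic: log which TracerProvider each forked worker sees.
from celery.signals import worker_process_init
from opentelemetry import trace

@worker_process_init.connect(weak=False)
def log_tracer_provider(*args, **kwargs):
    # If auto-instrumentation never took effect in this process, this usually
    # prints a ProxyTracerProvider instead of the configured SDK TracerProvider.
    print("tracer provider in worker:", type(trace.get_tracer_provider()))
```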
Additional context
Running opentelemetry-instrument celery -A tasks worker --concurrency=10 with Type=simple instead of Type=forking allows this to work. It means systemd more or less fires and forgets Celery starting up, which is an acceptable compromise workaround, but it may make it harder to detect when the service doesn't start properly. Monitoring Celery/service uptime on Linux is a bigger fish anyway.
An example .service file for reference that works around the issue is sketched below.
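(A minimal sketch of such a unit, using the command from above; the paths, user, and other fields are placeholder assumptions, not my actual setup.)

```ini
# Hypothetical example; paths, user, and app name are assumptions.
[Unit]
Description=Celery worker under opentelemetry-instrument (Type=simple workaround)
After=network.target

[Service]
Type=simple
User=celery
WorkingDirectory=/opt/app
ExecStart=/opt/app/venv/bin/opentelemetry-instrument celery -A tasks worker --concurrency=10
Restart=on-failure

[Install]
WantedBy=multi-user.target
```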
This is probably similar to the issue with Gunicorn, where processes fork and using opentelemetry-instrument as a shell wrapper won't cut it: https://opentelemetry-python.readthedocs.io/en/latest/examples/fork-process-model/README.html
I've tried the solution mentioned at https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/celery/celery.html in combination with running opentelemetry-instrument ... with no luck.
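For reference, the approach on that page looks roughly like this: re-run the instrumentor in each forked worker via Celery's worker_process_init signal (exporter/provider setup omitted here):

```python
# Roughly the pattern from the linked celery instrumentation docs:
# re-instrument inside each forked worker process.
from celery.signals import worker_process_init
from opentelemetry.instrumentation.celery import CeleryInstrumentor

@worker_process_init.connect(weak=False)
def init_celery_tracing(*args, **kwargs):
    CeleryInstrumentor().instrument()
```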