KeyError('sentry-monitor-start-timestamp-s') #3277

Closed
Gophersen opened this issue Jul 10, 2024 · 6 comments · Fixed by #3278

Comments

@Gophersen
Contributor

Gophersen commented Jul 10, 2024

How do you use Sentry?

Sentry Saas (sentry.io)

Version

2.8.0

Steps to Reproduce

I had a task at my company to track Celery beat tasks in Sentry. The task is done and everything works correctly, but each execution of the task produces an unusual exception in Sentry, once every minute.
The event started appearing as soon as I set monitor_beat_tasks to True.

The exception is handled, but it is still reported to Sentry on every run, which is noisy.

Interestingly, this does not happen in my local environment (on my laptop); the problem only occurs in production.
My local system:
Ubuntu 22.04
Python 3.10.12
Celery 5.3.6
sentry_sdk 2.8.0

In production we use Docker and Kubernetes with the python:3.10 image, and the package versions are the same as on my local machine.

Expected Result

This exception should not be shown in Sentry.
The task itself works correctly, and this error is flooding our Sentry project.

Actual Result

[Screenshot from 2024-07-10 22-42-40]

[Screenshot from 2024-07-10 22-43-31]

Logs of celery worker in production:

[2024-07-10 20:51:00,088: INFO/MainProcess] Task send_concierge_sale_to_assistant_offer_task[0349add3-713c-4ff0-83b8-fecb44652ea4] received
[2024-07-10 20:51:04,561: ERROR/ForkPoolWorker-4] Signal handler <function crons_task_success at 0x7f83cc319cf0> raised: KeyError('sentry-monitor-start-timestamp-s')
2024-07-10T20:51:04.562478564Z Traceback (most recent call last):
2024-07-10T20:51:04.562484644Z   File "/usr/local/lib/python3.10/site-packages/celery/utils/dispatch/signal.py", line 276, in send
2024-07-10T20:51:04.562488411Z     response = receiver(signal=self, sender=sender, **named)
2024-07-10T20:51:04.562493177Z   File "/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/celery/beat.py", line 255, in crons_task_success
2024-07-10T20:51:04.562497673Z     start_timestamp_s = float(headers["sentry-monitor-start-timestamp-s"])
2024-07-10T20:51:04.562501032Z KeyError: 'sentry-monitor-start-timestamp-s'
[2024-07-10 20:51:04,565: INFO/ForkPoolWorker-4] Task send_concierge_sale_to_assistant_offer_task[0349add3-713c-4ff0-83b8-fecb44652ea4] succeeded in 4.470669243019074s: None
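The traceback shows crons_task_success reading headers["sentry-monitor-start-timestamp-s"] with direct indexing, so a task that was queued without the monitor headers crashes the signal handler. A minimal sketch of the defensive pattern (the function name and return value are illustrative; this is not the actual fix merged in #3278):

```python
import time

def task_runtime_from_headers(headers):
    # Illustrative guard: if the sentry-monitor headers were never
    # attached when the task was queued, skip monitoring instead of
    # raising KeyError.
    start = headers.get("sentry-monitor-start-timestamp-s")
    if start is None:
        return None  # no monitor headers; nothing to report
    return time.time() - float(start)
```

With this style of guard the handler simply does nothing for tasks that were published without the monitoring metadata.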
@sl0thentr0py
Member

Your fix is OK, but I think something else is going wrong, because ideally the headers should already be set when the task is queued here
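For context, the headers in question are normally attached on the publishing side, before the worker ever sees the task. A rough sketch of that idea (pure Python; the function name and the "sentry-monitor-slug" key are assumptions for illustration, only "sentry-monitor-start-timestamp-s" is taken from the traceback, and this is not the SDK's actual implementation):

```python
import time

def with_monitor_headers(headers, monitor_slug):
    # Hypothetical publish-side step: attach the monitor metadata that
    # crons_task_success later reads from the task headers.
    headers = dict(headers or {})
    headers["sentry-monitor-start-timestamp-s"] = str(time.time())
    headers["sentry-monitor-slug"] = monitor_slug
    return headers
```

If this publish-side step is skipped or fails silently for some tasks, the worker-side handler sees headers without the key and raises the KeyError reported above.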

@szokeasaurusrex
Member

@Mohsen-Khodabakhshi, by any chance do you see any other errors in your logs before the KeyError which might help us diagnose what went wrong in the place @sl0thentr0py linked?

@Gophersen
Contributor Author

Gophersen commented Jul 11, 2024

@szokeasaurusrex These are all the logs between task receipt and task success in my Celery worker; the error happens in between:

[2024-07-11 10:30:00,034: INFO/MainProcess] Task send_concierge_sale_to_assistant_offer_task[716fb2ae-5a1c-4a1a-a7f1-4a169dcb0f52] received
[2024-07-11 10:30:04,518: ERROR/ForkPoolWorker-4] Signal handler <function crons_task_success at 0x7f2513931cf0> raised: KeyError('sentry-monitor-start-timestamp-s')
2024-07-11T10:30:04.526915825Z Traceback (most recent call last):
2024-07-11T10:30:04.526925943Z   File "/usr/local/lib/python3.10/site-packages/celery/utils/dispatch/signal.py", line 276, in send
2024-07-11T10:30:04.526932317Z     response = receiver(signal=self, sender=sender, **named)
  File "/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/celery/beat.py", line 255, in crons_task_success
2024-07-11T10:30:04.526944448Z     start_timestamp_s = float(headers["sentry-monitor-start-timestamp-s"])
KeyError: 'sentry-monitor-start-timestamp-s'
2024-07-11T10:30:04.526954480Z [2024-07-11 10:30:04,521: INFO/ForkPoolWorker-4] Task send_concierge_sale_to_assistant_offer_task[716fb2ae-5a1c-4a1a-a7f1-4a169dcb0f52] succeeded in 4.481693588662893s: None

And I initialize sentry_sdk in my Celery main file like this:

from celery import Celery
from celery.schedules import crontab
from sentry_sdk.scrubber import EventScrubber
from sentry_sdk.integrations.celery import CeleryIntegration

from core.settings import settings
import sentry_sdk

from domains.common.services.sentry import SENTRY_DENYLIST

sentry_sdk.init(
    dsn=settings.SENTRY_URL,
    enable_tracing=True,
    traces_sample_rate=1.0,
    event_scrubber=EventScrubber(denylist=SENTRY_DENYLIST),
    integrations=[CeleryIntegration(
        monitor_beat_tasks=True,
    )]
)

celery = Celery(
    __name__,
    include=["domains.concierge_sale.tasks"],
)
celery.conf.result_backend = settings.TRADE_CELERY_RESULT_BACKEND_URL
celery.conf.broker_url = settings.TRADE_CELERY_BROKER_URL
celery.conf.beat_schedule = {
    "shop-analytics-data-every-day": {
        "task": "send_concierge_sale_to_assistant_offer_task",
        "schedule": crontab(minute="*/1"),
    },
}
celery.conf.imports = ["domains.concierge_sale.tasks"]
celery.conf.result_backend_transport_options = {
    "global_keyprefix": settings.TRADE_CELERY_TASK_DEFAULT_QUEUE,
    "visibility_timeout": 172800,
}
celery.conf.celery_task_default_queue = settings.TRADE_CELERY_TASK_DEFAULT_QUEUE

@szokeasaurusrex
Member

These are all the logs between task receipt and task success in my Celery worker; the error happens in between

@Mohsen-Khodabakhshi the error message we are looking for would be in the logs for the process creating the tasks (probably the Beat scheduler logs in your case), not in the Celery worker logs.

Could you check the Beat scheduler logs and see if you can find any error message containing the following?

Internal error in sentry_sdk

@Gophersen
Contributor Author

@szokeasaurusrex There is no error/exception in our Celery beat pod:
[Screenshot from 2024-07-11 22-32-47]

@szokeasaurusrex
Member

@Mohsen-Khodabakhshi thanks for the clarification
