OpenTelemetry: fallback to default storage when not on a Vert.x thread #72

Merged

merged 1 commit into eclipse-vertx:4.x from backport-71 on Dec 12, 2023

Conversation

tsegismont
Contributor

The VertxContextStorageProvider is chosen by OpenTelemetry via the service loader mechanism.

While it is mandatory to use the Vert.x context for storing spans created on Vert.x threads, parts of the application may be implemented without Vert.x.

In that case, the storage provider fails with a NullPointerException. Instead, it is better to fall back to the default storage mechanism, which uses thread-local variables.
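For illustration, here is a minimal sketch of the fallback described above, assuming OpenTelemetry's ContextStorage SPI and Vert.x context-local storage. The class name and the context-local key are placeholders; the actual implementation in vertx-tracing may differ in naming and details.

```java
import io.opentelemetry.context.Context;
import io.opentelemetry.context.ContextStorage;
import io.opentelemetry.context.Scope;
import io.vertx.core.Vertx;

// Sketch only: the real storage class in vertx-tracing may differ.
enum VertxContextStorageSketch implements ContextStorage {
  INSTANCE;

  // Hypothetical key used to store the OTel context in the Vert.x context.
  private static final String OTEL_CONTEXT_KEY = "otel.context";

  @Override
  public Scope attach(Context toAttach) {
    io.vertx.core.Context vertxContext = Vertx.currentContext();
    if (vertxContext == null) {
      // Not on a Vert.x thread: fall back to the default
      // ThreadLocal-based storage instead of failing with an NPE.
      return ContextStorage.defaultStorage().attach(toAttach);
    }
    Object previous = vertxContext.getLocal(OTEL_CONTEXT_KEY);
    vertxContext.putLocal(OTEL_CONTEXT_KEY, toAttach);
    // Restore the previous value when the scope is closed.
    return () -> vertxContext.putLocal(OTEL_CONTEXT_KEY, previous);
  }

  @Override
  public Context current() {
    io.vertx.core.Context vertxContext = Vertx.currentContext();
    if (vertxContext == null) {
      return ContextStorage.defaultStorage().current();
    }
    return vertxContext.getLocal(OTEL_CONTEXT_KEY);
  }
}
```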

Signed-off-by: Thomas Segismont <[email protected]>
@tsegismont tsegismont requested a review from vietj December 12, 2023 10:58
@tsegismont tsegismont added this to the 4.5.1 milestone Dec 12, 2023
@tsegismont tsegismont merged commit 1bdeb88 into eclipse-vertx:4.x Dec 12, 2023
@tsegismont tsegismont deleted the backport-71 branch December 12, 2023 13:29
tsegismont added a commit to tsegismont/eventing-kafka-broker that referenced this pull request Jan 13, 2025
In Vert.x 4.5.1, there was a fix to use the OTel default context storage when the Vert.x context storage provider is not invoked on a Vert.x thread.
See eclipse-vertx/vertx-tracing#72

Since LoomKafkaProducer invokes the tracer on a virtual thread (or a worker thread), the sending task must be wrapped with the OTel current context.
Otherwise, tracing data will be lost at this point.

OTel provides an ExecutorService that does just that, so this commit refactors the producer to use an executor service instead of a queue plus manual polling.

Important: with this implementation, a virtual thread is created for each record, which differs from sending records one after the other on a single thread.
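As a hedged sketch of that refactoring (RecordSender and send are placeholder names, not the actual LoomKafkaProducer API): OTel's Context.taskWrapping(ExecutorService) returns an executor that captures the context current at submission time and makes it current inside each task.

```java
import io.opentelemetry.context.Context;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: RecordSender and send() are hypothetical names, not the
// actual LoomKafkaProducer API.
public class RecordSender implements AutoCloseable {

  // Context.taskWrapping decorates the executor so that each task runs
  // with the OTel context that was current when submit() was called.
  // newVirtualThreadPerTaskExecutor (Java 21) starts one virtual thread
  // per submitted record.
  private final ExecutorService executor =
      Context.taskWrapping(Executors.newVirtualThreadPerTaskExecutor());

  public void send(String record) {
    executor.submit(() -> {
      // Context.current() here is the captured context, so spans created
      // by the tracer are parented correctly even off the Vert.x thread.
      System.out.println("sending " + record + " in " + Context.current());
    });
  }

  @Override
  public void close() {
    executor.close(); // waits for submitted tasks to complete (Java 19+)
  }
}
```

If per-record ordering matters, the same wrapping works around Executors.newSingleThreadExecutor(), trading the per-record virtual threads for sequential sends.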
matzew pushed a commit to matzew/eventing-kafka-broker that referenced this pull request Jan 14, 2025
knative-prow bot pushed a commit to knative-extensions/eventing-kafka-broker that referenced this pull request Jan 14, 2025
* Bump quarkus BOM to get newer vert.x

Signed-off-by: Matthias Wessendorf <[email protected]>

* Use unwrapped verxInternal's tracer()

Signed-off-by: Matthias Wessendorf <[email protected]>

* Revert "Use unwrapped verxInternal's tracer()"

This reverts commit dfc8b4a.

* Fix ReceiverVerticleTracingTest#traceIsPropagated

---------

Signed-off-by: Matthias Wessendorf <[email protected]>
Co-authored-by: Thomas Segismont <[email protected]>
matzew added a commit to matzew/eventing-kafka-broker that referenced this pull request Feb 4, 2025
matzew pushed a commit to matzew/eventing-kafka-broker that referenced this pull request Apr 2, 2025
openshift-merge-bot bot pushed a commit to openshift-knative/eventing-kafka-broker that referenced this pull request Apr 8, 2025
…1643)

* Some generated update for THIRD-PARTY.txt

Signed-off-by: Matthias Wessendorf <[email protected]>

* Align more of libs from 3.8.6-SP3 release

Signed-off-by: Matthias Wessendorf <[email protected]>

* LoomKafkaProducer|Consumer let the background thread finish itself (knative-extensions#4013)

* Fix ReceiverVerticleTracingTest#traceIsPropagated

---------

Signed-off-by: Matthias Wessendorf <[email protected]>
Co-authored-by: Marek Schmidt <[email protected]>
Co-authored-by: Thomas Segismont <[email protected]>