This library traces LLM requests and logs prompt and completion messages made with the VertexAI Python API library.
If your application is already instrumented with OpenTelemetry, add this package to your requirements.
pip install opentelemetry-instrumentation-vertexai
If you don't have a VertexAI application yet, try our examples.
Check out zero-code example for a quick start.
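One common way to run the zero-code example is with the opentelemetry-instrument launcher (a sketch, assuming you install it via opentelemetry-distro; main.py stands in for your application's entry point):

```shell
pip install opentelemetry-distro opentelemetry-instrumentation-vertexai
opentelemetry-instrument python main.py
```

The launcher enables any installed instrumentation packages automatically, so no code changes are needed.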
This section describes how to set up VertexAI instrumentation if you're setting OpenTelemetry up manually. Check out the manual example for more details.
When using the instrumentor, all clients will automatically trace VertexAI chat completion operations. You can also optionally capture prompts and completions as log events.
Make sure to configure OpenTelemetry tracing, logging, and events to capture all telemetry emitted by the instrumentation.
import vertexai
from vertexai.generative_models import GenerativeModel

from opentelemetry.instrumentation.vertexai import VertexAIInstrumentor

# Enable instrumentation before creating any VertexAI clients
VertexAIInstrumentor().instrument()

vertexai.init()
model = GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content("Write a short poem on OpenTelemetry.")
Message content, such as the contents of prompts, completions, function arguments, and return values, is not captured by default. To capture message content as log events, set the environment variable OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT to true.
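If you prefer to set the variable from within your application rather than the shell, a minimal sketch (the variable name is as documented above; it must be set before the instrumented calls run):

```python
import os

# "true" enables capture of prompts and completions as log events.
# Set this before any instrumented VertexAI calls are made.
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
```

Setting it in the process environment (e.g. via your deployment configuration) works just as well and avoids hard-coding it.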
To uninstrument clients, call the uninstrument method:
from opentelemetry.instrumentation.vertexai import VertexAIInstrumentor
VertexAIInstrumentor().instrument()
# ...
# Uninstrument all clients
VertexAIInstrumentor().uninstrument()