Commit 0dc16a4

Update prom rw exporter (#1359)
1 parent 26d3343 commit 0dc16a4

22 files changed: +1684 −0 lines changed

.flake8

+1

@@ -16,6 +16,7 @@ exclude =
     target
     __pycache__
     exporter/opentelemetry-exporter-jaeger/src/opentelemetry/exporter/jaeger/gen/
+    exporter/opentelemetry-exporter-prometheus-remote-write/src/opentelemetry/exporter/prometheus_remote_write/gen/
     exporter/opentelemetry-exporter-jaeger/build/*
     docs/examples/opentelemetry-example-app/src/opentelemetry_example_app/grpc/gen/
     docs/examples/opentelemetry-example-app/build/*

CHANGELOG.md

+3

@@ -31,6 +31,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
   ([#1413](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1413))
 - `opentelemetry-instrumentation-pyramid` Add support for regular expression matching and sanitization of HTTP headers.
   ([#1414](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1414))
+- Add metric exporter for Prometheus Remote Write
+  ([#1359](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1359))

 ### Fixed

@@ -62,6 +64,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Add metric instrumentation in starlette
   ([#1327](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1327))

+
 ### Fixed

 - `opentelemetry-instrumentation-boto3sqs` Make propagation compatible with other SQS instrumentations, add 'messaging.url' span attribute, and fix missing package dependencies.
exporter/opentelemetry-exporter-prometheus-remote-write/README.rst

+29

OpenTelemetry Prometheus Remote Write Exporter
==============================================

|pypi|

.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-prometheus-remote-write.svg
   :target: https://pypi.org/project/opentelemetry-exporter-prometheus-remote-write/

This package contains an exporter to send metrics from the OpenTelemetry Python SDK directly to a Prometheus Remote Write integrated backend
(such as Cortex or Thanos) without having to run an instance of the Prometheus server.


Installation
------------

::

    pip install opentelemetry-exporter-prometheus-remote-write


.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
.. _Prometheus Remote Write integrated backend: https://prometheus.io/docs/operating/integrations/

References
----------

* `OpenTelemetry Project <https://opentelemetry.io/>`_
* `Prometheus Remote Write Integration <https://prometheus.io/docs/operating/integrations/>`_
examples/Dockerfile

+11

FROM python:3.8

RUN apt-get update -y && apt-get install libsnappy-dev -y

WORKDIR /code
COPY . .

RUN pip install -e .
RUN pip install -r ./examples/requirements.txt

CMD ["python", "./examples/sampleapp.py"]
examples/README.md

+42

# Prometheus Remote Write Exporter Example

This example uses [Docker Compose](https://docs.docker.com/compose/) to set up:

1. A Python program that creates 5 instruments with 5 unique aggregators and a randomized load generator
2. An instance of [Cortex](https://cortexmetrics.io/) to receive the metrics data
3. An instance of [Grafana](https://grafana.com/) to visualize the exported data

## Requirements

* Have Docker Compose [installed](https://docs.docker.com/compose/install/)

*Users do not need to install Python, as the app runs inside a Docker container.*

## Instructions

1. Run `docker-compose up -d` in the `examples/` directory.

   The `-d` flag runs all services in detached mode and frees up your terminal session, so no logs are shown. You can follow a service's logs manually with `docker logs ${CONTAINER_ID} --follow`.

2. Log into the Grafana instance at [http://localhost:3000](http://localhost:3000).
   * The login credentials are `username: admin` and `password: admin`.
   * There may be an additional screen for setting a new password; this step is optional and can be skipped.

3. Navigate to the `Data Sources` page.
   * Look for the gear icon on the left sidebar and select `Data Sources`.

4. Add a new Prometheus data source.
   * Use `http://cortex:9009/api/prom` as the URL.
   * (OPTIONAL) Set the scrape interval to `2s` to make updates appear quickly.
   * Click `Save & Test`.

5. Go to `Metrics Explore` to query metrics.
   * Look for the compass icon on the left sidebar.
   * Click `Metrics` for a dropdown list of all available metrics.
   * (OPTIONAL) Adjust the time range by clicking the `Last 6 hours` button on the upper right side of the graph.
   * (OPTIONAL) Set up auto-refresh by selecting an option under the dropdown next to the refresh button on the upper right side of the graph.
   * Click the refresh button; data should show up on the graph.

6. Shut down the services when finished.
   * Run `docker-compose down` in the `examples/` directory.
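Once `docker-compose up -d` returns, you can confirm the published ports are answering before opening the browser. This helper is illustrative and not part of the example; the ports come from the compose file, while the health paths (`/api/health` for Grafana, `/ready` for Cortex) are assumptions about those services:

```python
import urllib.error
import urllib.request


def service_up(url, timeout=2.0):
    """Return True if the endpoint answers with a non-5xx HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    # Ports published in examples/docker-compose.yml
    print("grafana:", service_up("http://localhost:3000/api/health"))
    print("cortex:", service_up("http://localhost:9009/ready"))
```

With the stack down, both checks simply return `False` instead of raising.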
examples/cortex-config.yml

+101

# This Cortex Config is copied from the Cortex Project documentation
# Source: https://github.com/cortexproject/cortex/blob/master/docs/configuration/single-process-config.yaml

# Configuration for running Cortex in single-process mode.
# This configuration should not be used in production.
# It is only for getting started and development.

# Disable the requirement that every request to Cortex has a
# X-Scope-OrgID header. `fake` will be substituted in instead.
# pylint: skip-file
auth_enabled: false

server:
  http_listen_port: 9009

  # Configure the server to allow messages up to 100MB.
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600
  grpc_server_max_concurrent_streams: 1000

distributor:
  shard_by_all_labels: true
  pool:
    health_check_ingesters: true

ingester_client:
  grpc_client_config:
    # Configure the client to allow messages up to 100MB.
    max_recv_msg_size: 104857600
    max_send_msg_size: 104857600
    use_gzip_compression: true

ingester:
  # We want our ingesters to flush chunks at the same time to optimise
  # deduplication opportunities.
  spread_flushes: true
  chunk_age_jitter: 0

  walconfig:
    wal_enabled: true
    recover_from_wal: true
    wal_dir: /tmp/cortex/wal

  lifecycler:
    # The address to advertise for this ingester. Will be autodiscovered by
    # looking up address on eth0 or en0; can be specified if this fails.
    # address: 127.0.0.1

    # We want to start immediately and flush on shutdown.
    join_after: 0
    min_ready_duration: 0s
    final_sleep: 0s
    num_tokens: 512
    tokens_file_path: /tmp/cortex/wal/tokens

    # Use an in memory ring store, so we don't need to launch a Consul.
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1

# Use local storage - BoltDB for the index, and the filesystem
# for the chunks.
schema:
  configs:
    - from: 2019-07-29
      store: boltdb
      object_store: filesystem
      schema: v10
      index:
        prefix: index_
        period: 1w

storage:
  boltdb:
    directory: /tmp/cortex/index

  filesystem:
    directory: /tmp/cortex/chunks

  delete_store:
    store: boltdb

purger:
  object_store_type: filesystem

frontend_worker:
  # Configure the frontend worker in the querier to match worker count
  # to max_concurrent on the queriers.
  match_max_concurrent: true

# Configure the ruler to scan the /tmp/cortex/rules directory for prometheus
# rules: https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules
ruler:
  enable_api: true
  enable_sharding: false
  storage:
    type: local
    local:
      directory: /tmp/cortex/rules
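The `104857600` values in the server and ingester-client sections above are byte counts; the adjacent "100MB" comments mean 100 MiB. A quick stdlib check of the conversion:

```python
# gRPC message-size limit from the Cortex config above, in bytes.
GRPC_MAX_MSG_SIZE = 104857600


def to_mib(n_bytes):
    """Convert a byte count to mebibytes (MiB)."""
    return n_bytes / 2**20


print(to_mib(GRPC_MAX_MSG_SIZE))  # 100.0
```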
examples/docker-compose.yml

+33

# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

version: "3.8"

services:
  cortex:
    image: quay.io/cortexproject/cortex:v1.5.0
    command:
      - -config.file=./config/cortex-config.yml
    volumes:
      - ./cortex-config.yml:/config/cortex-config.yml:ro
    ports:
      - 9009:9009
  grafana:
    image: grafana/grafana:latest
    ports:
      - 3000:3000
  sample_app:
    build:
      context: ../
      dockerfile: ./examples/Dockerfile
examples/requirements.txt

+7

psutil
protobuf>=3.13.0
requests>=2.25.0
python-snappy>=0.5.4
opentelemetry-api
opentelemetry-sdk
opentelemetry-proto
examples/sampleapp.py

+114

# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import random
import sys
import time
from logging import INFO

import psutil

from opentelemetry import metrics
from opentelemetry.exporter.prometheus_remote_write import (
    PrometheusRemoteWriteMetricsExporter,
)
from opentelemetry.metrics import Observation
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)


testing_labels = {"environment": "testing"}

exporter = PrometheusRemoteWriteMetricsExporter(
    endpoint="http://cortex:9009/api/prom/push",
    headers={"X-Scope-Org-ID": "5"},
)
reader = PeriodicExportingMetricReader(exporter, 1000)
provider = MeterProvider(metric_readers=[reader])
metrics.set_meter_provider(provider)
meter = metrics.get_meter(__name__)


# Callback to gather cpu usage
def get_cpu_usage_callback(observer):
    for (number, percent) in enumerate(psutil.cpu_percent(percpu=True)):
        labels = {"cpu_number": str(number)}
        yield Observation(percent, labels)


# Callback to gather RAM usage
def get_ram_usage_callback(observer):
    ram_percent = psutil.virtual_memory().percent
    yield Observation(ram_percent, {})


requests_counter = meter.create_counter(
    name="requests",
    description="number of requests",
    unit="1",
)

request_min_max = meter.create_counter(
    name="requests_min_max",
    description="min max sum count of requests",
    unit="1",
)

request_last_value = meter.create_counter(
    name="requests_last_value",
    description="last value number of requests",
    unit="1",
)

requests_active = meter.create_up_down_counter(
    name="requests_active",
    description="number of active requests",
    unit="1",
)

meter.create_observable_counter(
    callbacks=[get_ram_usage_callback],
    name="ram_usage",
    description="ram usage",
    unit="1",
)

meter.create_observable_up_down_counter(
    callbacks=[get_cpu_usage_callback],
    name="cpu_percent",
    description="per-cpu usage",
    unit="1",
)

request_latency = meter.create_histogram("request_latency")

# Load generator
num = random.randint(0, 1000)
while True:
    # counters
    requests_counter.add(num % 131 + 200, testing_labels)
    request_min_max.add(num % 181 + 200, testing_labels)
    request_last_value.add(num % 101 + 200, testing_labels)

    # updown counter
    requests_active.add(num % 7231 + 200, testing_labels)

    request_latency.record(num % 92, testing_labels)
    logger.log(level=INFO, msg="completed metrics collection cycle")
    time.sleep(1)
    num += 9791
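The load generator above drives every instrument from a single integer `num`, taking it modulo a different constant per instrument and stepping it by 9791 each cycle, so the values vary without any extra state. The same arithmetic in isolation (keys mirror the sample app's instrument names; no SDK required):

```python
def cycle_values(num):
    """Per-cycle values recorded by the sample app's load generator."""
    return {
        "requests": num % 131 + 200,
        "requests_min_max": num % 181 + 200,
        "requests_last_value": num % 101 + 200,
        "requests_active": num % 7231 + 200,
        "request_latency": num % 92,
    }


# Deterministic walk (the sample app seeds num with random.randint(0, 1000)).
num = 0
for _ in range(3):
    print(cycle_values(num))
    num += 9791
```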
(file name not captured)

+1

opentelemetry
proto/README.md

+3

## Instructions
1. Install the protobuf tools, either via your package manager or by downloading them from [GitHub](https://github.com/protocolbuffers/protobuf/releases/tag/v21.7).
2. Run `generate-proto-py.sh` from inside the `proto/` directory.
