feat: node 22 support, latest rdkafka #3


Merged: 36 commits, merged Jan 26, 2025

Commits (36)
9ffc9c6  Revert node engine change (#1002) - GaryWilber, Jan 25, 2023
37ec22c  Update to librdkafka 2.0.2 (#996) - GaryWilber, Jan 25, 2023
76ec934  Update to librdkafka 2.1.1 (#1008) - GaryWilber, May 4, 2023
8429f0c  Stop consume loop thread when disconnecting (#1017) - GaryWilber, May 26, 2023
a8c288d  upgrade examples to es6 (#1029) - amilajack, Jul 21, 2023
e7e4c6d  Update to librdkafka 2.2.0 (#1033) - GaryWilber, Jul 21, 2023
2b7b1bd  Update to librdkafka 2.3.0 (#1047) - riley-pikus, Oct 26, 2023
0b5987a  Version 2.18.0 (#1048) - GaryWilber, Oct 26, 2023
364eb16  Drop support for end-of-life node versions (#1049) - GaryWilber, Apr 16, 2024
d20356f  docs: simplify contributing by adding instructions (#1071) - martijnimhoff, Apr 16, 2024
a0648d3  docs: add missing rebalance event to consumer events (#1070) - martijnimhoff, Apr 16, 2024
f594d11  Add setToken API for OAuthBearer authentication flow (#1075) - andrewstanovsky, Apr 18, 2024
020db59  Update to librdkafka 2.5.0 (#1086) - GaryWilber, Jul 16, 2024
eb73d5b  Allow null in consumer commitSync typings (#1082) - Tapppi, Jul 16, 2024
d336699  Export event types for use with abstract Client class (#1083) - Tapppi, Jul 16, 2024
42ea8af  Update librdkafka to 2.5.3 fixes #1093 (#1095) - ElfoLiNk, Sep 26, 2024
8a3d40c  v3.1.1 (#1096) - GaryWilber, Sep 26, 2024
fd8039e  Use macos-14 for test workflow (#1088) - GaryWilber, Nov 18, 2024
1b33604  Cooperative Rebalance (#1081) - serj026, Nov 18, 2024
db229e5  Update librdkafka to 2.6.0 - GaryWilber, Nov 18, 2024
236c876  Update librdkafka to 2.6.1 (#1107) - GaryWilber, Dec 2, 2024
a950a56  Update librdkafka version in readme (#1108) - GaryWilber, Dec 2, 2024
a0ee4af  chore: update build scripts - AVVS, Jan 25, 2025
ce40ac8  feat: upgrade to latest version - AVVS, Jan 25, 2025
0a286b9  chore: pnpm store path - AVVS, Jan 25, 2025
64b399c  chore: another pnpm store path - AVVS, Jan 25, 2025
d81c563  chore: rebuild rdkafka - AVVS, Jan 25, 2025
14985ce  fix: update dependencies, fix deprecations - AVVS, Jan 26, 2025
ca84d43  chore: store dir - AVVS, Jan 26, 2025
930ec01  chore: use basedir of store - AVVS, Jan 26, 2025
172edcc  chore: typo - AVVS, Jan 26, 2025
e3db7bb  chore: cleanup timeouts - AVVS, Jan 26, 2025
171f1f4  fix: cleanup consumer messages queue due to possible deadlock - AVVS, Jan 26, 2025
7d6ed0a  chore: cleanup printf - AVVS, Jan 26, 2025
0b944b2  chore: extra error logs if present - AVVS, Jan 26, 2025
0a87dd9  chore: extra debug info, higher timeout - AVVS, Jan 26, 2025
File renamed without changes.
1 change: 1 addition & 0 deletions .gitignore
@@ -18,3 +18,4 @@ package-lock.json
pnpm-lock.yaml
.env
*.tgz
.idea
3 changes: 0 additions & 3 deletions .husky/commit-msg
@@ -1,4 +1 @@
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

"`npm x -- mdep bin commitlint`" --edit $1
13 changes: 6 additions & 7 deletions .semaphore/semaphore.yml
@@ -8,19 +8,18 @@ agent:
global_job_config:
prologue:
commands:
- sem-version node 20
- curl -fsSL https://get.pnpm.io/install.sh | env PNPM_VERSION=8.10.2 sh -
- source /home/semaphore/.bashrc
- pnpm config set store-dir=~/.pnpm-store
- sem-version node --lts
- corepack enable
- corepack install --global [email protected]
- checkout
- git submodule init
- git submodule update
- cache restore node-$(checksum pnpm-lock.yaml)
- pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
- cache store node-$(checksum pnpm-lock.yaml) $(pnpm config get store-dir)
- cache store node-$(checksum pnpm-lock.yaml) $(pnpm store path)
env_vars:
- name: BUILD_LIBRDKAFKA
value: '0'
value: '1'

blocks:
- name: verify
@@ -45,7 +44,7 @@ blocks:
- name: pre-build & publish binaries
matrix:
- env_var: NODE_VER
values: ["18", "20", "21"]
values: ["22", "23"]
- env_var: platform
values: ["-rdkafka", "-debian-rdkafka"]
commands:
11 changes: 10 additions & 1 deletion CONTRIBUTING.md
@@ -13,6 +13,7 @@ so if you feel something is missing feel free to send a pull request.
* [Contributor Agreement](#contributor-agreement)

[How Can I Contribute?](#how-can-i-contribute)
* [Setting up the repository](#setting-up-the-repository)
* [Reporting Bugs](#reporting-bugs)
* [Suggesting Enhancements](#suggesting-enhancements)
* [Pull Requests](#pull-requests)
@@ -37,6 +38,14 @@ Not currently required.

## How can I contribute?

### Setting up the repository

To set up the library locally, do the following:

1) Clone this repository.
2) Install librdkafka with `git submodule update --init --recursive`.
3) Install the dependencies with `npm install`.

### Reporting Bugs

Please use __Github Issues__ to report bugs. When filling out an issue report,
@@ -164,7 +173,7 @@ I began using Visual Studio code to develop on `node-rdkafka`. If you use it you
],
"defines": [],
"macFrameworkPath": [
"/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks"
"/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks"
],
"compilerPath": "/usr/bin/clang",
"cStandard": "c11",
2 changes: 1 addition & 1 deletion Makefile
@@ -64,7 +64,7 @@ check: node_modules/.dirstamp
@$(NODE) util/test-compile.js

e2e: $(E2E_TESTS)
./node_modules/.bin/mocha --exit --timeout 120000 $(TEST_REPORTER) $(E2E_TESTS) $(TEST_OUTPUT)
./node_modules/.bin/mocha --exit --bail --timeout 30000 $(TEST_REPORTER) $(E2E_TESTS) $(TEST_OUTPUT)

define release
NEXT_VERSION=$(shell node -pe 'require("semver").inc("$(VERSION)", "$(1)")')
13 changes: 7 additions & 6 deletions README.md
@@ -17,7 +17,7 @@ I am looking for *your* help to make this project even better! If you're interes

The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.

__This library currently uses `librdkafka` version `2.3.0`.__
__This library currently uses `librdkafka` version `2.6.1`.__

## Reference Docs

@@ -34,7 +34,7 @@ Play nice; Play fair.
## Requirements

* Apache Kafka >=0.9
* Node.js >=4
* Node.js >=16
* Linux/Mac
* Windows?! See below
* OpenSSL
@@ -60,7 +60,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk

### Windows

The Windows build **is not** compiled from `librdkafka` source but is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.3.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
The Windows build **is not** compiled from `librdkafka` source but is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.6.1.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.

Requirements:
* [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows)
@@ -97,7 +97,7 @@ const Kafka = require('node-rdkafka');

## Configuration

You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md)
You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.6.1/CONFIGURATION.md)

Configuration keys that have the suffix `_cb` are designated as callbacks. Some
of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
@@ -132,7 +132,7 @@ You can also get the version of `librdkafka`
const Kafka = require('node-rdkafka');
console.log(Kafka.librdkafkaVersion);

// #=> 2.3.0
// #=> 2.6.1
```
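
The `_cb` opt-in described above can be exercised through the delivery-report callback. The sketch below is illustrative rather than part of this diff; the broker address and topic name are placeholder assumptions:

```js
const Kafka = require('node-rdkafka');

// Hedged sketch: setting dr_cb opts in to 'delivery-report' events
// ('localhost:9092' and 'example-topic' are placeholder values).
const producer = new Kafka.Producer({
  'metadata.broker.list': 'localhost:9092',
  'dr_cb': true
});

producer.connect();
producer.setPollInterval(100); // poll regularly so delivery reports are emitted

producer.on('ready', () => {
  producer.produce('example-topic', null, Buffer.from('hello'), null, Date.now());
});

producer.on('delivery-report', (err, report) => {
  console.log('delivered:', report.topic, report.partition, report.offset);
});
```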

## Sending Messages
@@ -145,7 +145,7 @@ const producer = new Kafka.Producer({
});
```

A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md) file described previously.
A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.6.1/CONFIGURATION.md) file described previously.

The following example illustrates a list with several `librdkafka` options set.
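
As a hedged sketch of what such a list can look like (the values are illustrative and the option names come from librdkafka's CONFIGURATION.md; this is not necessarily the exact example the README uses):

```js
const Kafka = require('node-rdkafka');

// Illustrative option values only; tune them for your workload.
const producer = new Kafka.Producer({
  'metadata.broker.list': 'kafka-host1:9092,kafka-host2:9092',
  'compression.codec': 'gzip',
  'retry.backoff.ms': 200,
  'message.send.max.retries': 10,
  'socket.keepalive.enable': true,
  'queue.buffering.max.messages': 100000,
  'queue.buffering.max.ms': 1000,
  'batch.num.messages': 1000000,
  'dr_cb': true
});
```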

Expand Down Expand Up @@ -511,6 +511,7 @@ The following table lists events for this API.
|`data` | When using the Standard API consumed messages are emitted in this event. |
|`partition.eof` | When using Standard API and the configuration option `enable.partition.eof` is set, `partition.eof` events are emitted in this event. The event contains `topic`, `partition` and `offset` properties. |
|`warning` | The event is emitted in case of `UNKNOWN_TOPIC_OR_PART` or `TOPIC_AUTHORIZATION_FAILED` errors when consuming in *Flowing mode*. Since the consumer will continue working if the error is still happening, the warning event should reappear after the next metadata refresh. To control the metadata refresh rate set `topic.metadata.refresh.interval.ms` property. Once you resolve the error, you can manually call `getMetadata` to speed up consumer recovery. |
|`rebalance` | The `rebalance` event is emitted when the consumer group is rebalanced. <br><br>This event is only emitted if the `rebalance_cb` configuration is set to a function or set to `true` |
|`disconnected` | The `disconnected` event is emitted when the broker disconnects. <br><br>This event is only emitted when `.disconnect` is called. The wrapper will always try to reconnect otherwise. |
|`ready` | The `ready` event is emitted when the `Consumer` is ready to read messages. |
|`event` | The `event` event is emitted when `librdkafka` reports an event (if you opted in via the `event_cb` option).|
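
The newly documented `rebalance` row is opted into via the `rebalance_cb` setting; a minimal sketch, assuming a local broker and a placeholder group id:

```js
const Kafka = require('node-rdkafka');

// Hedged sketch: rebalance_cb set to true makes the consumer emit
// 'rebalance' events ('example-group' and 'localhost:9092' are placeholders).
const consumer = new Kafka.KafkaConsumer({
  'group.id': 'example-group',
  'metadata.broker.list': 'localhost:9092',
  'rebalance_cb': true
}, {});

consumer.on('rebalance', (err, assignments) => {
  console.log('rebalance:', err ? err.code : 'assigned', assignments);
});

consumer.connect();
```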
4 changes: 2 additions & 2 deletions ci/build-and-publish.sh
@@ -14,8 +14,8 @@ set -ex
if [ x"$CI" = x"true" ]; then
cp ~/.env.aws-s3-credentials .env
fi
env IMAGE_TAG=${NODE_VER}${platform} UID=${UID} PNPM_STORE="$(pnpm config get store-dir)" docker-compose up -d
env UID=${UID} docker-compose exec -u ${UID} tester pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
env IMAGE_TAG=${NODE_VER}${platform} UID=${UID} PNPM_STORE="$(dirname `pnpm store path`)" docker-compose up -d
env UID=${UID} docker-compose exec -u ${UID} tester pnpm i --frozen-lockfile --offline --ignore-scripts
env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:build
env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:package
env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:test
10 changes: 5 additions & 5 deletions ci/semantic-release.sh
@@ -4,11 +4,11 @@ set -ex

if git log --oneline -n 1 | grep -v "chore(release)" > /dev/null; then
touch .env
env UID=${UID} PNPM_STORE=$(pnpm config get store-dir) docker-compose --profile tests up -d
docker-compose exec tester pnpm i
docker-compose exec tester pnpm binary:build
docker-compose exec tester pnpm test
docker-compose exec tester pnpm test:e2e
env UID=${UID} PNPM_STORE=$(dirname `pnpm store path`) docker-compose --profile tests up -d
env UID=${UID} docker-compose exec -u ${UID} tester pnpm i --frozen-lockfile --offline --ignore-scripts
env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:build
env UID=${UID} docker-compose exec -u ${UID} tester pnpm test
env UID=${UID} docker-compose exec -u ${UID} tester pnpm test:e2e
pnpm semantic-release
else
echo "skipped commit: `git log --oneline -n 1`"
53 changes: 36 additions & 17 deletions config.d.ts
@@ -1,4 +1,4 @@
// ====== Generated from librdkafka 2.3.0 file CONFIGURATION.md ======
// ====== Generated from librdkafka 2.6.1 file CONFIGURATION.md ======
// Code that generated this is a derivative work of the code from Nam Nguyen
// https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb

@@ -620,12 +620,33 @@ export interface GlobalConfig {
"client.rack"?: string;

/**
* Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. NOTE: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
* The backoff time in milliseconds before retrying a protocol request, this is the first backoff time, and will be backed off exponentially until number of retries is exhausted, and it's capped by retry.backoff.max.ms.
*
* @default 100
*/
"retry.backoff.ms"?: number;

/**
* The max backoff time in milliseconds before retrying a protocol request, this is the atmost backoff allowed for exponentially backed off requests.
*
* @default 1000
*/
"retry.backoff.max.ms"?: number;

/**
* Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. **WARNING**: `resolve_canonical_bootstrap_servers_only` must only be used with `GSSAPI` (Kerberos) as `sasl.mechanism`, as it's the only purpose of this configuration value. **NOTE**: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
*
* @default use_all_dns_ips
*/
"client.dns.lookup"?: 'use_all_dns_ips' | 'resolve_canonical_bootstrap_servers_only';

/**
* Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client
*
* @default true
*/
"enable.metrics.push"?: boolean;

/**
* Enables or disables `event.*` emitting.
*
@@ -703,20 +724,6 @@ export interface ProducerGlobalConfig extends GlobalConfig {
*/
"retries"?: number;

/**
* The backoff time in milliseconds before retrying a protocol request, this is the first backoff time, and will be backed off exponentially until number of retries is exhausted, and it's capped by retry.backoff.max.ms.
*
* @default 100
*/
"retry.backoff.ms"?: number;

/**
* The max backoff time in milliseconds before retrying a protocol request, this is the atmost backoff allowed for exponentially backed off requests.
*
* @default 1000
*/
"retry.backoff.max.ms"?: number;

/**
* The threshold of outstanding not yet transmitted broker requests needed to backpressure the producer's message accumulator. If the number of not yet transmitted requests equals or exceeds this number, produce request creation that would have otherwise been triggered (for example, in accordance with linger.ms) will be delayed. A lower number yields larger and more effective batches. A higher value can improve latency when using compression on slow machines.
*
@@ -810,12 +817,24 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
"heartbeat.interval.ms"?: number;

/**
* Group protocol type. NOTE: Currently, the only supported group protocol type is `consumer`.
* Group protocol type for the `classic` group protocol. NOTE: Currently, the only supported group protocol type is `consumer`.
*
* @default consumer
*/
"group.protocol.type"?: string;

/**
* Group protocol to use. Use `classic` for the original protocol and `consumer` for the new protocol introduced in KIP-848. Available protocols: classic or consumer. Default is `classic`, but will change to `consumer` in next releases.
*
* @default classic
*/
"group.protocol"?: 'classic' | 'consumer';

/**
* Server side assignor to use. Keep it null to make server select a suitable assignor for the group. Available assignors: uniform or range. Default is null
*/
"group.remote.assignor"?: string;

/**
* How often to query for the current client group coordinator. If the currently assigned coordinator is down the configured query interval will be divided by ten to more quickly recover in case of coordinator reassignment.
*
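
Taken together, these typings changes move `retry.backoff.ms` and `retry.backoff.max.ms` up from `ProducerGlobalConfig` to `GlobalConfig` and add the KIP-848 `group.protocol` knobs to `ConsumerGlobalConfig`. A hedged sketch of a consumer config exercising the new keys (broker address and group id are placeholders):

```js
const Kafka = require('node-rdkafka');

// Illustrative only: demonstrates keys added in this config.d.ts update.
const consumer = new Kafka.KafkaConsumer({
  'group.id': 'example-group',
  'metadata.broker.list': 'localhost:9092',
  'retry.backoff.ms': 100,       // now typed on GlobalConfig, not just producers
  'retry.backoff.max.ms': 1000,
  'group.protocol': 'classic'    // 'consumer' opts in to the KIP-848 protocol
}, {});
```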
2 changes: 1 addition & 1 deletion deps/librdkafka
11 changes: 5 additions & 6 deletions docker-compose.yml
@@ -1,5 +1,3 @@
version: '3'

services:
zookeeper:
profiles: ["tests"]
@@ -27,13 +25,13 @@ services:
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_DEFAULT_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_NUM_NETWORK_THREADS: 4
KAFKA_NUM_IO_THREADS: 4
KAFKA_BACKGROUND_THREADS: 4
KAFKA_NUM_NETWORK_THREADS: 2
KAFKA_NUM_IO_THREADS: 2
KAFKA_BACKGROUND_THREADS: 2
KAFKA_HEAP_OPTS: "-Xmx512m -Xms512m"

tester:
image: makeomatic/node:${IMAGE_TAG:-16-rdkafka-tester}
image: makeomatic/node:${IMAGE_TAG:-22-rdkafka-tester}
volumes:
- ${PWD}:/src
- ${PWD}/ci/npmrc:/usr/local/etc/npmrc:ro
@@ -43,6 +41,7 @@
env_file:
- .env
environment:
- BUILD_LIBRDKAFKA=0
- UV_THREADPOOL_SIZE=16
- CI=true
- UID=${UID:-1000}