
Commit b4bbf3d

feat: node 22 support, latest rdkafka
2 parents: 017e47c + 0a87dd9

36 files changed (+4915 −4028 lines)
File renamed without changes.

.gitignore

+1
@@ -18,3 +18,4 @@ package-lock.json
 pnpm-lock.yaml
 .env
 *.tgz
+.idea

.husky/commit-msg

-3
@@ -1,4 +1 @@
-#!/bin/sh
-. "$(dirname "$0")/_/husky.sh"
-
 "`npm x -- mdep bin commitlint`" --edit $1

.semaphore/semaphore.yml

+6-7
@@ -8,19 +8,18 @@ agent:
 global_job_config:
   prologue:
     commands:
-      - sem-version node 20
-      - curl -fsSL https://get.pnpm.io/install.sh | env PNPM_VERSION=8.10.2 sh -
-      - source /home/semaphore/.bashrc
-      - pnpm config set store-dir=~/.pnpm-store
+      - sem-version node --lts
+      - corepack enable
+      - corepack install --global [email protected]
       - checkout
       - git submodule init
       - git submodule update
       - cache restore node-$(checksum pnpm-lock.yaml)
       - pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
-      - cache store node-$(checksum pnpm-lock.yaml) $(pnpm config get store-dir)
+      - cache store node-$(checksum pnpm-lock.yaml) $(pnpm store path)
   env_vars:
     - name: BUILD_LIBRDKAFKA
-      value: '0'
+      value: '1'
 
 blocks:
   - name: verify
@@ -45,7 +44,7 @@ blocks:
   - name: pre-build & publish binaries
     matrix:
       - env_var: NODE_VER
-        values: ["18", "20", "21"]
+        values: ["22", "23"]
       - env_var: platform
         values: ["-rdkafka", "-debian-rdkafka"]
     commands:

CONTRIBUTING.md

+10-1
@@ -13,6 +13,7 @@ so if you feel something is missing feel free to send a pull request.
 * [Contributor Agreement](#contributor-agreement)
 
 [How Can I Contribute?](#how-can-i-contribute)
+  * [Setting up the repository](#setting-up-the-repository)
   * [Reporting Bugs](#reporting-bugs)
   * [Suggesting Enhancements](#suggesting-enhancements)
   * [Pull Requests](#pull-requests)
@@ -37,6 +38,14 @@ Not currently required.
 
 ## How can I contribute?
 
+### Setting up the repository
+
+To set up the library locally, do the following:
+
+1) Clone this repository.
+2) Install librdkafka with `git submodule update --init --recursive`
+3) Install the dependencies `npm install`
+
 ### Reporting Bugs
 
 Please use __Github Issues__ to report bugs. When filling out an issue report,
@@ -164,7 +173,7 @@ I began using Visual Studio code to develop on `node-rdkafka`. If you use it you
       ],
       "defines": [],
       "macFrameworkPath": [
-        "/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks"
+        "/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks"
       ],
       "compilerPath": "/usr/bin/clang",
       "cStandard": "c11",

Makefile

+1-1
@@ -64,7 +64,7 @@ check: node_modules/.dirstamp
 	@$(NODE) util/test-compile.js
 
 e2e: $(E2E_TESTS)
-	./node_modules/.bin/mocha --exit --timeout 120000 $(TEST_REPORTER) $(E2E_TESTS) $(TEST_OUTPUT)
+	./node_modules/.bin/mocha --exit --bail --timeout 30000 $(TEST_REPORTER) $(E2E_TESTS) $(TEST_OUTPUT)
 
 define release
 NEXT_VERSION=$(shell node -pe 'require("semver").inc("$(VERSION)", "$(1)")')

README.md

+7-6
@@ -17,7 +17,7 @@ I am looking for *your* help to make this project even better! If you're interes
 
 The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.
 
-__This library currently uses `librdkafka` version `2.3.0`.__
+__This library currently uses `librdkafka` version `2.6.1`.__
 
 ## Reference Docs
 
@@ -34,7 +34,7 @@ Play nice; Play fair.
 ## Requirements
 
 * Apache Kafka >=0.9
-* Node.js >=4
+* Node.js >=16
 * Linux/Mac
 * Windows?! See below
 * OpenSSL
@@ -60,7 +60,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk
 
 ### Windows
 
-The Windows build **is not** compiled from `librdkafka` source; it is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.3.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
+The Windows build **is not** compiled from `librdkafka` source; it is instead linked against the appropriate version of the [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.6.1.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL`, which defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
 
 Requirements:
 * [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows)
@@ -97,7 +97,7 @@ const Kafka = require('node-rdkafka');
 
 ## Configuration
 
-You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md)
+You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.6.1/CONFIGURATION.md)
 
 Configuration keys that have the suffix `_cb` are designated as callbacks. Some
 of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
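As a minimal sketch of the opt-in callback flow the hunk above describes — the broker address and topic name are placeholders — setting `dr_cb: true` makes the producer emit a `delivery-report` event for each produced message:

```typescript
const Kafka = require('node-rdkafka');

// Opt in to the informational delivery-report callback.
const producer = new Kafka.Producer({
  'metadata.broker.list': 'localhost:9092',
  'dr_cb': true, // emit 'delivery-report' per produced message
});

producer.on('ready', () => {
  producer.produce('topic-name', null, Buffer.from('Awesome message'));
});

producer.on('delivery-report', (err, report) => {
  // The report carries the topic, partition and offset of the delivered message.
  console.log(err || report);
});

// Poll the underlying librdkafka handle so queued events get dispatched.
producer.setPollInterval(100);
producer.connect();
```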
@@ -132,7 +132,7 @@ You can also get the version of `librdkafka`
 const Kafka = require('node-rdkafka');
 console.log(Kafka.librdkafkaVersion);
 
-// #=> 2.3.0
+// #=> 2.6.1
 ```
 
 ## Sending Messages
@@ -145,7 +145,7 @@ const producer = new Kafka.Producer({
 });
 ```
 
-A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md) file described previously.
+A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.6.1/CONFIGURATION.md) file described previously.
 
 The following example illustrates a list with several `librdkafka` options set.
 
@@ -511,6 +511,7 @@ The following table lists events for this API.
 |`data` | When using the Standard API consumed messages are emitted in this event. |
 |`partition.eof` | When using Standard API and the configuration option `enable.partition.eof` is set, `partition.eof` events are emitted in this event. The event contains `topic`, `partition` and `offset` properties. |
 |`warning` | The event is emitted in case of `UNKNOWN_TOPIC_OR_PART` or `TOPIC_AUTHORIZATION_FAILED` errors when consuming in *Flowing mode*. Since the consumer will continue working if the error is still happening, the warning event should reappear after the next metadata refresh. To control the metadata refresh rate, set the `topic.metadata.refresh.interval.ms` property. Once you resolve the error, you can manually call `getMetadata` to speed up consumer recovery. |
+|`rebalance` | The `rebalance` event is emitted when the consumer group is rebalanced. <br><br>This event is only emitted if the `rebalance_cb` configuration is set to a function or set to `true`. |
 |`disconnected` | The `disconnected` event is emitted when the broker disconnects. <br><br>This event is only emitted when `.disconnect` is called. The wrapper will always try to reconnect otherwise. |
 |`ready` | The `ready` event is emitted when the `Consumer` is ready to read messages. |
 |`event` | The `event` event is emitted when `librdkafka` reports an event (if you opted in via the `event_cb` option).|
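A minimal sketch of the newly documented `rebalance` event — broker address, group id and topic name are placeholders, and the payload shape (an error whose code distinguishes assignment from revocation, plus the assignment list) is assumed from librdkafka's rebalance-callback convention. Setting `rebalance_cb: true` keeps the default assignment handling while still emitting the event:

```typescript
const Kafka = require('node-rdkafka');

const consumer = new Kafka.KafkaConsumer({
  'metadata.broker.list': 'localhost:9092',
  'group.id': 'example-group',
  'rebalance_cb': true, // opt in to 'rebalance' events
}, {});

consumer.on('rebalance', (err, assignments) => {
  // err.code distinguishes partition assignment from revocation.
  console.log('rebalance:', err ? err.code : 'ok', assignments);
});

consumer.on('ready', () => {
  consumer.subscribe(['topic-name']);
  consumer.consume();
});

consumer.connect();
```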

ci/build-and-publish.sh

+2-2
@@ -14,8 +14,8 @@ set -ex
 if [ x"$CI" = x"true" ]; then
   cp ~/.env.aws-s3-credentials .env
 fi
-env IMAGE_TAG=${NODE_VER}${platform} UID=${UID} PNPM_STORE="$(pnpm config get store-dir)" docker-compose up -d
-env UID=${UID} docker-compose exec -u ${UID} tester pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
+env IMAGE_TAG=${NODE_VER}${platform} UID=${UID} PNPM_STORE="$(dirname `pnpm store path`)" docker-compose up -d
+env UID=${UID} docker-compose exec -u ${UID} tester pnpm i --frozen-lockfile --offline --ignore-scripts
 env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:build
 env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:package
 env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:test

ci/semantic-release.sh

+5-5
@@ -4,11 +4,11 @@ set -ex
 
 if git log --oneline -n 1 | grep -v "chore(release)" > /dev/null; then
   touch .env
-  env UID=${UID} PNPM_STORE=$(pnpm config get store-dir) docker-compose --profile tests up -d
-  docker-compose exec tester pnpm i
-  docker-compose exec tester pnpm binary:build
-  docker-compose exec tester pnpm test
-  docker-compose exec tester pnpm test:e2e
+  env UID=${UID} PNPM_STORE=$(dirname `pnpm store path`) docker-compose --profile tests up -d
+  env UID=${UID} docker-compose exec -u ${UID} tester pnpm i --frozen-lockfile --offline --ignore-scripts
+  env UID=${UID} docker-compose exec -u ${UID} tester pnpm binary:build
+  env UID=${UID} docker-compose exec -u ${UID} tester pnpm test
+  env UID=${UID} docker-compose exec -u ${UID} tester pnpm test:e2e
   pnpm semantic-release
 else
   echo "skipped commit: `git log --oneline -n 1`"

config.d.ts

+36-17
@@ -1,4 +1,4 @@
-// ====== Generated from librdkafka 2.3.0 file CONFIGURATION.md ======
+// ====== Generated from librdkafka 2.6.1 file CONFIGURATION.md ======
 // Code that generated this is a derivative work of the code from Nam Nguyen
 // https://gist.github.com/ntgn81/066c2c8ec5b4238f85d1e9168a04e3fb
 
@@ -620,12 +620,33 @@ export interface GlobalConfig {
   "client.rack"?: string;
 
   /**
-   * Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. NOTE: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
+   * The backoff time in milliseconds before retrying a protocol request, this is the first backoff time, and will be backed off exponentially until number of retries is exhausted, and it's capped by retry.backoff.max.ms.
+   *
+   * @default 100
+   */
+  "retry.backoff.ms"?: number;
+
+  /**
+   * The max backoff time in milliseconds before retrying a protocol request, this is the atmost backoff allowed for exponentially backed off requests.
+   *
+   * @default 1000
+   */
+  "retry.backoff.max.ms"?: number;
+
+  /**
+   * Controls how the client uses DNS lookups. By default, when the lookup returns multiple IP addresses for a hostname, they will all be attempted for connection before the connection is considered failed. This applies to both bootstrap and advertised servers. If the value is set to `resolve_canonical_bootstrap_servers_only`, each entry will be resolved and expanded into a list of canonical names. **WARNING**: `resolve_canonical_bootstrap_servers_only` must only be used with `GSSAPI` (Kerberos) as `sasl.mechanism`, as it's the only purpose of this configuration value. **NOTE**: Default here is different from the Java client's default behavior, which connects only to the first IP address returned for a hostname.
    *
    * @default use_all_dns_ips
    */
   "client.dns.lookup"?: 'use_all_dns_ips' | 'resolve_canonical_bootstrap_servers_only';
 
+  /**
+   * Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client
+   *
+   * @default true
+   */
+  "enable.metrics.push"?: boolean;
+
   /**
    * Enables or disables `event.*` emitting.
    *
@@ -703,20 +724,6 @@ export interface ProducerGlobalConfig extends GlobalConfig {
   */
   "retries"?: number;
 
-  /**
-   * The backoff time in milliseconds before retrying a protocol request, this is the first backoff time, and will be backed off exponentially until number of retries is exhausted, and it's capped by retry.backoff.max.ms.
-   *
-   * @default 100
-   */
-  "retry.backoff.ms"?: number;
-
-  /**
-   * The max backoff time in milliseconds before retrying a protocol request, this is the atmost backoff allowed for exponentially backed off requests.
-   *
-   * @default 1000
-   */
-  "retry.backoff.max.ms"?: number;
-
   /**
   * The threshold of outstanding not yet transmitted broker requests needed to backpressure the producer's message accumulator. If the number of not yet transmitted requests equals or exceeds this number, produce request creation that would have otherwise been triggered (for example, in accordance with linger.ms) will be delayed. A lower number yields larger and more effective batches. A higher value can improve latency when using compression on slow machines.
   *
@@ -810,12 +817,24 @@ export interface ConsumerGlobalConfig extends GlobalConfig {
   "heartbeat.interval.ms"?: number;
 
   /**
-   * Group protocol type. NOTE: Currently, the only supported group protocol type is `consumer`.
+   * Group protocol type for the `classic` group protocol. NOTE: Currently, the only supported group protocol type is `consumer`.
    *
    * @default consumer
    */
   "group.protocol.type"?: string;
 
+  /**
+   * Group protocol to use. Use `classic` for the original protocol and `consumer` for the new protocol introduced in KIP-848. Available protocols: classic or consumer. Default is `classic`, but will change to `consumer` in next releases.
+   *
+   * @default classic
+   */
+  "group.protocol"?: 'classic' | 'consumer';
+
+  /**
+   * Server side assignor to use. Keep it null to make server select a suitable assignor for the group. Available assignors: uniform or range. Default is null
+   */
+  "group.remote.assignor"?: string;
+
   /**
   * How often to query for the current client group coordinator. If the currently assigned coordinator is down the configured query interval will be divided by ten to more quickly recover in case of coordinator reassignment.
   *
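A usage sketch of the options surfaced above, typed against these interfaces — assuming `ConsumerGlobalConfig` is re-exported from the package root as the existing typings do; the broker address and group id are placeholders:

```typescript
import type { ConsumerGlobalConfig } from 'node-rdkafka';

// The retry/metrics knobs now live on GlobalConfig, so both producers and
// consumers accept them; the KIP-848 group fields are consumer-only.
const config: ConsumerGlobalConfig = {
  'metadata.broker.list': 'localhost:9092',
  'group.id': 'example-group',
  'retry.backoff.ms': 100,      // first retry backoff
  'retry.backoff.max.ms': 1000, // cap for the exponential backoff
  'enable.metrics.push': true,  // push metrics if the cluster subscribes
  'group.protocol': 'classic',  // switch to 'consumer' to try KIP-848
};
```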

deps/librdkafka

docker-compose.yml

+5-6
@@ -1,5 +1,3 @@
-version: '3'
-
 services:
   zookeeper:
     profiles: ["tests"]
@@ -27,13 +25,13 @@ services:
       KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
       KAFKA_DEFAULT_REPLICATION_FACTOR: 1
       KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
-      KAFKA_NUM_NETWORK_THREADS: 4
-      KAFKA_NUM_IO_THREADS: 4
-      KAFKA_BACKGROUND_THREADS: 4
+      KAFKA_NUM_NETWORK_THREADS: 2
+      KAFKA_NUM_IO_THREADS: 2
+      KAFKA_BACKGROUND_THREADS: 2
       KAFKA_HEAP_OPTS: "-Xmx512m -Xms512m"
 
   tester:
-    image: makeomatic/node:${IMAGE_TAG:-16-rdkafka-tester}
+    image: makeomatic/node:${IMAGE_TAG:-22-rdkafka-tester}
     volumes:
       - ${PWD}:/src
       - ${PWD}/ci/npmrc:/usr/local/etc/npmrc:ro
@@ -43,6 +41,7 @@ services:
     env_file:
      - .env
     environment:
+      - BUILD_LIBRDKAFKA=0
       - UV_THREADPOOL_SIZE=16
       - CI=true
       - UID=${UID:-1000}
