
Commit 0b1c833

Achille, jnjackins, Steve van Loben Sels, Artur, and neilcook authored
0.4 (#438)
* add protocol package
* add some documentation
* fix
* make ByteSequence more generic + add more benchmarks
* WIP: add support for record batches
* finish support for record batches
* add support for record set compression
* backward-compatible compression codec imports
* fix compress tests
* make it possible for the transport to connect to multiple clusters + enhance kafka.Client to expose methods for creating and deleting topics
* support responding to metadata requests with cached response
* manage proper shutdown of client transport in tests
* WIP: test Produce API
* WIP: massive cleanup + track down CRC32 validation issue
* functional Produce and Fetch implementations
* add metadata request/response
* add listoffsets API
* expose cluster id and controller in metadata response
* remove bufio.Writer from the protocol API
* remove bufio.Reader from the protocol API
* add back deprecated Client methods
* fixes for kafka 0.10
* cleanup comment in protocol/record.go
* add more comments
* reduce size of bufio.Reader buffer on kafka connections
* refactor transport internals to support splitting requests and dispatching them across multiple brokers
* avoid contention on connection pool mutex in most cases
* cleanup
* add kafka.(*Client).MultiFetch API
* close records in produce request
* refactor record batch APIs to fully support streaming
* remove io.Closer from protocol.RecordBatch
* never return nil record batches
* record batch fixes
* remove unused variable
* fix reading of multiple topic partitions in produce and fetch messages
* alias compress.Compression in the kafka package
* expose compression constants in the kafka package
* expose kafka.Request and kafka.Response interfaces
* simplify the protocol.Bytes interface
* simplify error management in protocol package
* wait for topic creation to propagate + fix request dispatching in multi-broker clusters
* simplify kafka.(*Client).CreateTopics API
* improve error handling + wait for metadata propagation after topic creation
* revisit connection pool implementation to remove multiplexing
* fix panic when referencing truncated page buffer
* fix unexpected EOF errors reading kafka messages
* revisit record reader API
* fix panic type asserting nil response into *metadata.Response
* optimize allocation of broker ids in cluster metadata
* unify sync.Pool usage
* reduce memory footprint of protocol.(*RecordSet).readFromVersion2
* fix panic accessing optimized record reader with a nil headers slice
* add APIs for marshaling and unmarshaling kafka values
* [skip ci] fix README example
* investigate multi-fetch issues
* remove MultiFetch API
* simplify protocol tests
* add benchmarks for kafka.Marshal and kafka.Unmarshal
* fix crash on cluster layout changes
* add more error codes
* remove partial support for flexible message format
* downgrade metadata test from v9 to v8
* test against kafka 2.5.0
* Update offsetfetch.go (Co-authored-by: Jeremy Jackins <[email protected]>)
* Update offsetfetch.go (Co-authored-by: Jeremy Jackins <[email protected]>)
* Update offsetfetch.go (Co-authored-by: Jeremy Jackins <[email protected]>)
* fix typos
* fix more typos
* set pprof labels on transport goroutines (#458)
* change tests to run against 2.4.1 instead of 2.5.0
* support up to 2.3.1 (TestConn/nettest/PingPong fails with 2.4 and above)
* Update README.md (Co-authored-by: Steve van Loben Sels <[email protected]>)
* Update client.go (Co-authored-by: Steve van Loben Sels <[email protected]>)
* comment on why we divide the timeout by 2
* protocol.Reducer => protocol.Merger
* cleanup docker-compose.yml
* protocol.Mapper => protocol.Splitter
* propagate the caller's context to the dial function (#460)
* fix backward compatibility with kafka-go v0.3.x
* fix record offsets when fetching messages with version 1
* default record timestamps to current timestamp
* revert changes to docker-compose.yml
* fix tests
* fix tests (2)
* 0.4: kafka.Writer (#461)
* update README
* disable some parallel tests
* disable global parallelism in tests
* fix typo
* disable parallelism in sub-packages tests
* properly seed random sources + delete test topics
* cleanup build
* run all tests
* fix tests
* enable more SASL mechanisms on CI
* try to fix the CI config
* try testing the sasl package with 2.3.1 only
* inline configuration for kafka 2.3.1 in CI
* fix zookeeper hostname in CI
* cleanup CI config
* keep the kafka 0.10 configuration separate + test against more kafka versions
* fix kafka 0.11 image tag
* try caching dependencies
* support multiple broker addresses
* uncomment max attempt test
* fix typos
* guard against empty kafka.MultiAddr in kafka.Transport
* don't export new APIs for network addresses + adapt to any multi-addr implementation
* add comment about the transport caching the metadata responses
* 0.4 fix tls address panic (#478)
* 0.4: fix panic when TLS is enabled
* 0.4: fix panic when establishing TLS connections
* cleanup
* Update transport_test.go (Co-authored-by: Steve van Loben Sels <[email protected]>)
* validate that an error is returned (Co-authored-by: Steve van Loben Sels <[email protected]>)
* 0.4: fix short writes (#479)
* 0.4: modify protocol.Bytes to expose the number of remaining bytes instead of the full size of the sequence (#485) — see the first sketch after this list
* add test for pageRef.ReadByte + fix pageRef.scan
* reuse contiguousPages.scan
* fix(writer): set correct balancer (#489): sets the correct balancer as passed through in the config on the writer (Co-authored-by: Steve van Loben Sels <[email protected]>, Artur <[email protected]>)
* Fix for panic when RequiredAcks is set to RequireNone (#504): when RequiredAcks is None, the producer does not wait for a response from the broker, so the response is nil; the async wait() method was not handling this case, leading to a panic (see the second sketch after this list). Also adds a regression test for RequiredAcks == RequireNone, needed because the other Writer tests use NewWriter() to create Writers, which sets RequiredAcks to RequireAll when 0 (None) was specified.
* fix: writer test for RequiredAcks=None
* fix: writer tests for RequiredAcks=None (2)
* 0.4 broker resolver (#526)
* 0.4: kafka.BrokerResolver
* add kafka.Transport.Context
* inline network and address fields in conn type
* Fix sasl authentication on writer (#541): authenticateSASL was called before the API version was fetched, which resulted in an incorrect API version (0 instead of 1) when issuing the saslHandshakeRoundTrip request (see the third sketch after this list)
* Remove deprecated function (NewWriter) usages (#528)
* fix zstd decoder leak (#543)
* fix tests
* fix panic
* fix tests (2)
* fix tests (3)
* fix tests (4)
* move ConnWaitGroup to testing package
* fix zstd codec
* Update compress/zstd/zstd.go (Co-authored-by: Nicholas Sun <[email protected]>)
* PR feedback
* improve custom resolver support by allowing port to be overridden (#545)
* 0.4: reduce memory footprint (#547)
* Bring over flexible message changes
* Add docker-compose config for kafka 2.4.1
* Misc. cleanups
* Add protocol tests and fix issues
* Misc. fixes; run circleci on v2.4.1
* Skip conntest for v2.4.1
* Disable nettests for kafka 2.4.1
* Revert formatting changes
* Misc. fixes
* Update comments
* Make create topics test more interesting
* feat(writer): add support for writing messages to multiple topics (#561)
* Add comments on failing nettests
* Fix spacing
* Update var int sizing
* Simplify writeVarInt implementation
* Revert encoding change
* Simplify varint encoding functions and expand tests
* Also test sizeOf functions in protocol test
* chore: merge master and resolve conflicts (#570)

Co-authored-by: Jeremy Jackins <[email protected]>
Co-authored-by: Steve van Loben Sels <[email protected]>
Co-authored-by: Artur <[email protected]>
Co-authored-by: Neil Cook <[email protected]>
Co-authored-by: Ahmy Yulrizka <[email protected]>
Co-authored-by: Turfa Auliarachman <[email protected]>
Co-authored-by: Nicholas Sun <[email protected]>
Co-authored-by: Dominic Barnes <[email protected]>
Co-authored-by: Benjamin Yolken <[email protected]>
Co-authored-by: Benjamin Yolken <[email protected]>
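First sketch: a minimal illustration of the remaining-bytes semantics described in #485. The names here (Bytes, byteSeq, Remaining) are illustrative stand-ins, not kafka-go's actual internals; the point is that a streaming decoder needs to know how much is left to read, which a total-size accessor cannot tell it once consumption has started.

```go
package main

import (
	"bytes"
	"fmt"
)

// Bytes is a hypothetical stand-in for the protocol.Bytes interface after
// #485: it reports the bytes still unread rather than the total size.
type Bytes interface {
	ReadByte() (byte, error)
	Remaining() int
}

type byteSeq struct{ r *bytes.Reader }

func (b *byteSeq) ReadByte() (byte, error) { return b.r.ReadByte() }

// Remaining shrinks as the sequence is consumed, so a decoder can stop
// exactly when the sequence runs out.
func (b *byteSeq) Remaining() int { return b.r.Len() }

func main() {
	var seq Bytes = &byteSeq{r: bytes.NewReader([]byte("kafka"))}
	seq.ReadByte()
	fmt.Println(seq.Remaining()) // prints 4: five bytes total, one consumed
}
```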
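Second sketch: the nil-response guard behind the #504 fix. The produceResponse type and wait function below are hypothetical stand-ins for the writer's internals; the guard itself is the fix the commit message describes.

```go
package main

import "fmt"

// produceResponse is an illustrative stand-in, not kafka-go's actual type.
type produceResponse struct{ err error }

// wait mirrors the async completion path: with RequiredAcks == RequireNone
// the broker never replies, so the response passed here is nil by design
// and must be guarded before any field access.
func wait(res *produceResponse) error {
	if res == nil {
		return nil // no acks requested: nothing to check
	}
	return res.err
}

func main() {
	// Before the fix, this path dereferenced the nil response and panicked.
	fmt.Println(wait(nil)) // <nil>
}
```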
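Third sketch: the ordering fix in #541, under the assumption that the connection learns supported API versions through a separate round trip before authenticating. The conn type and its methods are hypothetical; only the ordering is the point.

```go
package main

import "fmt"

// conn is an illustrative stand-in for the writer's transport connection.
type conn struct {
	apiVersions map[string]int16
}

// negotiateAPIVersions stands in for the ApiVersions round trip that fills
// the version table; the SaslHandshake entry is the one that matters here.
func (c *conn) negotiateAPIVersions() {
	c.apiVersions = map[string]int16{"SaslHandshake": 1}
}

func (c *conn) saslHandshake() {
	// Before the fix, this ran first, found no negotiated versions, and
	// fell back to version 0 of the handshake.
	fmt.Println("SaslHandshake version:", c.apiVersions["SaslHandshake"])
}

func main() {
	c := &conn{}
	c.negotiateAPIVersions() // fixed order: fetch API versions first...
	c.saslHandshake()        // ...then authenticate (prints version 1)
}
```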
1 parent c66d8ca commit 0b1c833

112 files changed (+13404, -1522 lines)


.circleci/config.yml (+189, -109)
@@ -1,133 +1,213 @@
 version: 2
 jobs:
+  # The kafka 0.10 tests are maintained as a separate configuration because
+  # kafka only supported plain text SASL in this version.
   kafka-010:
-    working_directory: /go/src/github.com/segmentio/kafka-go
+    working_directory: &working_directory /go/src/github.com/segmentio/kafka-go
     environment:
       KAFKA_VERSION: "0.10.1"
     docker:
-    - image: circleci/golang
-    - image: wurstmeister/zookeeper
-      ports: ['2181:2181']
-    - image: wurstmeister/kafka:0.10.1.1
-      ports: ['9092:9092']
-      environment:
-        KAFKA_BROKER_ID: '1'
-        KAFKA_CREATE_TOPICS: 'test-writer-0:3:1,test-writer-1:3:1'
-        KAFKA_DELETE_TOPIC_ENABLE: 'true'
-        KAFKA_ADVERTISED_HOST_NAME: 'localhost'
-        KAFKA_ADVERTISED_PORT: '9092'
-        KAFKA_ZOOKEEPER_CONNECT: 'localhost:2181'
-        KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
-        KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
-        KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
-        KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN'
-        KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
-        CUSTOM_INIT_SCRIPT: |-
-          echo -e 'KafkaServer {\norg.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/kafka/config/kafka_server_jaas.conf;
-    steps:
-    - checkout
-    - setup_remote_docker: { reusable: true, docker_layer_caching: true }
-    - run: go get -v -t . ./gzip ./lz4 ./sasl ./snappy
-    - run: go test -v -race -cover -timeout 150s . ./gzip ./lz4 ./sasl ./snappy
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:0.10.1.1
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment:
+        KAFKA_BROKER_ID: '1'
+        KAFKA_CREATE_TOPICS: 'test-writer-0:3:1,test-writer-1:3:1'
+        KAFKA_DELETE_TOPIC_ENABLE: 'true'
+        KAFKA_ADVERTISED_HOST_NAME: 'localhost'
+        KAFKA_ADVERTISED_PORT: '9092'
+        KAFKA_ZOOKEEPER_CONNECT: 'localhost:2181'
+        KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
+        KAFKA_MESSAGE_MAX_BYTES: '200000000'
+        KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
+        KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
+        KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN'
+        KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
+        CUSTOM_INIT_SCRIPT: |-
+          echo -e 'KafkaServer {\norg.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/kafka/config/kafka_server_jaas.conf;
+    steps: &steps
+    - checkout
+    - restore_cache:
+        key: kafka-go-mod-{{ checksum "go.sum" }}-1
+    - run: go mod download
+    - save_cache:
+        key: kafka-go-mod-{{ checksum "go.sum" }}-1
+        paths:
+        - /go/pkg/mod
+    - run: go test -race -cover ./...
 
+  # Starting at version 0.11, the kafka features and configuration remained
+  # mostly stable, so we can use this CI job configuration as template for other
+  # versions as well.
   kafka-011:
-    working_directory: /go/src/github.com/segmentio/kafka-go
+    working_directory: *working_directory
     environment:
       KAFKA_VERSION: "0.11.0"
     docker:
-    - image: circleci/golang
-    - image: wurstmeister/zookeeper
-      ports: ['2181:2181']
-    - image: wurstmeister/kafka:2.11-0.11.0.3
-      ports: ['9092:9092','9093:9093']
-      environment:
-        KAFKA_BROKER_ID: '1'
-        KAFKA_CREATE_TOPICS: 'test-writer-0:3:1,test-writer-1:3:1'
-        KAFKA_DELETE_TOPIC_ENABLE: 'true'
-        KAFKA_ADVERTISED_HOST_NAME: 'localhost'
-        KAFKA_ADVERTISED_PORT: '9092'
-        KAFKA_ZOOKEEPER_CONNECT: 'localhost:2181'
-        KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
-        KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
-        KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
-        KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN,SCRAM-SHA-256,SCRAM-SHA-512'
-        KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
-        CUSTOM_INIT_SCRIPT: |-
-          echo -e 'KafkaServer {\norg.apache.kafka.common.security.scram.ScramLoginModule required\n username="adminscram"\n password="admin-secret";\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/kafka/config/kafka_server_jaas.conf;
-          /opt/kafka/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]' --entity-type users --entity-name adminscram
-    steps:
-    - checkout
-    - setup_remote_docker: { reusable: true, docker_layer_caching: true }
-    - run: go get -v -t . ./gzip ./lz4 ./sasl ./snappy
-    - run: go test -v -race -cover -timeout 150s . ./gzip ./lz4 ./sasl ./snappy
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.11-0.11.0.3
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: &environment
+        KAFKA_BROKER_ID: '1'
+        KAFKA_CREATE_TOPICS: 'test-writer-0:3:1,test-writer-1:3:1'
+        KAFKA_DELETE_TOPIC_ENABLE: 'true'
+        KAFKA_ADVERTISED_HOST_NAME: 'localhost'
+        KAFKA_ADVERTISED_PORT: '9092'
+        KAFKA_ZOOKEEPER_CONNECT: 'localhost:2181'
+        KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
+        KAFKA_MESSAGE_MAX_BYTES: '200000000'
+        KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
+        KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
+        KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN,SCRAM-SHA-256,SCRAM-SHA-512'
+        KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
+        CUSTOM_INIT_SCRIPT: |-
+          echo -e 'KafkaServer {\norg.apache.kafka.common.security.scram.ScramLoginModule required\n username="adminscram"\n password="admin-secret";\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/kafka/config/kafka_server_jaas.conf;
+          /opt/kafka/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]' --entity-type users --entity-name adminscram
+    steps: *steps
+
+  kafka-101:
+    working_directory: *working_directory
+    environment:
+      KAFKA_VERSION: "1.0.1"
+    docker:
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.11-1.0.1
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: *environment
+    steps: *steps
 
   kafka-111:
-    working_directory: /go/src/github.com/segmentio/kafka-go
+    working_directory: *working_directory
     environment:
       KAFKA_VERSION: "1.1.1"
     docker:
-    - image: circleci/golang
-    - image: wurstmeister/zookeeper
-      ports: ['2181:2181']
-    - image: wurstmeister/kafka:2.11-1.1.1
-      ports: ['9092:9092','9093:9093']
-      environment:
-        KAFKA_BROKER_ID: '1'
-        KAFKA_CREATE_TOPICS: 'test-writer-0:3:1,test-writer-1:3:1'
-        KAFKA_DELETE_TOPIC_ENABLE: 'true'
-        KAFKA_ADVERTISED_HOST_NAME: 'localhost'
-        KAFKA_ADVERTISED_PORT: '9092'
-        KAFKA_ZOOKEEPER_CONNECT: 'localhost:2181'
-        KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
-        KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
-        KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
-        KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN,SCRAM-SHA-256,SCRAM-SHA-512'
-        KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
-        CUSTOM_INIT_SCRIPT: |-
-          echo -e 'KafkaServer {\norg.apache.kafka.common.security.scram.ScramLoginModule required\n username="adminscram"\n password="admin-secret";\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/kafka/config/kafka_server_jaas.conf;
-          /opt/kafka/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]' --entity-type users --entity-name adminscram
-    steps:
-    - checkout
-    - setup_remote_docker: { reusable: true, docker_layer_caching: true }
-    - run: go get -v -t . ./gzip ./lz4 ./sasl ./snappy
-    - run: go test -v -race -cover -timeout 150s . ./gzip ./lz4 ./sasl ./snappy
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.11-1.1.1
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: *environment
+    steps: *steps
+
+  kafka-201:
+    working_directory: *working_directory
+    environment:
+      KAFKA_VERSION: "2.0.1"
+    docker:
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.12-2.0.1
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: *environment
+    steps: *steps
+
+  kafka-211:
+    working_directory: *working_directory
+    environment:
+      KAFKA_VERSION: "2.1.1"
+    docker:
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.12-2.1.1
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: *environment
+    steps: *steps
+
+  kafka-222:
+    working_directory: *working_directory
+    environment:
+      KAFKA_VERSION: "2.2.2"
+    docker:
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.12-2.2.2
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: *environment
+    steps: *steps
 
-  kafka-210:
-    working_directory: /go/src/github.com/segmentio/kafka-go
+  kafka-231:
+    working_directory: *working_directory
     environment:
-      KAFKA_VERSION: "2.1.0"
+      KAFKA_VERSION: "2.3.1"
+    docker:
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.12-2.3.1
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: *environment
+    steps: *steps
+
+  kafka-241:
+    working_directory: *working_directory
+    environment:
+      KAFKA_VERSION: "2.4.1"
+
+      # Need to skip nettest to avoid these kinds of errors:
+      #   --- FAIL: TestConn/nettest (17.56s)
+      #     --- FAIL: TestConn/nettest/PingPong (7.40s)
+      #       conntest.go:112: unexpected Read error: [7] Request Timed Out: the request exceeded the user-specified time limit in the request
+      #       conntest.go:118: mismatching value: got 77, want 78
+      #       conntest.go:118: mismatching value: got 78, want 79
+      #       ...
+      #
+      # TODO: Figure out why these are happening and fix them (they don't appear to be new).
+      KAFKA_SKIP_NETTEST: "1"
     docker:
-    - image: circleci/golang
-    - image: wurstmeister/zookeeper
-      ports: ['2181:2181']
-    - image: wurstmeister/kafka:2.12-2.1.0
-      ports: ['9092:9092','9093:9093']
-      environment:
-        KAFKA_BROKER_ID: '1'
-        KAFKA_CREATE_TOPICS: 'test-writer-0:3:1,test-writer-1:3:1'
-        KAFKA_DELETE_TOPIC_ENABLE: 'true'
-        KAFKA_ADVERTISED_HOST_NAME: 'localhost'
-        KAFKA_ADVERTISED_PORT: '9092'
-        KAFKA_ZOOKEEPER_CONNECT: 'localhost:2181'
-        KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
-        KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
-        KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
-        KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256,SCRAM-SHA-512,PLAIN
-        KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
-        CUSTOM_INIT_SCRIPT: |-
-          echo -e 'KafkaServer {\norg.apache.kafka.common.security.scram.ScramLoginModule required\n username="adminscram"\n password="admin-secret";\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/kafka/config/kafka_server_jaas.conf;
-          /opt/kafka/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]' --entity-type users --entity-name adminscram
-    steps:
-    - checkout
-    - setup_remote_docker: { reusable: true, docker_layer_caching: true }
-    - run: go get -v -t . ./gzip ./lz4 ./sasl ./snappy
-    - run: go test -v -race -cover -timeout 150s $(go list ./... | grep -v examples)
+    - image: circleci/golang
+    - image: wurstmeister/zookeeper
+      ports:
+      - 2181:2181
+    - image: wurstmeister/kafka:2.12-2.4.1
+      ports:
+      - 9092:9092
+      - 9093:9093
+      environment: *environment
+    steps: *steps
 
 workflows:
   version: 2
   run:
     jobs:
-    - kafka-010
-    - kafka-011
-    - kafka-111
-    - kafka-210
+    - kafka-010
+    - kafka-011
+    - kafka-101
+    - kafka-111
+    - kafka-201
+    - kafka-211
+    - kafka-222
+    - kafka-231
+    - kafka-241
