
Upgrade elasticsearch-ruby client. #17161


Merged
merged 2 commits into elastic:main on Mar 13, 2025

Conversation

mashhurs
Contributor

@mashhurs mashhurs commented Feb 27, 2025

Release notes

[rn:skip]

What does this PR do?

Upgrades the elasticsearch-ruby client, in particular moving to the new elastic-transport ruby client (a namespace sketch follows below).
Also checks the plugins that still use the older elasticsearch-transport client.

Exhaustive CI run 🏃 https://buildkite.com/elastic/logstash-exhaustive-tests-pipeline/builds/1496
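
For illustration only, a minimal sketch of what the new client/transport namespaces look like after this upgrade (assumes elasticsearch-ruby 8.x conventions; the cloud_id/api_key values are placeholders like the ones in the test config below, and this code is not part of the PR's diff):

    require "elasticsearch"

    # With the upgraded gem the transport layer comes from elastic-transport,
    # so its classes live under Elastic::Transport (previously
    # Elasticsearch::Transport), e.g. Elastic::Transport::Transport::HTTP::Manticore.
    client = Elasticsearch::Client.new(
      cloud_id: "my-cloud-id",       # placeholder
      api_key:  "my-cloud:api-key"   # placeholder
    )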

Why is it important/What is the impact to the user?

No user impact

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have made corresponding change to the default configuration files (and/or docker env variables)
  • I have added tests that prove my fix is effective or that my feature works

Author's Checklist

How to test this PR locally

Related issues

Use cases

Screenshots

Logs

  • elasticsearch.conf config
input {
    elasticsearch {
        cloud_id => "my-cloud-id"
        api_key => "my-cloud:api-key"
    }
}

filter {
    elasticsearch {
        cloud_id => "my-cloud-id"
        api_key => "my-cloud:api-key"
        query => "type:start AND operation:%{[opid]}"
        fields => { "@timestamp" => "started"}
    }
}

output {
    stdout {
        codec => rubydebug
    }
}
  • Logs
➜  logstash git:(es-ruby-client-upgrade) ✗ bin/logstash -f config/elasticsearch.conf --enable-local-plugin-development --log.level=trace
Using system java: /Users/mashhur/.sdkman/candidates/java/current/bin/java
Sending Logstash logs to /Users/mashhur/Dev/elastic/logstash/logs which is now configured via log4j2.properties
[2025-03-03T23:26:21,145][INFO ][logstash.runner          ] Log4j configuration path used is: /Users/mashhur/Dev/elastic/logstash/config/log4j2.properties
[2025-03-03T23:26:21,147][WARN ][logstash.runner          ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2025-03-03T23:26:21,148][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"9.1.0", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.5+11-LTS on 21.0.5+11-LTS +indy +jit [arm64-darwin]"}
[2025-03-03T23:26:21,148][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2025-03-03T23:26:21,164][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000` (logstash default)
[2025-03-03T23:26:21,164][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000` (logstash default)
[2025-03-03T23:26:21,164][INFO ][org.logstash.jackson.StreamReadConstraintsUtil] Jackson default value override `logstash.jackson.stream-read-constraints.max-nesting-depth` configured to `1000` (logstash default)
[2025-03-03T23:26:21,168][DEBUG][logstash.runner          ] Setting global FieldReference escape style: none
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] -------- Logstash Settings (* means modified) ---------
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] allow_superuser: false
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] node.name: Mashhurs-MacBook-Pro.local
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] *path.config: config/input-elasticsearch.conf
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] path.data: /Users/mashhur/Dev/elastic/logstash/data
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] *config.string: null
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] config.test_and_exit: false
[2025-03-03T23:26:21,174][DEBUG][logstash.runner          ] config.reload.automatic: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] config.reload.interval: TimeValue{duration=3, timeUnit=SECONDS}
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] config.support_escapes: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] config.field_reference.escape_style: none
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] metric.collect: true
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.id: main
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.system: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.workers: 10
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.batch.size: 125
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.batch.delay: 50
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.unsafe_shutdown: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.reloadable: true
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.plugin_classloaders: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.separate_logs: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.ordered: auto
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] pipeline.ecs_compatibility: v8
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] path.plugins: []
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] *interactive: null
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] config.debug: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] *log.level: trace (default: info)
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] version: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] help: false
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] *enable-local-plugin-development: true (default: false)
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] log.format: plain
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] log.format.json.fix_duplicate_message_fields: true
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] api.enabled: true
[2025-03-03T23:26:21,175][DEBUG][logstash.runner          ] api.http.host: 127.0.0.1
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.http.port: 9600..9700
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.environment: production
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.auth.type: none
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] *api.auth.basic.username: null
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] *api.auth.basic.password: null
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.auth.basic.password_policy.mode: WARN
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.auth.basic.password_policy.length.minimum: 8
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.auth.basic.password_policy.include.upper: REQUIRED
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.auth.basic.password_policy.include.lower: REQUIRED
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.auth.basic.password_policy.include.digit: REQUIRED
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.auth.basic.password_policy.include.symbol: OPTIONAL
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.ssl.enabled: false
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] *api.ssl.keystore.path: null
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] *api.ssl.keystore.password: null
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] api.ssl.supported_protocols: []
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] queue.type: memory
[2025-03-03T23:26:21,176][DEBUG][logstash.runner          ] queue.drain: false
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] queue.page_capacity: 67108864
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] queue.max_bytes: 1073741824
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] queue.max_events: 0
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] queue.checkpoint.acks: 1024
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] queue.checkpoint.writes: 1024
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] queue.checkpoint.interval: 1000
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] queue.checkpoint.retry: true
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] dead_letter_queue.enable: false
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] dead_letter_queue.max_bytes: 1073741824
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] dead_letter_queue.flush_interval: 5000
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] dead_letter_queue.storage_policy: drop_newer
[2025-03-03T23:26:21,179][DEBUG][logstash.runner          ] *dead_letter_queue.retain.age: null
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] slowlog.threshold.warn: TimeValue{duration=-1, timeUnit=NANOSECONDS}
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] slowlog.threshold.info: TimeValue{duration=-1, timeUnit=NANOSECONDS}
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] slowlog.threshold.debug: TimeValue{duration=-1, timeUnit=NANOSECONDS}
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] slowlog.threshold.trace: TimeValue{duration=-1, timeUnit=NANOSECONDS}
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] keystore.classname: org.logstash.secret.store.backend.JavaKeyStore
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] keystore.file: /Users/mashhur/Dev/elastic/logstash/config/logstash.keystore
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] *monitoring.cluster_uuid: null
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] pipeline.buffer.type: heap
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] path.queue: /Users/mashhur/Dev/elastic/logstash/data/queue
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] path.dead_letter_queue: /Users/mashhur/Dev/elastic/logstash/data/dead_letter_queue
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] path.settings: /Users/mashhur/Dev/elastic/logstash/config
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] path.logs: /Users/mashhur/Dev/elastic/logstash/logs
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] xpack.geoip.downloader.endpoint: https://geoip.elastic.co/v1/database
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] xpack.geoip.download.endpoint: https://geoip.elastic.co/v1/database
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] xpack.geoip.downloader.poll.interval: TimeValue{duration=24, timeUnit=HOURS}
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] xpack.geoip.downloader.enabled: true
[2025-03-03T23:26:21,180][DEBUG][logstash.runner          ] xpack.management.enabled: false
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] xpack.management.logstash.poll_interval: TimeValue{duration=5, timeUnit=SECONDS}
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] xpack.management.pipeline.id: ["main"]
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] xpack.management.elasticsearch.username: logstash_system
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.password: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] xpack.management.elasticsearch.hosts: ["https://localhost:9200"]
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.cloud_id: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.cloud_auth: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.api_key: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.proxy: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.certificate_authority: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.ca_trusted_fingerprint: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.truststore.path: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.truststore.password: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.keystore.path: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.keystore.password: null
[2025-03-03T23:26:21,181][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.certificate: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.management.elasticsearch.ssl.key: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.management.elasticsearch.ssl.cipher_suites: []
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.management.elasticsearch.ssl.verification_mode: full
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.management.elasticsearch.sniffing: false
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.monitoring.allow_legacy_collection: false
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.monitoring.enabled: false
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.monitoring.collection.interval: TimeValue{duration=10, timeUnit=SECONDS}
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.monitoring.collection.timeout_interval: TimeValue{duration=10, timeUnit=MINUTES}
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.username: logstash_system
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.password: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.proxy: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.cloud_id: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.cloud_auth: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.api_key: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.certificate_authority: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.ca_trusted_fingerprint: null
[2025-03-03T23:26:21,182][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.truststore.path: null
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.truststore.password: null
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.keystore.path: null
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.keystore.password: null
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.ssl.verification_mode: full
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.certificate: null
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] *xpack.monitoring.elasticsearch.ssl.key: null
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.ssl.cipher_suites: []
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] xpack.monitoring.elasticsearch.sniffing: false
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] xpack.monitoring.collection.pipeline.details.enabled: true
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] xpack.monitoring.collection.config.enabled: true
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] monitoring.enabled: false
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] monitoring.collection.interval: TimeValue{duration=10, timeUnit=SECONDS}
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] monitoring.collection.timeout_interval: TimeValue{duration=10, timeUnit=MINUTES}
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] monitoring.elasticsearch.username: logstash_system
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] *monitoring.elasticsearch.password: null
[2025-03-03T23:26:21,184][DEBUG][logstash.runner          ] *monitoring.elasticsearch.proxy: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.cloud_id: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.cloud_auth: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.api_key: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.certificate_authority: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.ca_trusted_fingerprint: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.truststore.path: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.truststore.password: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.keystore.path: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.keystore.password: null
[2025-03-03T23:26:21,185][DEBUG][logstash.runner          ] monitoring.elasticsearch.ssl.verification_mode: full
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.certificate: null
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] *monitoring.elasticsearch.ssl.key: null
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] monitoring.elasticsearch.ssl.cipher_suites: []
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] monitoring.elasticsearch.sniffing: false
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] monitoring.collection.pipeline.details.enabled: true
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] monitoring.collection.config.enabled: true
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] node.uuid: 
[2025-03-03T23:26:21,186][DEBUG][logstash.runner          ] --------------- Logstash Settings -------------------
[2025-03-03T23:26:21,186][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because command line options are specified
[2025-03-03T23:26:21,193][DEBUG][org.logstash.health.MultiIndicator] attached indicator pipelines=>MultiIndicator{indicators={}} (res:MultiIndicator{indicators={pipelines=MultiIndicator{indicators={}}}})
[2025-03-03T23:26:21,202][DEBUG][logstash.agent           ] Initializing API WebServer {"api.http.host"=>"127.0.0.1", "api.http.port"=>9600..9700, "api.ssl.enabled"=>false, "api.auth.type"=>"none", "api.environment"=>"production"}
[2025-03-03T23:26:21,206][DEBUG][logstash.api.service     ] [api-service] start
[2025-03-03T23:26:21,217][DEBUG][logstash.agent           ] Setting up metric collection
[2025-03-03T23:26:21,218][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2025-03-03T23:26:21,218][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2025-03-03T23:26:21,226][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2025-03-03T23:26:21,247][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Young Generation"}
[2025-03-03T23:26:21,248][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Concurrent GC"}
[2025-03-03T23:26:21,248][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"G1 Old Generation"}
[2025-03-03T23:26:21,250][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2025-03-03T23:26:21,251][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2025-03-03T23:26:21,251][DEBUG][logstash.instrument.periodicpoller.flowrate] Starting {:polling_interval=>5, :polling_timeout=>120}
[2025-03-03T23:26:21,257][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(input_throughput) baseline -> FlowCapture{nanoTimestamp=535961513103791 numerator=0.0 denominator=0.002349917}
[2025-03-03T23:26:21,260][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(filter_throughput) baseline -> FlowCapture{nanoTimestamp=535961516448083 numerator=0.0 denominator=0.005857584}
[2025-03-03T23:26:21,260][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(output_throughput) baseline -> FlowCapture{nanoTimestamp=535961516762000 numerator=0.0 denominator=0.006171959}
[2025-03-03T23:26:21,262][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(queue_backpressure) baseline -> FlowCapture{nanoTimestamp=535961518063166 numerator=0.0 denominator=7.473042}
[2025-03-03T23:26:21,262][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_concurrency) baseline -> FlowCapture{nanoTimestamp=535961518269583 numerator=0.0 denominator=7.679709}
[2025-03-03T23:26:21,428][DEBUG][logstash.agent           ] Starting agent
[2025-03-03T23:26:21,429][DEBUG][logstash.agent           ] Starting API WebServer (puma)
[2025-03-03T23:26:21,431][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/Users/mashhur/Dev/elastic/logstash/config/aws-integration.conf", "/Users/mashhur/Dev/elastic/logstash/config/azure-event-hubs.conf", "/Users/mashhur/Dev/elastic/logstash/config/backpressure.conf", "/Users/mashhur/Dev/elastic/logstash/config/backpressure_test.conf", "/Users/mashhur/Dev/elastic/logstash/config/beats-and-tcp.conf", "/Users/mashhur/Dev/elastic/logstash/config/cert.pem", "/Users/mashhur/Dev/elastic/logstash/config/elastic_agent.conf", "/Users/mashhur/Dev/elastic/logstash/config/elastic_integration.conf", "/Users/mashhur/Dev/elastic/logstash/config/elastic_integration_serverless.conf", "/Users/mashhur/Dev/elastic/logstash/config/elastic_integration_simple.conf", "/Users/mashhur/Dev/elastic/logstash/config/elastic_integration_winlogbeat.conf", "/Users/mashhur/Dev/elastic/logstash/config/email.conf", "/Users/mashhur/Dev/elastic/logstash/config/encoding_test.conf", "/Users/mashhur/Dev/elastic/logstash/config/env-test.conf", "/Users/mashhur/Dev/elastic/logstash/config/env-var-test.conf", "/Users/mashhur/Dev/elastic/logstash/config/es-template", "/Users/mashhur/Dev/elastic/logstash/config/failure_injector-test.conf", "/Users/mashhur/Dev/elastic/logstash/config/file-input.conf", "/Users/mashhur/Dev/elastic/logstash/config/filter-decrypt.conf", "/Users/mashhur/Dev/elastic/logstash/config/filter-drop.conf", "/Users/mashhur/Dev/elastic/logstash/config/filter-mutate.conf", "/Users/mashhur/Dev/elastic/logstash/config/filter-ruby.conf", "/Users/mashhur/Dev/elastic/logstash/config/filter_dns.conf", "/Users/mashhur/Dev/elastic/logstash/config/filter_http.conf", "/Users/mashhur/Dev/elastic/logstash/config/filter_useragent.conf", "/Users/mashhur/Dev/elastic/logstash/config/fingerprint.conf", "/Users/mashhur/Dev/elastic/logstash/config/geoip", "/Users/mashhur/Dev/elastic/logstash/config/github.conf", "/Users/mashhur/Dev/elastic/logstash/config/grok", "/Users/mashhur/Dev/elastic/logstash/config/grpc-input.conf", "/Users/mashhur/Dev/elastic/logstash/config/hashid.conf", "/Users/mashhur/Dev/elastic/logstash/config/health_report", "/Users/mashhur/Dev/elastic/logstash/config/http-input.conf", "/Users/mashhur/Dev/elastic/logstash/config/http-output.conf", "/Users/mashhur/Dev/elastic/logstash/config/imap.conf", "/Users/mashhur/Dev/elastic/logstash/config/input-generator.config", "/Users/mashhur/Dev/elastic/logstash/config/input-heartbeat.conf", "/Users/mashhur/Dev/elastic/logstash/config/input-tcp-and-http.conf", "/Users/mashhur/Dev/elastic/logstash/config/input-tcp.conf", "/Users/mashhur/Dev/elastic/logstash/config/input_jdbc_mssql.conf", "/Users/mashhur/Dev/elastic/logstash/config/input_jdbc_mssql_non_schedule.conf", "/Users/mashhur/Dev/elastic/logstash/config/input_jdbc_mysql.conf", "/Users/mashhur/Dev/elastic/logstash/config/input_jdbc_oracle.conf", "/Users/mashhur/Dev/elastic/logstash/config/input_jdbc_oracle1.conf", "/Users/mashhur/Dev/elastic/logstash/config/input_jdbc_oracle2.conf", "/Users/mashhur/Dev/elastic/logstash/config/input_stdin.conf", "/Users/mashhur/Dev/elastic/logstash/config/inputbeats.conf", "/Users/mashhur/Dev/elastic/logstash/config/jvm.options", "/Users/mashhur/Dev/elastic/logstash/config/kafkain.conf", "/Users/mashhur/Dev/elastic/logstash/config/kafkaout.conf", "/Users/mashhur/Dev/elastic/logstash/config/kinesis.conf", "/Users/mashhur/Dev/elastic/logstash/config/legacy-template.json", 
"/Users/mashhur/Dev/elastic/logstash/config/log4j2.properties", "/Users/mashhur/Dev/elastic/logstash/config/logstash-input.conf", "/Users/mashhur/Dev/elastic/logstash/config/logstash-output-stdin.conf", "/Users/mashhur/Dev/elastic/logstash/config/logstash-output.conf", "/Users/mashhur/Dev/elastic/logstash/config/logstash-sample.conf", "/Users/mashhur/Dev/elastic/logstash/config/logstash.keystore", "/Users/mashhur/Dev/elastic/logstash/config/logstash.yml", "/Users/mashhur/Dev/elastic/logstash/config/ls-to-ls", "/Users/mashhur/Dev/elastic/logstash/config/memcached-filter.conf", "/Users/mashhur/Dev/elastic/logstash/config/mssql-jdbc-11.2.0.jre17.jar", "/Users/mashhur/Dev/elastic/logstash/config/mssql-jdbc-12.2.0.jre11.jar", "/Users/mashhur/Dev/elastic/logstash/config/mysql-connector-j-8.0.33.jar", "/Users/mashhur/Dev/elastic/logstash/config/output-csv.conf", "/Users/mashhur/Dev/elastic/logstash/config/output-fluent.conf", "/Users/mashhur/Dev/elastic/logstash/config/pipelines.yml", "/Users/mashhur/Dev/elastic/logstash/config/rackspace.conf", "/Users/mashhur/Dev/elastic/logstash/config/rag", "/Users/mashhur/Dev/elastic/logstash/config/redis.conf", "/Users/mashhur/Dev/elastic/logstash/config/s3input.conf", "/Users/mashhur/Dev/elastic/logstash/config/s3output-sdh-1296.conf", "/Users/mashhur/Dev/elastic/logstash/config/s3output.conf", "/Users/mashhur/Dev/elastic/logstash/config/salesforce.conf", "/Users/mashhur/Dev/elastic/logstash/config/sdh", "/Users/mashhur/Dev/elastic/logstash/config/simple-es-v7.conf", "/Users/mashhur/Dev/elastic/logstash/config/simple.conf", "/Users/mashhur/Dev/elastic/logstash/config/snmp_input.conf", "/Users/mashhur/Dev/elastic/logstash/config/snmp_trap.conf", "/Users/mashhur/Dev/elastic/logstash/config/startup.options", "/Users/mashhur/Dev/elastic/logstash/config/stdin.conf", "/Users/mashhur/Dev/elastic/logstash/config/udp_input.conf"]}
[2025-03-03T23:26:21,432][DEBUG][logstash.agent           ] Trying to start API WebServer {:port=>9600, :ssl_enabled=>false}
[2025-03-03T23:26:21,444][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/Users/mashhur/Dev/elastic/logstash/config/input-elasticsearch.conf"}
[2025-03-03T23:26:21,450][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2025-03-03T23:26:21,451][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[2025-03-03T23:26:21,452][DEBUG][org.logstash.health.MultiIndicator] detached indicator main<=null (res:MultiIndicator{indicators={}})
[2025-03-03T23:26:21,452][DEBUG][org.logstash.health.HealthObserver] detached pipeline indicator [main]
[2025-03-03T23:26:21,453][DEBUG][org.logstash.health.MultiIndicator] attached indicator main=>ProbeIndicator{observer=org.logstash.health.PipelineIndicator$$Lambda/0x0000007001c7f458@ed10936, probes={flow:worker_utilization=org.logstash.health.PipelineIndicator$FlowWorkerUtilizationProbe@17939c1, status=org.logstash.health.PipelineIndicator$StatusProbe@128a1e5}} (res:MultiIndicator{indicators={main=ProbeIndicator{observer=org.logstash.health.PipelineIndicator$$Lambda/0x0000007001c7f458@ed10936, probes={flow:worker_utilization=org.logstash.health.PipelineIndicator$FlowWorkerUtilizationProbe@17939c1, status=org.logstash.health.PipelineIndicator$StatusProbe@128a1e5}}}})
[2025-03-03T23:26:21,453][DEBUG][org.logstash.health.HealthObserver] attached pipeline indicator [main]
[2025-03-03T23:26:21,455][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2025-03-03T23:26:21,456][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to load or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2025-03-03T23:26:21,480][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2025-03-03T23:26:21,485][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] retrieved secret urn:logstash:secret:v1:keystore.seed
[2025-03-03T23:26:21,485][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] Using existing keystore at /Users/mashhur/Dev/elastic/logstash/config/logstash.keystore
[2025-03-03T23:26:21,622][INFO ][org.reflections.Reflections] Reflections took 55 ms to scan 1 urls, producing 149 keys and 521 values
[2025-03-03T23:26:21,631][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2025-03-03T23:26:21,631][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to load or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2025-03-03T23:26:21,641][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] retrieved secret urn:logstash:secret:v1:keystore.seed
[2025-03-03T23:26:21,641][DEBUG][org.logstash.secret.store.backend.JavaKeyStore] Using existing keystore at /Users/mashhur/Dev/elastic/logstash/config/logstash.keystore
[2025-03-03T23:26:22,461][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"elasticsearch", :type=>"input", :class=>LogStash::Inputs::Elasticsearch}
[2025-03-03T23:26:22,468][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"plain", :type=>"codec", :class=>LogStash::Codecs::Plain}
[2025-03-03T23:26:22,472][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@id = "plain_5c94b72b-df88-46d7-867f-ac9cf03ec85f"
[2025-03-03T23:26:22,472][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@enable_metric = true
[2025-03-03T23:26:22,472][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@charset = "UTF-8"
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@api_key = <password>
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@cloud_id = "initial-817-cluster:dXMtd2VzdC0yLmF3cy5mb3VuZC5pbzo0NDMkZjNkMjliMGM0NTUyNDk1NDlmN2NiMWE4NzZmMjQyZDgkMDFiNzM4NGFjNDY1NDdhYTljYjgxYjE4MGViZDMzOTI="
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@id = "edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479"
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@enable_metric = true
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@codec = <LogStash::Codecs::Plain id=>"plain_5c94b72b-df88-46d7-867f-ac9cf03ec85f", enable_metric=>true, charset=>"UTF-8">
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@add_field = {}
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@index = "logstash-*"
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@query = "{ \"sort\": [ \"_doc\" ] }"
[2025-03-03T23:26:22,476][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@response_type = "hits"
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@size = 1000
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@retries = 0
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@search_api = "auto"
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@scroll = "1m"
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@docinfo = false
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@docinfo_fields = ["_index", "_type", "_id"]
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@custom_headers = {}
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@connect_timeout_seconds = 10
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@request_timeout_seconds = 60
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@socket_timeout_seconds = 60
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@ssl_supported_protocols = []
[2025-03-03T23:26:22,477][DEBUG][logstash.inputs.elasticsearch] config LogStash::Inputs::Elasticsearch/@ssl_verification_mode = "full"
[2025-03-03T23:26:22,485][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"elasticsearch", :type=>"filter", :class=>LogStash::Filters::Elasticsearch}
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@cloud_id = "initial-817-cluster:dXMtd2VzdC0yLmF3cy5mb3VuZC5pbzo0NDMkZjNkMjliMGM0NTUyNDk1NDlmN2NiMWE4NzZmMjQyZDgkMDFiNzM4NGFjNDY1NDdhYTljYjgxYjE4MGViZDMzOTI="
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@id = "e5c9e70312257dcd89442c55f9257847fea680ccdeb3141449325e93d13a600f"
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@fields = {"@timestamp"=>"started"}
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@api_key = <password>
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@query = "type:start AND operation:%{[opid]}"
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@enable_metric = true
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@add_tag = []
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@remove_tag = []
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@add_field = {}
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@remove_field = []
[2025-03-03T23:26:22,494][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@periodic_flush = false
[2025-03-03T23:26:22,495][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@hosts = ["localhost:9200"]
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@index = ""
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@sort = "@timestamp:desc"
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@custom_headers = {}
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@docinfo_fields = {}
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@aggregation_fields = {}
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@ssl_supported_protocols = []
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@ssl_verification_mode = "full"
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@enable_sort = true
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@result_size = 1
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@tag_on_failure = ["_elasticsearch_lookup_failure"]
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@retry_on_failure = 0
[2025-03-03T23:26:22,496][DEBUG][logstash.filters.elasticsearch] config LogStash::Filters::Elasticsearch/@retry_on_status = [500, 502, 503, 504]
[2025-03-03T23:26:22,498][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"stdout", :type=>"output", :class=>LogStash::Outputs::Stdout}
[2025-03-03T23:26:22,503][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"rubydebug", :type=>"codec", :class=>LogStash::Codecs::RubyDebug}
[2025-03-03T23:26:22,508][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@id = "rubydebug_cbcceab4-76b4-4fd8-ae48-4d62d4ca80a8"
[2025-03-03T23:26:22,508][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@enable_metric = true
[2025-03-03T23:26:22,508][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@metadata = false
[2025-03-03T23:26:22,548][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@codec = <LogStash::Codecs::RubyDebug id=>"rubydebug_cbcceab4-76b4-4fd8-ae48-4d62d4ca80a8", enable_metric=>true, metadata=>false>
[2025-03-03T23:26:22,548][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@id = "b81336e386371df23c4d073056ae47dfc482053186a96c3cac332bb5c3511586"
[2025-03-03T23:26:22,549][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@enable_metric = true
[2025-03-03T23:26:22,549][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@workers = 1
[2025-03-03T23:26:22,554][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2025-03-03T23:26:22,555][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(input_throughput) baseline -> FlowCapture{nanoTimestamp=535962811478791 numerator=0.0 denominator=0.000131959}
[2025-03-03T23:26:22,556][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `input_throughput` in namespace `[:stats, :pipelines, :main, :flow]`
[2025-03-03T23:26:22,556][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(filter_throughput) baseline -> FlowCapture{nanoTimestamp=535962812182791 numerator=0.0 denominator=0.000836625}
[2025-03-03T23:26:22,556][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `filter_throughput` in namespace `[:stats, :pipelines, :main, :flow]`
[2025-03-03T23:26:22,556][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(output_throughput) baseline -> FlowCapture{nanoTimestamp=535962812405625 numerator=0.0 denominator=0.001059667}
[2025-03-03T23:26:22,556][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `output_throughput` in namespace `[:stats, :pipelines, :main, :flow]`
[2025-03-03T23:26:22,556][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(queue_backpressure) baseline -> FlowCapture{nanoTimestamp=535962812666041 numerator=0.0 denominator=1.320084}
[2025-03-03T23:26:22,557][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `queue_backpressure` in namespace `[:stats, :pipelines, :main, :flow]`
[2025-03-03T23:26:22,557][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_concurrency) baseline -> FlowCapture{nanoTimestamp=535962813139333 numerator=0.0 denominator=1.793375}
[2025-03-03T23:26:22,557][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `worker_concurrency` in namespace `[:stats, :pipelines, :main, :flow]`
[2025-03-03T23:26:22,557][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_utilization) baseline -> FlowCapture{nanoTimestamp=535962813502083 numerator=0.0 denominator=21.55584}
[2025-03-03T23:26:22,557][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `worker_utilization` in namespace `[:stats, :pipelines, :main, :flow]`
[2025-03-03T23:26:22,558][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(throughput) baseline -> FlowCapture{nanoTimestamp=535962814092291 numerator=0.0 denominator=0.002746417}
[2025-03-03T23:26:22,558][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `throughput` in namespace `[:stats, :pipelines, :main, :plugins, :inputs, :edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479, :flow]`
[2025-03-03T23:26:22,558][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_millis_per_event) baseline -> FlowCapture{nanoTimestamp=535962814633666 numerator=0.0 denominator=0.0}
[2025-03-03T23:26:22,558][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_utilization) baseline -> FlowCapture{nanoTimestamp=535962814727416 numerator=0.0 denominator=33.8125}
[2025-03-03T23:26:22,559][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `worker_millis_per_event` in namespace `[:stats, :pipelines, :main, :plugins, :filters, :e5c9e70312257dcd89442c55f9257847fea680ccdeb3141449325e93d13a600f, :flow]`
[2025-03-03T23:26:22,559][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `worker_utilization` in namespace `[:stats, :pipelines, :main, :plugins, :filters, :e5c9e70312257dcd89442c55f9257847fea680ccdeb3141449325e93d13a600f, :flow]`
[2025-03-03T23:26:22,559][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_millis_per_event) baseline -> FlowCapture{nanoTimestamp=535962815095708 numerator=0.0 denominator=0.0}
[2025-03-03T23:26:22,559][TRACE][org.logstash.instrument.metrics.BaseFlowMetric] FlowMetric(worker_utilization) baseline -> FlowCapture{nanoTimestamp=535962815182125 numerator=0.0 denominator=38.36}
[2025-03-03T23:26:22,559][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `worker_millis_per_event` in namespace `[:stats, :pipelines, :main, :plugins, :outputs, :b81336e386371df23c4d073056ae47dfc482053186a96c3cac332bb5c3511586, :flow]`
[2025-03-03T23:26:22,559][DEBUG][org.logstash.execution.AbstractPipelineExt] Flow metric registered: `worker_utilization` in namespace `[:stats, :pipelines, :main, :plugins, :outputs, :b81336e386371df23c4d073056ae47dfc482053186a96c3cac332bb5c3511586, :flow]`
[2025-03-03T23:26:22,559][DEBUG][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main"}
[2025-03-03T23:26:22,572][INFO ][logstash.filters.elasticsearch][main] New ElasticSearch filter client {:hosts=>["https://f3d29b0c455249549f7cb1a876f242d8.us-west-2.aws.found.io:443"]}
[2025-03-03T23:26:22,819][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>10, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1250, "pipeline.sources"=>["/Users/mashhur/Dev/elastic/logstash/config/input-elasticsearch.conf"], :thread=>"#<Thread:0x2e7382c5 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-03-03T23:26:23,092][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.27}
[2025-03-03T23:26:23,181][INFO ][logstash.inputs.elasticsearch][main] `search_api => auto` resolved to `search_after` {:elasticsearch=>"8.17.0"}
[2025-03-03T23:26:23,182][INFO ][logstash.inputs.elasticsearch][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2025-03-03T23:26:23,182][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2025-03-03T23:26:23,183][INFO ][logstash.inputs.elasticsearch.searchafter][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] Create point in time (PIT)
[2025-03-03T23:26:23,184][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2025-03-03T23:26:23,192][DEBUG][logstash.javapipeline    ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2e7382c5 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 sleep>"}
[2025-03-03T23:26:23,195][TRACE][logstash.agent           ] Converge results {:success=>true, :failed_actions=>[], :successful_actions=>["id: main, action_type: LogStash::PipelineAction::Create"]}
[2025-03-03T23:26:23,196][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2025-03-03T23:26:23,196][DEBUG][logstash.pipelineresourceusagevalidator] For a baseline of 2KB events, the maximum heap memory consumed across 1 pipelines may reach up to 0.24% of the entire heap (more if the events are bigger).
[2025-03-03T23:26:23,205][INFO ][logstash.inputs.elasticsearch.searchafter][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] Query start
[2025-03-03T23:26:23,205][DEBUG][logstash.inputs.elasticsearch.searchafter][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] Query progress
[2025-03-03T23:26:23,205][TRACE][logstash.inputs.elasticsearch.searchafter][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] search options {:size=>1000, :body=>{"sort"=>["_doc"], :pit=>{:id=>"yvaYBAAA", :keep_alive=>"1m"}}}
[2025-03-03T23:26:23,229][INFO ][logstash.inputs.elasticsearch.searchafter][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] Query completed
[2025-03-03T23:26:23,230][INFO ][logstash.inputs.elasticsearch.searchafter][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] Closing point in time (PIT)
[2025-03-03T23:26:23,250][DEBUG][logstash.inputs.elasticsearch][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] Closing {:plugin=>"LogStash::Inputs::Elasticsearch"}
[2025-03-03T23:26:23,251][DEBUG][logstash.pluginmetadata  ][main][edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479] Removing metadata for plugin edb6c95865b83ed2b5bb5044b86eb84cb3e9c7f6965b42978bf19d1016a6f479
[2025-03-03T23:26:23,251][DEBUG][logstash.javapipeline    ][main] Input plugins stopped! Will shutdown filter/output workers. {:pipeline_id=>"main", :thread=>"#<Thread:0x2e7382c5 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-03-03T23:26:23,252][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x74cdc2f1 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 run>"}
[2025-03-03T23:26:23,312][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x353cd31e /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 run>"}
[2025-03-03T23:26:23,361][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x15dbc6a0 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,361][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x529564bf /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,361][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x4cc16598 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,361][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x2361275b /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,362][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x41e2ba0c /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,372][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x71a05cab /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,372][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x64d8d194 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,372][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<LogStash::WorkerLoopThread:0x43709b06 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:304 dead>"}
[2025-03-03T23:26:23,372][DEBUG][logstash.filters.elasticsearch][main] Closing {:plugin=>"LogStash::Filters::Elasticsearch"}
[2025-03-03T23:26:23,372][DEBUG][logstash.pluginmetadata  ][main] Removing metadata for plugin e5c9e70312257dcd89442c55f9257847fea680ccdeb3141449325e93d13a600f
[2025-03-03T23:26:23,372][DEBUG][logstash.outputs.stdout  ][main] Closing {:plugin=>"LogStash::Outputs::Stdout"}
[2025-03-03T23:26:23,372][DEBUG][logstash.pluginmetadata  ][main] Removing metadata for plugin b81336e386371df23c4d073056ae47dfc482053186a96c3cac332bb5c3511586
[2025-03-03T23:26:23,372][DEBUG][logstash.javapipeline    ][main] Pipeline has been shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x2e7382c5 /Users/mashhur/Dev/elastic/logstash/logstash-core/lib/logstash/java_pipeline.rb:138 run>"}
[2025-03-03T23:26:23,373][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2025-03-03T23:26:23,706][DEBUG][logstash.agent           ] Shutting down all pipelines {:pipelines_count=>0}
[2025-03-03T23:26:23,707][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2025-03-03T23:26:23,708][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Delete/pipeline_id:main}
[2025-03-03T23:26:23,709][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2025-03-03T23:26:23,709][DEBUG][org.logstash.health.MultiIndicator] detached indicator main<=null (res:MultiIndicator{indicators={}})
[2025-03-03T23:26:23,709][DEBUG][org.logstash.health.HealthObserver] detached pipeline indicator [main]
[2025-03-03T23:26:23,709][TRACE][logstash.agent           ] Converge results {:success=>true, :failed_actions=>[], :successful_actions=>["id: main, action_type: LogStash::PipelineAction::Delete"]}
[2025-03-03T23:26:23,710][DEBUG][logstash.instrument.periodicpoller.os] Stopping
[2025-03-03T23:26:23,710][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
[2025-03-03T23:26:23,710][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
[2025-03-03T23:26:23,710][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
[2025-03-03T23:26:23,710][DEBUG][logstash.instrument.periodicpoller.flowrate] Stopping
[2025-03-03T23:26:23,713][DEBUG][logstash.agent           ] API WebServer has stopped running
[2025-03-03T23:26:23,713][INFO ][logstash.runner          ] Logstash shut down.
➜  logstash git:(es-ruby-client-upgrade) ✗ 



mergify bot commented Mar 3, 2025

This pull request does not have a backport label. Could you fix it @mashhurs? 🙏
To fix up this pull request, you need to add the backport labels for the needed branches, such as:

  • backport-8./d is the label to automatically backport to the 8./d branch, where /d is the digit.
  • backport-8.x is the label to automatically backport to the 8.x branch.


mergify bot commented Mar 3, 2025

backport-8.x has been added to help with the transition to the new 8.x branch.
If you don't need it, please use the backport-skip label.

@mergify mergify bot added the backport-8.x Automated backport to the 8.x branch with mergify label Mar 3, 2025

Quality Gate passed

Issues
0 New issues
0 Fixed issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
No data about Duplication

See analysis details on SonarQube

@elasticmachine
Collaborator

💛 Build succeeded, but was flaky

Failed CI Steps

@mashhurs mashhurs changed the title [WIP] Upgrade elasticsearch-ruby client. Upgrade elasticsearch-ruby client. Mar 4, 2025
@mashhurs mashhurs marked this pull request as ready for review March 4, 2025 07:36
@mashhurs mashhurs linked an issue Mar 4, 2025 that may be closed by this pull request
@mashhurs mashhurs added backport-8.17 Automated backport with mergify backport-9.0 Automated backport to the 9.0 branch with mergify backport-8.16 Automated backport with mergify backport-8.18 Automated backport with mergify labels Mar 4, 2025
@donoghuc
Member

donoghuc commented Mar 5, 2025

This looks correct, but it seems like it will depend on releases of the elasticsearch filter/input plugins. It looks like multiple streams of those plugins will need to be released, given the backport labels here (corresponding to LS 8 and 9 respectively).

@mashhurs
Contributor Author

mashhurs commented Mar 6, 2025

This looks correct, but it seems like it will depend on releases of the elasticsearch filter/input plugins. Looks like there will be multiple streams of those will need to be released given the backport labels here (corresponding to LS 8 and 9 respectively).

Right! The limitation is that the LS version with this change (let's say 9.1) will not tolerate older input/filter-elasticsearch plugin versions that don't have elastic-transport gem support.

      # Resolve the Manticore transport class from whichever transport gem is
      # available: prefer the legacy elasticsearch-transport (7.x) namespace and
      # fall back to the new elastic-transport (8.x+) namespace.
      def get_transport_client_class
        require "elasticsearch/transport/transport/http/manticore"
        ::Elasticsearch::Transport::Transport::HTTP::Manticore
      rescue ::LoadError
        require "elastic/transport/transport/http/manticore"
        ::Elastic::Transport::Transport::HTTP::Manticore
      end

However, we will be able to release 8.x-compatible (assuming this change is backported to the 8.x LS core) and 9.x-compatible plugin versions, which means any LS core version (with the 7.x or 8.x elasticsearch transport gems) can load the plugins. And if there are other side-effect issues with this core change, plugins are released independently, which gives us the flexibility to move forward.
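
For illustration, here is a minimal sketch of how a plugin can feed the resolved transport class into the client once both gems are supported. It assumes `get_transport_client_class` from the snippet above is in scope, that the manticore gem is available, and that the generic `hosts`/`transport_class`/`request_timeout` client options are used; the actual construction inside the plugins may differ.

    require "elasticsearch"

    # Pick whichever Manticore transport implementation could be loaded
    # (elasticsearch-transport on older gems, elastic-transport on 8.x+).
    transport_class = get_transport_client_class

    client = Elasticsearch::Client.new(
      hosts: ["https://localhost:9200"],
      transport_class: transport_class,
      request_timeout: 60
    )

    client.info # raises if the cluster is unreachable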

@mashhurs mashhurs requested a review from jsvd March 6, 2025 19:30
Member

@jsvd jsvd left a comment

LGTM

@mashhurs mashhurs removed the backport-8.16 Automated backport with mergify label Mar 13, 2025
@mashhurs mashhurs merged commit e748488 into elastic:main Mar 13, 2025
9 checks passed
mergify bot pushed a commit that referenced this pull request Mar 13, 2025
* Upgrade elasticsearch-ruby client.

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)
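
As a side note on the two fixes named in this commit message: Faraday v2 removed the old basic-auth helper, and the new transport gem renamed the top-level module. A hedged sketch of what such changes typically look like (illustrative credentials and connection setup, not this PR's actual diff):

    require "faraday"

    user     = "elastic"    # illustrative
    password = "changeme"   # illustrative

    conn = Faraday.new(url: "https://localhost:9200") do |f|
      # Faraday 1.x: `f.request :basic_auth, user, password` (removed in Faraday 2)
      # Faraday 2.x replacement:
      f.request :authorization, :basic, user, password
      f.adapter :net_http
    end

    # Module name change that comes with the new transport gem:
    #   ::Elasticsearch::Transport::Transport::HTTP::Manticore   # elasticsearch-transport (old)
    #   ::Elastic::Transport::Transport::HTTP::Manticore         # elastic-transport (new)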
mergify bot pushed a commit that referenced this pull request Mar 13, 2025
* Upgrade elasticsearch-ruby client.

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)
mergify bot pushed a commit that referenced this pull request Mar 13, 2025
* Upgrade elasticsearch-ruby client.

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)
mergify bot pushed a commit that referenced this pull request Mar 13, 2025
* Upgrade elasticsearch-ruby client.

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)
@mashhurs mashhurs mentioned this pull request Mar 13, 2025
5 tasks
mashhurs added a commit that referenced this pull request Mar 17, 2025
* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Update elasticsearch-ruby client in gemfile lock.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
mashhurs added a commit that referenced this pull request Mar 17, 2025
* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make a non-connectable client when the ES client raises a connection refused exception.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
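
The "swallow the exception" bullet above describes a defensive pattern rather than a specific API. A rough sketch of the idea, with a hypothetical wrapper class (the concrete exception classes raised on a refused connection depend on the transport and adapter in use):

    require "elasticsearch"

    class EsClientBuilder
      # Placeholder returned when the cluster cannot be reached at build time.
      NonConnectableClient = Struct.new(:error) do
        def connected?
          false
        end
      end

      def self.build(hosts)
        client = Elasticsearch::Client.new(hosts: hosts)
        client.info # raises when the connection is refused
        client
      rescue StandardError => e
        # Swallow the error and hand back a non-connectable stand-in so the
        # caller can retry or report health later instead of crashing at boot.
        NonConnectableClient.new(e)
      end
    end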
mergify bot added a commit that referenced this pull request Mar 17, 2025
* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make a non-connectable client when the ES client raises a connection refused exception.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
(cherry picked from commit 7f74ce3)
mergify bot added a commit that referenced this pull request Mar 17, 2025
* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make a non-connectable client when the ES client raises a connection refused exception.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
(cherry picked from commit 7f74ce3)
mashhurs added a commit that referenced this pull request Mar 17, 2025
…17306) (#17339)

* [8.x] Upgrade elasticsearch-ruby client. (backport #17161) (#17306)

* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make a non-connectable client when the ES client raises a connection refused exception.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
(cherry picked from commit 7f74ce3)

* Update Gemfile lock to reflect elasticsearch-ruby changes.

* Upgrade faraday to v2 in Gemfile lock.

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Mashhur <[email protected]>
mashhurs added a commit that referenced this pull request Mar 17, 2025
…17306) (#17340)

* [8.x] Upgrade elasticsearch-ruby client. (backport #17161) (#17306)

* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make a non-connectable client when the ES client raises a connection refused exception.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
(cherry picked from commit 7f74ce3)

* Update Gemfile lock after upgrading elasticsearch-ruby client.

* Update faraday to v2 in Gemfile lock.

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Mashhur <[email protected]>
donoghuc added a commit that referenced this pull request Mar 21, 2025
…7385)

* Fix empty node stats pipelines (#17185) (#17197)

Fixed an issue where the `/_node/stats` API displayed empty pipeline metrics
when X-Pack monitoring was enabled

(cherry picked from commit 8678581)

Co-authored-by: kaisecheng <[email protected]>

* Update z_rubycheck.rake to no longer inject Xmx1g (#17211)

This allows the environment variable JRUBY_OPTS to be used for setting properties like Xmx
original pr: #16420

(cherry picked from commit f562f37)

Co-authored-by: kaisecheng <[email protected]>

* Improve warning for insufficient file resources for PQ max_bytes (#16656) (#17222)

This commit refactors the `PersistedQueueConfigValidator` class to provide a
more detailed, accurate and actionable warning when pipeline's PQ configs are at
risk of running out of disk space. See
#14839 for design considerations. The
highlights of the changes include accurately determining the free resources on a
filesystem disk and then providing a breakdown of the usage for each of the
paths configured for a queue.

(cherry picked from commit 0621544)

Co-authored-by: Cas Donoghue <[email protected]>

* gradle task migrate to the new artifacts-api (#17232) (#17236)

This commit migrates gradle task to the new artifacts-api

- remove dependency on staging artifacts
- all builds use snapshot artifacts
- resolve version from current branch, major.x, previous minor,
   with priority given in that order.

Co-authored-by: Andrea Selva <[email protected]>
(cherry picked from commit 0a74568)

Co-authored-by: kaisecheng <[email protected]>

* tests: ls2ls delay checking until events have been processed (#17167) (#17252)

* tests: ls2ls delay checking until events have been processed

* Make sure upstream sends expected number of events before checking the expectation with downstream. Remove unnecessary or duplicated logics from the spec.

* Add exception handling in `wait_for_rest_api` to make wait for LS REST API retriable.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
(cherry picked from commit 73ffa24)

Co-authored-by: Ry Biesemeyer <[email protected]>
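
As a loose illustration of the retriable wait mentioned above (the helper name `wait_for_rest_api` comes from the commit message; the port, endpoint, and retry policy here are assumptions, not the spec's actual code):

    require "net/http"
    require "json"

    def wait_for_rest_api(host: "localhost", port: 9600, retries: 60)
      retries.times do
        begin
          response = Net::HTTP.get_response(URI("http://#{host}:#{port}/_node"))
          return JSON.parse(response.body) if response.is_a?(Net::HTTPSuccess)
        rescue Errno::ECONNREFUSED, Errno::ECONNRESET, Net::OpenTimeout
          # REST API not reachable yet; retry below
        end
        sleep 1
      end
      raise "Logstash REST API did not become available in time"
    end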

* Additional cleanify changes to ls2ls integ tests (#17246) (#17255)

* Additional cleanify changes to ls2ls integ tests: replace heartbeat-input with reload option, set queue drain to get consistent result.

(cherry picked from commit 1e06eea)

Co-authored-by: Mashhur <[email protected]>

* [8.x] Reimplement LogStash::Numeric setting in Java (backport #17127) (#17273)

This is an automatic backport of pull request #17127 done by [Mergify](https://mergify.com).

----

* Reimplement LogStash::Numeric setting in Java (#17127)

Reimplements `LogStash::Setting::Numeric` Ruby setting class into the `org.logstash.settings.NumericSetting` and exposes it through `java_import` as `LogStash::Setting::NumericSetting`.
Updates the rspec tests:
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, since that is the kind of exception thrown by the Java code during verification.

(cherry picked from commit 07a3c8e)

* Fixed reference of SettingNumeric class (on main modules were removed)

---------

Co-authored-by: Andrea Selva <[email protected]>

* [CI] Health report integration tests use the new artifacts-api (#17274) (#17277)

migrate to the new artifacts-api

(cherry picked from commit feb2b92)

Co-authored-by: kaisecheng <[email protected]>

* Backport 17203 and 17267 8.x (#17270)

* Pluginmanager clean after mutate (#17203)

* pluginmanager: always clean after mutate

* pluginmanager: don't skip updating plugins installed with --version

* pr feedback

(cherry picked from commit 8c96913)

* Pluginmanager install preserve (#17267)

* tests: integration tests for pluginmanager install --preserve

* fix regression where pluginmanager's install --preserve flag didn't

* [Backport 8.x] benchmark script (#17283)

This commit cherry-picked the missing benchmark script PRs
The deprecated artifacts-api is removed

[CI] benchmark uses the new artifacts-api (#17224)
[CI] benchmark readme (#16783)
Introduce a new flag to explicitly permit legacy monitoring (#16586) (Only take the benchmark script)
[ci] fix wrong queue type in benchmark marathon (#16465)
[CI] fix benchmark marathon (#16447)
[CI] benchmark dashboard and pipeline for testing against multiple versions (#16421)

* Fix pqcheck and pqrepair on Windows (#17210) (#17259)

A recent change to pqcheck attempted to address an issue where the
pqcheck would not run on Windows machines when located in a folder containing
a space, such as "C:\program files\elastic\logstash". While this fixed an
issue with spaces in folders, it introduced a new issue related to Java options,
and the pqcheck was still unable to run on Windows.

This PR attempts to address the issue, by removing the quotes around the Java options,
which caused the option parsing to fail, and instead removes the explicit setting of
the classpath - the use of `set CLASSPATH=` in the `:concat` function is sufficient
to set the classpath, and should also fix the spaces issue

Fixes: #17209
(cherry picked from commit ba5f215)

Co-authored-by: Rob Bavey <[email protected]>

* Shareable function for partitioning integration tests (#17223) (#17303)

For the fedramp high work https://github.com/elastic/logstash/pull/17038/files a
use case for multiple scripts consuming the partitioning functionality emerged.
As we look to more advanced partitioning we want to ensure that the
functionality will be consumable from multiple scripts.

See #17219 (comment)

(cherry picked from commit d916972)

Co-authored-by: Cas Donoghue <[email protected]>

* [8.x] Surface failures from nested rake/shell tasks (backport #17310) (#17317)

* Surface failures from nested rake/shell tasks (#17310)

Previously when rake would shell out the output would be lost. This
made debugging CI logs difficult. This commit updates the stack with
improved message surfacing on error.

(cherry picked from commit 0d931a5)

# Conflicts:
#	rubyUtils.gradle

* Extend ruby linting tasks to handle file inputs (#16660)

This commit extends the gradle and rake tasks to pass through a list of files
for rubocop to lint. This allows more specificity and fine grained control for
linting when the consumer of the tasks only wishes to lint a select few files.

* Ensure shellwords library is loaded

Without this depending on task load order `Shellwords` may not be available.

---------

Co-authored-by: Cas Donoghue <[email protected]>

* Forward Port of Release notes for `8.16.5` and `8.17.3` (#17187), (#17188) (#17266) (#17321)

* Forward Port of Release notes for 8.17.3 (#17187)

* Update release notes for 8.17.3

---------

Co-authored-by: logstashmachine <[email protected]>
Co-authored-by: Rob Bavey <[email protected]>

* Forward Port of Release notes for 8.16.5 (#17188)

* Update release notes for 8.16.5

---------

Co-authored-by: logstashmachine <[email protected]>
Co-authored-by: Rob Bavey <[email protected]>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: logstashmachine <[email protected]>
(cherry picked from commit 63e8fd1)

Co-authored-by: Rob Bavey <[email protected]>

* Add Deprecation tag to arcsight module (#17331)

* [8.x] Upgrade elasticsearch-ruby client. (backport #17161) (#17306)

* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make a non-connectable client when the ES client raises a connection refused exception.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>

* Removed unused configHash computation that can be replaced by PipelineConfig.configHash() (#17336) (#17345)

Removed unused configHash computation happening in AbstractPipeline and used only in tests replaced by PipelineConfig.configHash() invocation

(cherry picked from commit 787fd2c)

Co-authored-by: Andrea Selva <[email protected]>

* Use org.logstash.common.Util to hashing by default to SHA256 (#17346) (#17352)

Removes the usage of Apache Commons Codec MessageDigest in favor of the internal Util class, which embodies the hashing methods.

(cherry picked from commit 9c0e50f)

Co-authored-by: Andrea Selva <[email protected]>

* Added test to verify the int overflow happen (#17353) (#17354)

Use long instead of int type to keep the length of the first token.

The size limit validation requires summing two integers, one being the length of the accumulated chars so far plus the next fragment head part. If either of the two sizes is close to the max integer it generates an overflow and could fail the check at https://github.com/elastic/logstash/blob/9c0e50faacc4700da3dc84a3ba729b84bff860a8/logstash-core/src/main/java/org/logstash/common/BufferedTokenizerExt.java#L123.

To fall into this case it's required that sizeLimit is bigger than 2^32 bytes (2GB) and data fragments without any line delimiter are pushed to the tokenizer with a total size close to 2^32 bytes.

(cherry picked from commit afde43f)

Co-authored-by: Andrea Selva <[email protected]>

* [8.x] add ci shared qualified-version script (backport #17311) (#17348)

* add ci shared qualified-version script (#17311)

* ci: add shareable script for generating qualified version

* ci: use shared script to generate qualified version

(cherry picked from commit 10b5a84)

# Conflicts:
#	.buildkite/scripts/dra/build_docker.sh

* resolve merge conflict

---------

Co-authored-by: Rye Biesemeyer <[email protected]>

* tests: make integration split quantity configurable (#17219) (#17367)

* tests: make integration split quantity configurable

Refactors shared splitter bash function to take a list of files on stdin
and split into a configurable number of partitions, emitting only those from
the currently-selected partition to stdout.

Also refactors the only caller in the integration_tests launcher script to
accept an optional partition_count parameter (defaulting to `2` for backward-
compatibility), to provide the list of specs to the function's stdin, and to
output relevant information about the quantity of partition splits and which
was selected.

* ci: run integration tests in 3 parts

(cherry picked from commit 3e0f488)

Co-authored-by: Rye Biesemeyer <[email protected]>

* Update buildkite with new patterns from 8.x

This commit updates the buildkite definitions to be compatible with the
upstream 8.x branch. Specifically:
 - Split integration tests for fips into 3 runners.
 - Use the new shared bash helper for computing QUALIFIED_VERSION

It also continues standardization of using a "fedrampHighMode" for indicating
the tests should be running in the context of our custom image for the SRE team.

* Bug fix: Actually use shared integration_tests.sh file

After refactoring to use the same script, I forgot to actually use it
in the buildkite definition...

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: kaisecheng <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <[email protected]>
Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Andrea Selva <[email protected]>
Co-authored-by: Rob Bavey <[email protected]>
Co-authored-by: Mashhur <[email protected]>
donoghuc added a commit that referenced this pull request Apr 10, 2025
…17541)

* Provision automatic test runs for ruby/java unit tests and integration tests with fips mode (#17029)

* Run ruby unit tests under FIPS mode

This commit shows a proposed pattern for running automated tests for logstash in
FIPS mode. It uses a new identifier in gradle for conditionally setting
properties to configure fips mode. The tests are run in a container
representative of the base image the final artifacts will be built from.

* Move everything from qa/fips -> x-pack

This commit moves test setup/config under x-pack dir.

* Extend test pipelines for fips mode to java unit tests and integration

* Add git to container for gradle

* move fips-mode gradle hooks to x-pack

* Skip license check for now

---------

Co-authored-by: Ry Biesemeyer <[email protected]>

* Split fips integration tests into two steps (#17038)

* Split fips integration tests into two steps

The integration tests suite takes about 40 minutes. This is far too slow for
reasonable feedback on a PR. This commit follows the pattern for the non-fips
integration tests whereby the tests are split into two sections that can run in
parallel across two steps. This should halve the feedback time.

The logic for getting a list of specs files to run has been extracted to a
shared shell script for use here and in the integration tests shell script.

* Use shared function for splitting integration tests

The logic for getting a list of specs to run has been extracted so that it can
be shared across fips and non fips integration test modes. This commit updates
the non fips integration tests to use the shared function.

* fix typo in helper name (kebab case, not snake)

* Escape $ so buildkite upload does not try to interpolate

* Wrap integration tests in shell script to avoid BK interpolation

* Move entrypoint for running integration tests inside docker

* Skip offline pack manager tests when running in fips mode (#17160)

This commit introduces a pattern for skipping tests we do not want to run in
fips mode. In this case the plugin manager tests rely on using
bundler/net-http/openssl which is not configured to be run with bouncycastle
fips providers.
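
As a rough illustration of that skip pattern (the `:skip_fips` tag name appears later in this history; the RSpec configuration and the environment-variable trigger shown here are assumptions, not the PR's exact helper):

    require "rspec"

    RSpec.configure do |config|
      # Exclude examples tagged :skip_fips when the suite runs in FIPS mode.
      config.filter_run_excluding(skip_fips: true) if ENV["FIPS_MODE"] == "true"
    end

    RSpec.describe "plugin manager update", skip_fips: true do
      it "updates plugins from rubygems over net/http" do
        # not run under FIPS mode
      end
    end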

* Get tests running in FIPS environment (#17096)

* Modify FIPS test runner environment for integration tests

This commit makes two small changes to the dockerfile used to define the fips
test environment. Specifically it adds curl (which is required by integration
tests), make (which is required by test setup), adds a c compiler (gcc and glibc
for integration tests which compile a small c program) and turns off debug ssl
logging as it is extremely noisy in logs and breaking some assumptions in
tests about logfile content.

Closes elastic/ingest-dev#5074

* Do not run test env as root

The elastic stack is not meant to be run as root. This commit updates the test
environment to provision a non root user and have the container context execute
under that provisioned user.

Closes elastic/ingest-dev#5088

* Skip unit tests that reach out to rubygems for fips mode

The `update` test setup reaches out to rubygems with net/http which is
incompatible with our use of openssl in fips mode. This commit skips those tests
when running under fips.

See elastic/ingest-dev#5071

* Work around random data request limits in BCFIPS

This commit changes test setup to make chunked calls to random data generation
in order to work around a limit in fips mode.

See elastic/ingest-dev#5072 for details.

* Skip tests validating openssl defaults

Openssl will not be used when running under FIPS mode. The test setup and tests
themselves were failing when running in FIPS mode. This commit skips the tests
that are covering behavior that will be disabled.

See elastic/ingest-dev#5069

* Skip tests that require pluginmanager to install plugins

This commit skips tests that rely on using the pluginmanager to install plugins
during tests which require reaching out to rubygems.

See elastic/ingest-dev#5108

* Skip prepare offline pack integration tests in fips mode

The offline pack tests require on pluginmanager to use net-http library for
resolving deps. This will not operate under fips mode. Skip when running in fips
mode.

See elastic/ingest-dev#5109

* Ensure a gem executable is on path for test setup

This commit modifies the generate-gems script to ensure that a `gem` executable
is on the path. If there is not one on the test runner, then use the one bundled
with vendored jruby.

* Skip webserver specs when running in FIPS mode

This commit skips the existing webserver tests. We have some options and need to
understand some requirements for the webserver functionality for fips mode. The
 elastic/ingest-dev#5110 issue has a ton of details.

* Skip cli `remove` integration tests for FIPS

This commit skips tests that are running `remove` action for the pluginmanager.
These require reaching out to rubygems which is not available in FIPS mode.
These tests were added post initial integration tests scoping work but are
clearly requiring skips for FIPS mode.

* Add openssl package to FIPS testing env container

The setup script for filebeats requires an openssl executable. This commit
updates the testing container with this tool.

See elastic/ingest-dev#5107

* Re-introduce retries for FIPS tests now that we are in a passing state

* Backport 17203 and 17267 fedramp8x (#17271)

* Pluginmanager clean after mutate (#17203)

* pluginmanager: always clean after mutate

* pluginmanager: don't skip updating plugins installed with --version

* pr feedback

(cherry picked from commit 8c96913)

* Pluginmanager install preserve (#17267)

* tests: integration tests for pluginmanager install --preserve

* fix regression where pluginmanager's install --preserve flag didn't

* Add :skip_fips to update_spec.rb

* Run x-pack tests under FIPS mode (#17254)

This commit adds two new CI cells to cover x-pack tests running in FIPS mode.
This ensures we have coverage of these features when running existing x-pack
tests.

* observabilitySRE: docker rake tasks (#17272)

* observabilitySRE: docker rake tasks

* Apply suggestions from code review

Co-authored-by: Cas Donoghue <[email protected]>

* Update rakelib/plugin.rake

* Update rakelib/plugin.rake

* Update docker/Makefile

Co-authored-by: Cas Donoghue <[email protected]>

---------

Co-authored-by: Cas Donoghue <[email protected]>

* Ensure env2yaml dep is properly expressed in observabilitySRE task (#17305)

The `build-from-local-observability-sre-artifacts` task depends on the `env2yaml`
task. This was easy to miss in local development if other images had been built.
This commit updates the makefile to properly define that dependency.

* Add a smoke test for observability SRE container (#17298)

* Add a smoke test for observability SRE container

Add a CI cell to ensure the observability container is building successfully. In
order to show success run a quick smoke test to point out any glaring issues.

This adds some general, low risk plugins for doing quick testing. This will help
developers in debugging as we work on this image.

* Show what is happening when rake fails

* Debug deeper in the stack

Show the stdout/stderr when shelling out fails.

* Debug layers of build tooling

Open3 is not capturing stdout for some reason. Capture it and print to see what is wrong in CI.

* Actually run ls command in docker container 🤦

* Update safe_system based on code review suggestion

* Dynamically generate version for container invocation

Co-authored-by: Ry Biesemeyer <[email protected]>

* Refactor smoke test setup to script

Avoid interpolation backflips with buildkite by extracting to a script.

* Split out message surfacing improvment to separate PR.

Moved to: #17310

* Extract version qualifier into standalone script

* Wait for version-qualifier.sh script to land upstream

Use  #17311 once it lands and gets
backported to 8.x. For now just hard code version.

---------

Co-authored-by: Ry Biesemeyer <[email protected]>

* Configure observability SRE container for FIPS (#17297)

This commit establishes a pattern for configuring the container to run in fips mode.

- Use chainguard-fips
- Copy over java properties from ls tar archive
- Convert default jks to BC keystore
- Configure logstash to use java properties and FIPS config

NOTE: this assumes bouncycastle jars are in the tarball. The
elastic/ingest-dev#5049 ticket will address that.

* Exclude plugin manager and keystore cli from observabilitySRE artifact (#17375)

* Conditionally install bcfips jars when building/testing observabilitySRE (#17359)

* Conditionally install bcfips jars when building for observabilitySRE

This commit implements a pattern for performing specific gradle tasks based on a
newly named "fedrampHighMode" option. This option is used to configure tests to
run with additional configuration specific to the observabilitySRE use case.
Similarly the additional jar dependencies for bouncycastle fips providers are
conditionally installed gated on the "fedrampHighMode" option.

In order to ensure the the "fedrampHighMode" option persists through the layers
of sub-processes spawned between gradle and rake we store and respect an
environment variable FEDRAMP_HIGH_MODE. This may be useful generally in building
the docker image.

Try codereview suggestion

* Use gradle pattern for setting properties with env vars

Gradle has a mechanism for setting properties with environment variables
prefixed with `ORG_GRADLE_PROJECT`. This commit updates the gradle tasks to use
that pattern.

See
https://docs.gradle.org/current/userguide/build_environment.html#setting_a_project_property
for details.

* Pull in latest commits from 8.x and update based on new patterns (#17385)

* Fix empty node stats pipelines (#17185) (#17197)

Fixed an issue where the `/_node/stats` API displayed empty pipeline metrics
when X-Pack monitoring was enabled

(cherry picked from commit 8678581)

Co-authored-by: kaisecheng <[email protected]>

* Update z_rubycheck.rake to no longer inject Xmx1g (#17211)

This allows the environment variable JRUBY_OPTS to be used for setting properties like Xmx
original pr: #16420

(cherry picked from commit f562f37)

Co-authored-by: kaisecheng <[email protected]>

* Improve warning for insufficient file resources for PQ max_bytes (#16656) (#17222)

This commit refactors the `PersistedQueueConfigValidator` class to provide a
more detailed, accurate and actionable warning when pipeline's PQ configs are at
risk of running out of disk space. See
#14839 for design considerations. The
highlights of the changes include accurately determining the free resources on a
filesystem disk and then providing a breakdown of the usage for each of the
paths configured for a queue.

(cherry picked from commit 0621544)

Co-authored-by: Cas Donoghue <[email protected]>

* gradle task migrate to the new artifacts-api (#17232) (#17236)

This commit migrates gradle task to the new artifacts-api

- remove dependency on staging artifacts
- all builds use snapshot artifacts
- resolve version from current branch, major.x, previous minor,
   with priority given in that order.

Co-authored-by: Andrea Selva <[email protected]>
(cherry picked from commit 0a74568)

Co-authored-by: kaisecheng <[email protected]>

* tests: ls2ls delay checking until events have been processed (#17167) (#17252)

* tests: ls2ls delay checking until events have been processed

* Make sure upstream sends expected number of events before checking the expectation with downstream. Remove unnecessary or duplicated logics from the spec.

* Add exception handling in `wait_for_rest_api` to make wait for LS REST API retriable.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>
(cherry picked from commit 73ffa24)

Co-authored-by: Ry Biesemeyer <[email protected]>

* Additional cleanify changes to ls2ls integ tests (#17246) (#17255)

* Additional cleanify changes to ls2ls integ tests: replace heartbeat-input with reload option, set queue drain to get consistent result.

(cherry picked from commit 1e06eea)

Co-authored-by: Mashhur <[email protected]>

* [8.x] Reimplement LogStash::Numeric setting in Java (backport #17127) (#17273)

This is an automatic backport of pull request #17127 done by [Mergify](https://mergify.com).

----

* Reimplement LogStash::Numeric setting in Java (#17127)

Reimplements `LogStash::Setting::Numeric` Ruby setting class into the `org.logstash.settings.NumericSetting` and exposes it through `java_import` as `LogStash::Setting::NumericSetting`.
Updates the rspec tests:
- verifies that `java.lang.IllegalArgumentException` is thrown instead of `ArgumentError`, since that is the kind of exception thrown by the Java code during verification.

(cherry picked from commit 07a3c8e)

* Fixed reference of SettingNumeric class (on main modules were removed)

---------

Co-authored-by: Andrea Selva <[email protected]>

* [CI] Health report integration tests use the new artifacts-api (#17274) (#17277)

migrate to the new artifacts-api

(cherry picked from commit feb2b92)

Co-authored-by: kaisecheng <[email protected]>

* Backport 17203 and 17267 8.x (#17270)

* Pluginmanager clean after mutate (#17203)

* pluginmanager: always clean after mutate

* pluginmanager: don't skip updating plugins installed with --version

* pr feedback

(cherry picked from commit 8c96913)

* Pluginmanager install preserve (#17267)

* tests: integration tests for pluginmanager install --preserve

* fix regression where pluginmanager's install --preserve flag didn't

* [Backport 8.x] benchmark script (#17283)

This commit cherry-picked the missing benchmark script PRs
The deprecated artifacts-api is removed

[CI] benchmark uses the new artifacts-api (#17224)
[CI] benchmark readme (#16783)
Introduce a new flag to explicitly permit legacy monitoring (#16586) (Only take the benchmark script)
[ci] fix wrong queue type in benchmark marathon (#16465)
[CI] fix benchmark marathon (#16447)
[CI] benchmark dashboard and pipeline for testing against multiple versions (#16421)

* Fix pqcheck and pqrepair on Windows (#17210) (#17259)

A recent change to pqcheck attempted to address an issue where the
pqcheck would not run on Windows machines when located in a folder containing
a space, such as "C:\program files\elastic\logstash". While this fixed an
issue with spaces in folders, it introduced a new issue related to Java options,
and the pqcheck was still unable to run on Windows.

This PR attempts to address the issue, by removing the quotes around the Java options,
which caused the option parsing to fail, and instead removes the explicit setting of
the classpath - the use of `set CLASSPATH=` in the `:concat` function is sufficient
to set the classpath, and should also fix the spaces issue

Fixes: #17209
(cherry picked from commit ba5f215)

Co-authored-by: Rob Bavey <[email protected]>

* Shareable function for partitioning integration tests (#17223) (#17303)

For the fedramp high work https://github.com/elastic/logstash/pull/17038/files a
use case for multiple scripts consuming the partitioning functionality emerged.
As we look to more advanced partitioning we want to ensure that the
functionality will be consumable from multiple scripts.

See #17219 (comment)

(cherry picked from commit d916972)

Co-authored-by: Cas Donoghue <[email protected]>

* [8.x] Surface failures from nested rake/shell tasks (backport #17310) (#17317)

* Surface failures from nested rake/shell tasks (#17310)

Previously when rake would shell out the output would be lost. This
made debugging CI logs difficult. This commit updates the stack with
improved message surfacing on error.

(cherry picked from commit 0d931a5)

# Conflicts:
#	rubyUtils.gradle

* Extend ruby linting tasks to handle file inputs (#16660)

This commit extends the gradle and rake tasks to pass through a list of files
for rubocop to lint. This allows more specificity and fine grained control for
linting when the consumer of the tasks only wishes to lint a select few files.

* Ensure shellwords library is loaded

Without this depending on task load order `Shellwords` may not be available.

---------

Co-authored-by: Cas Donoghue <[email protected]>

* Forward Port of Release notes for `8.16.5` and `8.17.3` (#17187), (#17188) (#17266) (#17321)

* Forward Port of Release notes for 8.17.3 (#17187)

* Update release notes for 8.17.3

---------

Co-authored-by: logstashmachine <[email protected]>
Co-authored-by: Rob Bavey <[email protected]>

* Forward Port of Release notes for 8.16.5 (#17188)

* Update release notes for 8.16.5

---------

Co-authored-by: logstashmachine <[email protected]>
Co-authored-by: Rob Bavey <[email protected]>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: logstashmachine <[email protected]>
(cherry picked from commit 63e8fd1)

Co-authored-by: Rob Bavey <[email protected]>

* Add Deprecation tag to arcsight module (#17331)

* [8.x] Upgrade elasticsearch-ruby client. (backport #17161) (#17306)

* Upgrade elasticsearch-ruby client. (#17161)

* Fix Faraday removed basic auth option and apply the ES client module name change.

(cherry picked from commit e748488)

* Apply the required changes in elasticsearch_client.rb after upgrading the elasticsearch-ruby client to 8.x

* Swallow the exception and make a non-connectable client when the ES client raises a connection refused exception.

---------

Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Mashhur <[email protected]>

* Removed unused configHash computation that can be replaced by PipelineConfig.configHash() (#17336) (#17345)

Removed unused configHash computation happening in AbstractPipeline and used only in tests replaced by PipelineConfig.configHash() invocation

(cherry picked from commit 787fd2c)

Co-authored-by: Andrea Selva <[email protected]>

* Use org.logstash.common.Util to hashing by default to SHA256 (#17346) (#17352)

Removes the usage of Apache Commons Codec MessageDigest in favor of the internal Util class, which embodies the hashing methods.

(cherry picked from commit 9c0e50f)

Co-authored-by: Andrea Selva <[email protected]>

* Added test to verify the int overflow happen (#17353) (#17354)

Use long instead of int type to keep the length of the first token.

The size limit validation requires summing two integers, one being the length of the accumulated chars so far plus the next fragment head part. If either of the two sizes is close to the max integer it generates an overflow and could fail the check at https://github.com/elastic/logstash/blob/9c0e50faacc4700da3dc84a3ba729b84bff860a8/logstash-core/src/main/java/org/logstash/common/BufferedTokenizerExt.java#L123.

To fall into this case it's required that sizeLimit is bigger than 2^32 bytes (2GB) and data fragments without any line delimiter are pushed to the tokenizer with a total size close to 2^32 bytes.

(cherry picked from commit afde43f)

Co-authored-by: Andrea Selva <[email protected]>

* [8.x] add ci shared qualified-version script (backport #17311) (#17348)

* add ci shared qualified-version script (#17311)

* ci: add shareable script for generating qualified version

* ci: use shared script to generate qualified version

(cherry picked from commit 10b5a84)

# Conflicts:
#	.buildkite/scripts/dra/build_docker.sh

* resolve merge conflict

---------

Co-authored-by: Rye Biesemeyer <[email protected]>

* tests: make integration split quantity configurable (#17219) (#17367)

* tests: make integration split quantity configurable

Refactors shared splitter bash function to take a list of files on stdin
and split into a configurable number of partitions, emitting only those from
the currently-selected partition to stdout.

Also refactors the only caller in the integration_tests launcher script to
accept an optional partition_count parameter (defaulting to `2` for backward-
compatibility), to provide the list of specs to the function's stdin, and to
output relevant information about the quantity of partition splits and which
was selected.

* ci: run integration tests in 3 parts

(cherry picked from commit 3e0f488)

Co-authored-by: Rye Biesemeyer <[email protected]>

* Update buildkite with new patterns from 8.x

This commit updates the buildkite definitions to be compatible with the
upstream 8.x branch. Specifically:
 - Split integration tests for fips into 3 runners.
 - Use the new shared bash helper for computing QUALIFIED_VERSION

It also continues standardization of using a "fedrampHighMode" for indicating
the tests should be running in the context of our custom image for the SRE team.

* Bug fix: Actually use shared integration_tests.sh file

After refactoring to use the same script, I forgot to actually use it
in the buildkite definition...

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: kaisecheng <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ry Biesemeyer <[email protected]>
Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Andrea Selva <[email protected]>
Co-authored-by: Rob Bavey <[email protected]>
Co-authored-by: Mashhur <[email protected]>

* Pin rubocop-ast development gem due to new dep on prism (#17407) (#17433)

The rubocop-ast gem just introduced a new dependency on prism.
 - https://rubygems.org/gems/rubocop-ast/versions/1.43.0

In our install default gem rake task we are seeing issues trying to build native
extensions. I see that in upstream jruby they are seeing a similar problem (at
least it is the same failure mode: jruby/jruby#8415).

This commit pins rubocop-ast to 1.42.0 which is the last version that did not
have an explicit prism dependency.

(cherry picked from commit 6de59f2)

Co-authored-by: Cas Donoghue <[email protected]>

* Add age filter fedramp (#17434)

* net-zero-change refactor

* add logstash-filter-age to observabilitySRE artifact

* Add licenses for bouncycastle fips jars (#17406)

This commit adds licences for bouncycastle jars that are added for the
observability SRE container artifact. It re-enables the previously disabled
license check and adds a new one running in fips mode.

* Publish Observability SRE images to internal container registry (#17401)

* POC for publishing observability SRE images

This commit adds a step to the pull_request_pipeline buildkite definition to
push a docker image to the elastic container registry. It is added here to show
that we have the proper creds etc in CI to push the container where it needs to
go. We will likely move this into the DRA pipeline once we are confident it is
pushing to the correct place with a naming convention that works for all
consumers/producers.

The general idea is to build the container with our gradle task, then once we
have that image we can tag it with the git sha and a "latest" identifier. This
would allow consumers to choose between an exact sha for a stream like 8.19.0 or
the "latest". I will also need to factor in the case where we have the tag
*without* the sha postfix. Obviously we will want to fold this in to the existing DRA
pipeline for building/staging images but for now it seems reasonable to handle
this separately.

* check variable resolution

* Move POC code into DRA pipeline

This commit takes the POC from the pull_request_pipeline and adds it to the DRA
pipeline. Notably, we take care to not disrupt anything about the existing DRA
pipeline by making this wait until after the artifacts are published and we set
a soft_fail. While this is being introduced and stabilized we want to ensure the
existing DRA pipeline continues to work without interruption. As we get more
stability we can look at a tighter integration.

* Disambiguate architectures

Eventually we will want to do proper annotations with manifests but for now
just add arch to the tag.

* Use docker manifest for multi-architecture builds

This commit refactors the POC pipeline for pushing observabilty SRE containers
to handle conflicts for tags based on target architectures. Cells with
respective architectures build containers and push to the container registry
with a unique identifier. Once those exist we introduce a separate step to use
the docker manifest command to annotate those images such that a container
client can download the correct image based on architecture. As a result for
every artifact there will be 2 images pushed (one for each arch) and N manifests
pushed. The manifests will handle the final naming that the consumer would
expect.

* Refactor docker naming scheme

In order to follow more closely the existing tagging scheme this commit
refactors the naming for images to include the build sha BEFORE the SNAPSHOT
identifier. While this does not exactly follow the whole system that exists
today for container images in DRA it follows a pattern that is more similar.
Ideally we can iterate to fold handling of this container into DRA and in that
case consumers would not need to update their patterns for identifying images.

* Code review refactor

Rename INCLUDE_SHA to INCLUDE_COMMIT_ID in qualified-version script.
Confine use of this argument to individual invocations instead at top level in scripts.

* Build observabilitySRE containers after DRA is published

This gates build/push for observability SRE containers on success of DRA pipeline.

* x-pack: add fips validation plugin from x-pack (#16940)

* x-pack: add fips_validation plugin to be included in fips builds

The `logstash-integration-fips_validation` plugin provides no runtime
pipeline plugins, but instead provides hooks to ensure that the logstash
process is correctly configured for compliance with FIPS 140-3.

It is installed while building the observabilitySRE artifacts.

* fips validation: ensure BCFIPS,BCJSSE,SUN are first 3 security providers

* remove re-injection of BCFIPS jars

* Update lib/bootstrap/rubygems.rb

* add integration spec for fips_validation plugin

* add missing logstash_plugin helper

* fixup

* skip non-fips spec on fips-configured artifact, add spec details

* Improve smoke tests for observability SRE image (#17486)

* Improve smoke tests for observability SRE image

This commit adds a new rspec test to run the observability SRE container in a
docker compose network with filebeat and elasticsearch. It uses some simple test
data through a pipeline with plugins we expect to be used in production. The
rspec tests will ensure the test data is flowing from filebeat to logstash to
elasticsearch by querying elasticsearch for expected transformed data.

* REVERT ME: debug whats goig on in CI :(

* Run filebeat container as root

* Work around strict file ownership perms for filebeat

We add the filebeat config in a volume; the permissions checks fail due to the test
runner not being a root user. This commit disables that check in filebeat, as that
seems to be the consensus solution online for example: https://event-driven.io/en/tricks_on_how_to_set_up_related_docker_images/

* Dynamically generate PKI instead of checking it in

Instead of checking in PKI, dynamically generate it with gradle task for
starting containers and running the tests. This improvement avoids github
warning of checked in keys and avoid expiration headaches. Generation is very
fast and does not add any significant overhead to test setup.

* Remove use of "should" in rspec docstrings

see https://github.com/rubocop/rspec-style-guide?tab=readme-ov-file#should-in-example-docstrings

* Ensure permissions readable for volume

Now that certs are dynamically generated, ensure they are able to be read in container

* Use elasticsearch-fips image for smoke testing

* Add git ignore for temp certs

* Fix naming convention for integration tests

Co-authored-by: Rye Biesemeyer <[email protected]>

* Use parameter expansion for FEDRAMP_HIGH_MODE

Co-authored-by: Rye Biesemeyer <[email protected]>

* Use parameter expansion for FEDRAMP_HIGH_MODE

Co-authored-by: Rye Biesemeyer <[email protected]>

* Use parameter expansion for FEDRAMP_HIGH_MODE

Co-authored-by: Rye Biesemeyer <[email protected]>

---------

Co-authored-by: Ry Biesemeyer <[email protected]>
Co-authored-by: Ry Biesemeyer <[email protected]>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: kaisecheng <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Mashhur <[email protected]>
Co-authored-by: Andrea Selva <[email protected]>
Co-authored-by: Rob Bavey <[email protected]>
Co-authored-by: Mashhur <[email protected]>

NOTE: we decided to squash these commits as the feature branch had cherry-picks (and squashed change sets, 182f15e) from 8.x which would potentially make the commit history confusing. We determined that the benefit of having individual commits from the feature branch was outweighed by the potentially confusing git history. This will also make porting this bit of work to other streams simpler.
Labels
backport-8.x Automated backport to the 8.x branch with mergify backport-8.17 Automated backport with mergify backport-8.18 Automated backport with mergify backport-9.0 Automated backport to the 9.0 branch with mergify
Projects
None yet
Development

Successfully merging this pull request may close these issues.

migrate Logstash core to elasticsearch-ruby 9.x
4 participants