
[🐛 Bug]: Using Healthchecks to monitor nodes causes "Binding additional locator mechanisms: relative" #2705

Closed
vcc-ehemdal opened this issue Mar 11, 2025 · 12 comments · Fixed by SeleniumHQ/selenium#15448

Comments

@vcc-ehemdal

What happened?

We added basic healthchecks since we've been experiencing issues with Selenium lately.

The healthcheck added on the node:

healthcheck:
  test: ["CMD", "/opt/bin/check-grid.sh", "--host", "selenium-hub"]
  start_period: 30s
  interval: 15s
  timeout: 5s
  retries: 4

With the healthcheck enabled, we see connectivity issues (or something similar) between the node and the hub.

Without the healthcheck everything works as expected.

What could cause this issue?

The setup is three machines connected on a docker stack deployed with

docker stack deploy -c docker-compose.yml grid --detach=true

Running docker exec NODE_ID /opt/bin/check-grid.sh --host selenium-hub works.

Command used to start Selenium Grid with Docker (or Kubernetes)

services:
  chrome:
    image: selenium/node-chrome:134.0
    shm_size: 2gb
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_VNC_NO_PASSWORD=true
      - SE_BROWSER_ARGS_DISABLE_SEARCH_ENGINE=--disable-search-engine-choice-screen
      - SE_BROWSER_ARGS_DISABLE_SHM=--disable-dev-shm-usage
      - SE_BROWSER_ARGS_START_MAXIMIZED=--start-maximized
      - SE_BROWSER_ARGS_DISABLE_BREAKPAD=--disable-breakpad
    deploy:
      replicas: 15
      restart_policy:
        condition: any
      placement:
        max_replicas_per_node: 5
      resources:
        limits:
          memory: 2000M
    healthcheck:
      test: ["CMD", "/opt/bin/check-grid.sh", "--host", "selenium-hub"]
      start_period: 30s
      interval: 15s
      timeout: 5s
      retries: 4
    entrypoint: bash -c 'SE_OPTS="--host $$HOSTNAME" /opt/bin/entry_point.sh'

  selenium-hub:
    image: selenium/hub:4.29
    ports:
      - "4442:4442"
      - "4443:4443"
      - "8080:4444"
    deploy:
      replicas: 1
      restart_policy:
        condition: any

Relevant log output

2025-03-11 18:37:13,099 INFO Included extra file "/etc/supervisor/conf.d/selenium-grid-hub.conf" during parsing
2025-03-11 18:37:13,102 INFO RPC interface 'supervisor' initialized
2025-03-11 18:37:13,102 INFO supervisord started with pid 8
2025-03-11 18:37:14,105 INFO spawned: 'selenium-grid-hub' with pid 9
Starting Selenium Grid Hub...
2025-03-11 18:37:14,109 INFO success: selenium-grid-hub entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
Appending Selenium option: --port 4444
Appending Selenium option: --log-level INFO
Appending Selenium option: --http-logs false
Appending Selenium option: --structured-logs false
Appending Selenium option: --reject-unsupported-caps false
Appending Selenium option: --session-request-timeout 300
Appending Selenium option: --session-retry-interval 15
Appending Selenium option: --healthcheck-interval 120
Appending Selenium option: --relax-checks true
Appending Selenium option: --bind-host false
Appending Selenium option: --config /opt/selenium/config.toml
Appending Selenium option: --tracing false
Tracing is disabled
Using JAVA_OPTS: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/selenium/logs  -Dwebdriver.remote.enableTracing=false -Dwebdriver.httpclient.version=HTTP_1_1
18:37:14.488 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
18:37:14.492 INFO [LoggingOptions.getTracer] - Using null tracer
18:37:14.524 INFO [BoundZmqEventBus.<init>] - XPUB binding to [binding to tcp://*:4442, advertising as tcp://10.0.2.118:4442], XSUB binding to [binding to tcp://*:4443, advertising as tcp://10.0.2.118:4443]
18:37:14.571 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://10.0.2.118:4442 and tcp://10.0.2.118:4443
18:37:14.585 INFO [UnboundZmqEventBus.<init>] - Sockets created
18:37:15.586 INFO [UnboundZmqEventBus.<init>] - Event bus ready
18:37:16.140 INFO [Hub.execute] - Started Selenium Hub 4.29.0 (revision 18ae989): http://10.0.2.118:4444
18:37:18.971 INFO [Node.<init>] - Binding additional locator mechanisms: relative
18:37:19.342 INFO [Node.<init>] - Binding additional locator mechanisms: relative
18:37:19.413 INFO [Node.<init>] - Binding additional locator mechanisms: relative
18:37:19.484 INFO [Node.<init>] - Binding additional locator mechanisms: relative
18:37:19.551 INFO [Node.<init>] - Binding additional locator mechanisms: relative
---SNIP---

Operating System

Ubuntu 24.04 with Docker CE 28.0.1

Docker Selenium version (image tag)

4.29

Selenium Grid chart version (chart version)

No response


@vcc-ehemdal, thank you for creating this issue. We will troubleshoot it as soon as we can.


Info for maintainers

Triage this issue by using labels.

If information is missing, add a helpful comment and then the I-issue-template label.

If the issue is a question, add the I-question label.

If the issue is valid but there is no time to troubleshoot it, consider adding the help wanted label.

If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable G-* label, and it will provide the correct link and auto-close the issue.

After troubleshooting the issue, please add the R-awaiting answer label.

Thank you!

@VietND96
Member

Do you mean healthchecks to ensure the Node is able to register with the Hub successfully?

@vcc-ehemdal
Author

Something like that, yes

@VietND96
Member

Okay, I fixed this in Grid core: the Node status response should now return reliable data for the registration status.
Can you take a look to see whether it makes sense?

@vcc-ehemdal
Author

It makes sense, but the issue we're experiencing is that the healthchecking itself causes the node to not connect as it should. We're not sure what exactly happens, but the node is not happy with being "bombarded" with /opt/bin/check-grid.sh calls.

@VietND96
Member

Hmm, that is strange. AFAIK, registration happens via the event bus (tcp:// ports 4442 and 4443): the Node fires a register event, and once the Node is added, the Hub fires a NodeAdded event back to the Node to finish registration.
The healthchecks hit the HTTP endpoint /status, so I don't think they should cause the interruption.
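The /status probe mentioned above can be exercised independently of check-grid.sh. A minimal sketch of an equivalent readiness check, assuming the Grid responds with the usual {"value":{"ready":...}} JSON shape (the sample response below is trimmed and hypothetical, not captured from this deployment):

```shell
# A live probe would be something like:
#   curl -sf http://selenium-hub:4444/wd/hub/status
# Here we use a trimmed, hypothetical sample of that response instead:
sample_status='{"value":{"ready":true,"message":"Selenium Grid ready."}}'

# A Docker healthcheck only looks at the exit code, so extracting the
# "ready" flag and testing it is sufficient:
if printf '%s' "$sample_status" | grep -q '"ready":true'; then
  echo "grid ready"
else
  echo "grid not ready"
  exit 1
fi
```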

@VietND96
Member

In this case, you can add the env var SE_LOG_LEVEL=FINEST to the Hub and Node to see more details behind the event 18:37:19.551 INFO [Node.<init>] - Binding additional locator mechanisms: relative.
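Applied to the compose file from this issue, the suggestion amounts to adding the variable to both services' environment lists (a sketch of the relevant parts only, not the full file):

```yaml
services:
  chrome:
    environment:
      - SE_LOG_LEVEL=FINEST   # verbose Grid logging on the Node
  selenium-hub:
    environment:
      - SE_LOG_LEVEL=FINEST   # verbose Grid logging on the Hub
```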

@VietND96
Member

Ah wait, can you try removing this entrypoint:
entrypoint: bash -c 'SE_OPTS="--host $$HOSTNAME" /opt/bin/entry_point.sh'
and moving the healthcheck to the Hub?
Your usage is not the usual way.
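Put together, the suggestion is roughly this shape (a sketch of the changed parts only, assuming the rest of the compose file stays as posted, and assuming check-grid.sh ships in the hub image as it does in the node images):

```yaml
services:
  chrome:
    image: selenium/node-chrome:134.0
    # no custom entrypoint: the image's default /opt/bin/entry_point.sh runs
    # (other node settings unchanged; healthcheck removed from the node)

  selenium-hub:
    image: selenium/hub:4.29
    healthcheck:
      # probe the hub locally instead of probing it from every node replica
      test: ["CMD", "/opt/bin/check-grid.sh"]
      start_period: 30s
      interval: 15s
      timeout: 5s
      retries: 4
```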

@vcc-ehemdal
Author

I added the healthcheck to the hub and set SE_LOG_LEVEL=FINEST and here are the logs:

2025-03-19 09:38:45,243 INFO Included extra file "/etc/supervisor/conf.d/selenium-grid-hub.conf" during parsing
2025-03-19 09:38:45,251 INFO RPC interface 'supervisor' initialized
2025-03-19 09:38:45,252 INFO supervisord started with pid 9
2025-03-19 09:38:46,257 INFO spawned: 'selenium-grid-hub' with pid 10
Starting Selenium Grid Hub...
2025-03-19 09:38:46,269 INFO success: selenium-grid-hub entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
Appending Selenium option: --port 4444
Appending Selenium option: --log-level FINEST
Appending Selenium option: --http-logs false
Appending Selenium option: --structured-logs false
Appending Selenium option: --reject-unsupported-caps false
Appending Selenium option: --session-request-timeout 300
Appending Selenium option: --session-retry-interval 15
Appending Selenium option: --healthcheck-interval 120
Appending Selenium option: --relax-checks true
Appending Selenium option: --bind-host false
Appending Selenium option: --config /opt/selenium/config.toml
Appending Selenium option: --tracing false
Tracing is disabled
Using JAVA_OPTS: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/selenium/logs  -Dwebdriver.remote.enableTracing=false -Dwebdriver.httpclient.version=HTTP_1_1
09:38:47.009 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
09:38:47.021 INFO [LoggingOptions.getTracer] - Using null tracer
09:38:47.132 INFO [BoundZmqEventBus.<init>] - XPUB binding to [binding to tcp://*:4442, advertising as tcp://10.0.1.143:4442], XSUB binding to [binding to tcp://*:4443, advertising as tcp://10.0.1.143:4443]
09:38:47.256 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://10.0.1.143:4442 and tcp://10.0.1.143:4443
09:38:47.323 INFO [UnboundZmqEventBus.<init>] - Sockets created
09:38:48.331 INFO [UnboundZmqEventBus.<init>] - Event bus ready
09:38:48.406 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = JMImplementation:type=MBeanServerDelegate
09:38:48.414 FINER [Repository.addMBean] - name = JMImplementation:type=MBeanServerDelegate
09:38:48.417 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object JMImplementation:type=MBeanServerDelegate
09:38:48.432 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered JMImplementation:type=MBeanServerDelegate
09:38:48.493 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.RuntimeImpl
09:38:48.499 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:type=Runtime
09:38:48.502 FINER [Repository.addMBean] - name = java.lang:type=Runtime
09:38:48.505 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:type=Runtime
09:38:48.507 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:type=Runtime
09:38:48.549 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for com.sun.management.internal.HotSpotThreadImpl
09:38:48.555 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:type=Threading
09:38:48.559 FINER [Repository.addMBean] - name = java.lang:type=Threading
09:38:48.561 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:type=Threading
09:38:48.566 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:type=Threading
09:38:48.615 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for com.sun.management.internal.OperatingSystemImpl
09:38:48.619 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:type=OperatingSystem
09:38:48.622 FINER [Repository.addMBean] - name = java.lang:type=OperatingSystem
09:38:48.624 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:type=OperatingSystem
09:38:48.629 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:type=OperatingSystem
09:38:48.651 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryManagerImpl
09:38:48.658 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=CodeCacheManager,type=MemoryManager
09:38:48.659 FINER [Repository.addMBean] - name = java.lang:name=CodeCacheManager,type=MemoryManager
09:38:48.661 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=CodeCacheManager,type=MemoryManager
09:38:48.663 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=CodeCacheManager,type=MemoryManager
09:38:48.664 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryManagerImpl
09:38:48.666 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=Metaspace Manager,type=MemoryManager
09:38:48.670 FINER [Repository.addMBean] - name = java.lang:name=Metaspace Manager,type=MemoryManager
09:38:48.671 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=Metaspace Manager,type=MemoryManager
09:38:48.673 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=Metaspace Manager,type=MemoryManager
09:38:48.684 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.686 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=CodeHeap 'non-nmethods',type=MemoryPool
09:38:48.687 FINER [Repository.addMBean] - name = java.lang:name=CodeHeap 'non-nmethods',type=MemoryPool
09:38:48.689 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=CodeHeap 'non-nmethods',type=MemoryPool
09:38:48.690 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=CodeHeap 'non-nmethods',type=MemoryPool
09:38:48.692 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.694 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=Metaspace,type=MemoryPool
09:38:48.695 FINER [Repository.addMBean] - name = java.lang:name=Metaspace,type=MemoryPool
09:38:48.699 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=Metaspace,type=MemoryPool
09:38:48.701 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=Metaspace,type=MemoryPool
09:38:48.704 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.707 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=CodeHeap 'profiled nmethods',type=MemoryPool
09:38:48.710 FINER [Repository.addMBean] - name = java.lang:name=CodeHeap 'profiled nmethods',type=MemoryPool
09:38:48.711 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=CodeHeap 'profiled nmethods',type=MemoryPool
09:38:48.713 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=CodeHeap 'profiled nmethods',type=MemoryPool
09:38:48.714 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.716 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=Compressed Class Space,type=MemoryPool
09:38:48.717 FINER [Repository.addMBean] - name = java.lang:name=Compressed Class Space,type=MemoryPool
09:38:48.718 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=Compressed Class Space,type=MemoryPool
09:38:48.720 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=Compressed Class Space,type=MemoryPool
09:38:48.721 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.722 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=G1 Eden Space,type=MemoryPool
09:38:48.724 FINER [Repository.addMBean] - name = java.lang:name=G1 Eden Space,type=MemoryPool
09:38:48.726 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=G1 Eden Space,type=MemoryPool
09:38:48.728 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=G1 Eden Space,type=MemoryPool
09:38:48.731 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.732 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=G1 Old Gen,type=MemoryPool
09:38:48.733 FINER [Repository.addMBean] - name = java.lang:name=G1 Old Gen,type=MemoryPool
09:38:48.735 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=G1 Old Gen,type=MemoryPool
09:38:48.748 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=G1 Old Gen,type=MemoryPool
09:38:48.750 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.754 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=G1 Survivor Space,type=MemoryPool
09:38:48.756 FINER [Repository.addMBean] - name = java.lang:name=G1 Survivor Space,type=MemoryPool
09:38:48.758 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=G1 Survivor Space,type=MemoryPool
09:38:48.760 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=G1 Survivor Space,type=MemoryPool
09:38:48.764 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryPoolImpl
09:38:48.767 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=CodeHeap 'non-profiled nmethods',type=MemoryPool
09:38:48.768 FINER [Repository.addMBean] - name = java.lang:name=CodeHeap 'non-profiled nmethods',type=MemoryPool
09:38:48.770 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=CodeHeap 'non-profiled nmethods',type=MemoryPool
09:38:48.773 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=CodeHeap 'non-profiled nmethods',type=MemoryPool
09:38:48.792 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.CompilationImpl
09:38:48.795 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:type=Compilation
09:38:48.797 FINER [Repository.addMBean] - name = java.lang:type=Compilation
09:38:48.798 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:type=Compilation
09:38:48.800 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:type=Compilation
09:38:48.804 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.MemoryImpl
09:38:48.805 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:type=Memory
09:38:48.806 FINER [Repository.addMBean] - name = java.lang:type=Memory
09:38:48.807 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:type=Memory
09:38:48.807 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:type=Memory
09:38:48.813 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.ManagementFactoryHelper$PlatformLoggingImpl
09:38:48.814 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.util.logging:type=Logging
09:38:48.816 FINER [Repository.addMBean] - name = java.util.logging:type=Logging
09:38:48.816 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.util.logging:type=Logging
09:38:48.817 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.util.logging:type=Logging
09:38:48.820 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.ClassLoadingImpl
09:38:48.821 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:type=ClassLoading
09:38:48.822 FINER [Repository.addMBean] - name = java.lang:type=ClassLoading
09:38:48.822 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:type=ClassLoading
09:38:48.823 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:type=ClassLoading
09:38:48.961 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = com.sun.management:type=DiagnosticCommand
09:38:48.962 FINER [Repository.addMBean] - name = com.sun.management:type=DiagnosticCommand
09:38:48.963 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object com.sun.management:type=DiagnosticCommand
09:38:48.964 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered com.sun.management:type=DiagnosticCommand
09:38:48.967 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for com.sun.management.internal.GarbageCollectorExtImpl
09:38:48.968 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=G1 Young Generation,type=GarbageCollector
09:38:48.968 FINER [Repository.addMBean] - name = java.lang:name=G1 Young Generation,type=GarbageCollector
09:38:48.969 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=G1 Young Generation,type=GarbageCollector
09:38:48.970 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=G1 Young Generation,type=GarbageCollector
09:38:48.970 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for com.sun.management.internal.GarbageCollectorExtImpl
09:38:48.971 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=G1 Concurrent GC,type=GarbageCollector
09:38:48.971 FINER [Repository.addMBean] - name = java.lang:name=G1 Concurrent GC,type=GarbageCollector
09:38:48.971 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=G1 Concurrent GC,type=GarbageCollector
09:38:48.972 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=G1 Concurrent GC,type=GarbageCollector
09:38:48.972 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for com.sun.management.internal.GarbageCollectorExtImpl
09:38:48.973 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.lang:name=G1 Old Generation,type=GarbageCollector
09:38:48.973 FINER [Repository.addMBean] - name = java.lang:name=G1 Old Generation,type=GarbageCollector
09:38:48.974 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.lang:name=G1 Old Generation,type=GarbageCollector
09:38:48.974 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.lang:name=G1 Old Generation,type=GarbageCollector
09:38:48.977 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for com.sun.management.internal.HotSpotDiagnostic
09:38:48.978 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = com.sun.management:type=HotSpotDiagnostic
09:38:48.978 FINER [Repository.addMBean] - name = com.sun.management:type=HotSpotDiagnostic
09:38:48.979 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object com.sun.management:type=HotSpotDiagnostic
09:38:48.979 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered com.sun.management:type=HotSpotDiagnostic
09:38:48.986 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.ManagementFactoryHelper$1
09:38:48.992 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.nio:name=mapped,type=BufferPool
09:38:48.992 FINER [Repository.addMBean] - name = java.nio:name=mapped,type=BufferPool
09:38:48.998 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.nio:name=mapped,type=BufferPool
09:38:49.000 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.nio:name=mapped,type=BufferPool
09:38:49.002 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.ManagementFactoryHelper$1
09:38:49.005 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.nio:name=direct,type=BufferPool
09:38:49.005 FINER [Repository.addMBean] - name = java.nio:name=direct,type=BufferPool
09:38:49.008 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.nio:name=direct,type=BufferPool
09:38:49.017 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.nio:name=direct,type=BufferPool
09:38:49.020 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for sun.management.ManagementFactoryHelper$1
09:38:49.022 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = java.nio:name=mapped - 'non-volatile memory',type=BufferPool
09:38:49.023 FINER [Repository.addMBean] - name = java.nio:name=mapped - 'non-volatile memory',type=BufferPool
09:38:49.024 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object java.nio:name=mapped - 'non-volatile memory',type=BufferPool
09:38:49.025 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered java.nio:name=mapped - 'non-volatile memory',type=BufferPool
09:38:49.035 FINER [StandardMBean.getMBeanInfo] - Building MBeanInfo for jdk.management.jfr.FlightRecorderMXBeanImpl
09:38:49.043 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = jdk.management.jfr:type=FlightRecorder
09:38:49.043 FINER [Repository.addMBean] - name = jdk.management.jfr:type=FlightRecorder
09:38:49.047 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object jdk.management.jfr:type=FlightRecorder
09:38:49.049 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered jdk.management.jfr:type=FlightRecorder
09:38:49.081 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = org.seleniumhq.grid:type=Config,name=BaseServerConfig
09:38:49.082 FINER [Repository.addMBean] - name = org.seleniumhq.grid:type=Config,name=BaseServerConfig
09:38:49.085 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object org.seleniumhq.grid:name=BaseServerConfig,type=Config
09:38:49.088 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered org.seleniumhq.grid:type=Config,name=BaseServerConfig
09:38:49.124 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = org.seleniumhq.grid:type=Config,name=NewSessionQueueConfig
09:38:49.125 FINER [Repository.addMBean] - name = org.seleniumhq.grid:type=Config,name=NewSessionQueueConfig
09:38:49.125 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object org.seleniumhq.grid:name=NewSessionQueueConfig,type=Config
09:38:49.126 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered org.seleniumhq.grid:type=Config,name=NewSessionQueueConfig
09:38:49.162 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = org.seleniumhq.grid:type=SessionQueue,name=LocalSessionQueue
09:38:49.164 FINER [Repository.addMBean] - name = org.seleniumhq.grid:type=SessionQueue,name=LocalSessionQueue
09:38:49.165 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object org.seleniumhq.grid:name=LocalSessionQueue,type=SessionQueue
09:38:49.165 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered org.seleniumhq.grid:type=SessionQueue,name=LocalSessionQueue
09:38:49.209 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = org.seleniumhq.grid:type=Distributor,name=LocalDistributor
09:38:49.210 FINER [Repository.addMBean] - name = org.seleniumhq.grid:type=Distributor,name=LocalDistributor
09:38:49.214 FINER [DefaultMBeanServerInterceptor.registerWithRepository] - Send create notification of object org.seleniumhq.grid:name=LocalDistributor,type=Distributor
09:38:49.214 FINER [DefaultMBeanServerInterceptor.sendNotification] - JMX.mbean.registered org.seleniumhq.grid:type=Distributor,name=LocalDistributor
09:38:49.986 FINER [DefaultMBeanServerInterceptor.registerDynamicMBean] - ObjectName = org.seleniumhq.grid:type=Config,name=BaseServerConfig
09:38:49.986 FINER [Repository.addMBean] - name = org.seleniumhq.grid:type=Config,name=BaseServerConfig
09:38:50.043 DEBUG [MultithreadEventLoopGroup.<clinit>] - -Dio.netty.eventLoopThreads: 4
09:38:50.058 DEBUG [GlobalEventExecutor.<clinit>] - -Dio.netty.globalEventExecutor.quietPeriodSeconds: 1
09:38:50.097 DEBUG [InternalThreadLocalMap.<clinit>] - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
09:38:50.098 DEBUG [InternalThreadLocalMap.<clinit>] - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
09:38:50.237 DEBUG [PlatformDependent0.explicitNoUnsafeCause0] - -Dio.netty.noUnsafe: false
09:38:50.246 DEBUG [PlatformDependent0.javaVersion0] - Java version: 21
09:38:50.249 DEBUG [PlatformDependent0.<clinit>] - sun.misc.Unsafe.theUnsafe: available
09:38:50.260 DEBUG [PlatformDependent0.<clinit>] - sun.misc.Unsafe base methods: all available
09:38:50.272 DEBUG [PlatformDependent0.<clinit>] - sun.misc.Unsafe.storeFence: available
09:38:50.308 DEBUG [PlatformDependent0.<clinit>] - java.nio.Buffer.address: available
09:38:50.324 DEBUG [PlatformDependent0.<clinit>] - direct buffer constructor: unavailable
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
	at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31)
	at io.netty.util.internal.PlatformDependent0$5.run(PlatformDependent0.java:332)
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:319)
	at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:325)
	at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:334)
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:90)
	at io.netty.channel.nio.NioEventLoop.<clinit>(NioEventLoop.java:84)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:182)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:38)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:52)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:97)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:92)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:73)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:53)
	at org.openqa.selenium.netty.server.NettyServer.<init>(NettyServer.java:102)
	at org.openqa.selenium.grid.TemplateGridServerCommand$1.<init>(TemplateGridServerCommand.java:51)
	at org.openqa.selenium.grid.TemplateGridServerCommand.asServer(TemplateGridServerCommand.java:50)
	at org.openqa.selenium.grid.commands.Hub.execute(Hub.java:240)
	at org.openqa.selenium.grid.TemplateGridCommand.lambda$configure$4(TemplateGridCommand.java:122)
	at org.openqa.selenium.grid.Main.launch(Main.java:83)
	at org.openqa.selenium.grid.Main.go(Main.java:56)
	at org.openqa.selenium.grid.Main.main(Main.java:41)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at org.openqa.selenium.grid.Bootstrap.runMain(Bootstrap.java:77)
	at org.openqa.selenium.grid.Bootstrap.main(Bootstrap.java:70)
09:38:50.338 DEBUG [PlatformDependent0.<clinit>] - java.nio.Bits.unaligned: available, true
09:38:50.340 DEBUG [PlatformDependent0.<clinit>] - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable
java.lang.IllegalAccessException: class io.netty.util.internal.PlatformDependent0$7 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @9f70c54
	at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:394)
	at java.base/java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:714)
	at java.base/java.lang.reflect.Method.invoke(Method.java:571)
	at io.netty.util.internal.PlatformDependent0$7.run(PlatformDependent0.java:468)
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:319)
	at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:459)
	at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:334)
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:90)
	at io.netty.channel.nio.NioEventLoop.<clinit>(NioEventLoop.java:84)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:182)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:38)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:52)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:97)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:92)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:73)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:53)
	at org.openqa.selenium.netty.server.NettyServer.<init>(NettyServer.java:102)
	at org.openqa.selenium.grid.TemplateGridServerCommand$1.<init>(TemplateGridServerCommand.java:51)
	at org.openqa.selenium.grid.TemplateGridServerCommand.asServer(TemplateGridServerCommand.java:50)
	at org.openqa.selenium.grid.commands.Hub.execute(Hub.java:240)
	at org.openqa.selenium.grid.TemplateGridCommand.lambda$configure$4(TemplateGridCommand.java:122)
	at org.openqa.selenium.grid.Main.launch(Main.java:83)
	at org.openqa.selenium.grid.Main.go(Main.java:56)
	at org.openqa.selenium.grid.Main.main(Main.java:41)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at org.openqa.selenium.grid.Bootstrap.runMain(Bootstrap.java:77)
	at org.openqa.selenium.grid.Bootstrap.main(Bootstrap.java:70)
09:38:50.378 DEBUG [PlatformDependent0.<clinit>] - java.nio.DirectByteBuffer.<init>(long, {int,long}): unavailable
09:38:50.380 DEBUG [PlatformDependent.unsafeUnavailabilityCause0] - sun.misc.Unsafe: available
09:38:50.385 DEBUG [PlatformDependent.tmpdir0] - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
09:38:50.387 DEBUG [PlatformDependent.bitMode0] - -Dio.netty.bitMode: 64 (sun.arch.data.model)
09:38:50.394 DEBUG [PlatformDependent.<clinit>] - -Dio.netty.maxDirectMemory: -1 bytes
09:38:50.399 DEBUG [PlatformDependent.<clinit>] - -Dio.netty.uninitializedArrayAllocationThreshold: -1
09:38:50.409 DEBUG [CleanerJava9.<clinit>] - java.nio.ByteBuffer.cleaner(): available
09:38:50.410 DEBUG [PlatformDependent.<clinit>] - -Dio.netty.noPreferDirect: false
09:38:50.425 DEBUG [NioEventLoop.<clinit>] - -Dio.netty.noKeySetOptimization: false
09:38:50.425 DEBUG [NioEventLoop.<clinit>] - -Dio.netty.selectorAutoRebuildThreshold: 512
09:38:50.437 DEBUG [PlatformDependent$Mpsc.<clinit>] - org.jctools-core.MpscChunkedArrayQueue: available
09:38:50.442 FINEST [NioEventLoop.openSelector] - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@62315f22
09:38:50.453 FINEST [NioEventLoop.openSelector] - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@39ce27f2
09:38:50.455 FINEST [NioEventLoop.openSelector] - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@5f2afe62
09:38:50.457 FINEST [NioEventLoop.openSelector] - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@c96a4ea
09:38:50.461 FINEST [NioEventLoop.openSelector] - instrumented a special java.util.Set into: sun.nio.ch.EPollSelectorImpl@28782602
09:38:50.516 DEBUG [DefaultChannelId.<clinit>] - -Dio.netty.processId: 11 (auto-detected)
09:38:50.529 DEBUG [NetUtil.<clinit>] - -Djava.net.preferIPv4Stack: false
09:38:50.532 DEBUG [NetUtil.<clinit>] - -Djava.net.preferIPv6Addresses: false
09:38:50.537 DEBUG [NetUtilInitializations.determineLoopback] - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
09:38:50.540 DEBUG [NetUtil$SoMaxConnAction.run] - /proc/sys/net/core/somaxconn: 4096
09:38:50.547 DEBUG [DefaultChannelId.<clinit>] - -Dio.netty.machineId: 02:42:0a:ff:fe:00:01:8f (auto-detected)
09:38:50.568 DEBUG [ResourceLeakDetector.<clinit>] - -Dio.netty.leakDetection.level: simple
09:38:50.571 DEBUG [ResourceLeakDetector.<clinit>] - -Dio.netty.leakDetection.targetRecords: 4
09:38:50.609 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.numHeapArenas: 4
09:38:50.616 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.numDirectArenas: 4
09:38:50.616 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.pageSize: 8192
09:38:50.617 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.maxOrder: 9
09:38:50.617 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.chunkSize: 4194304
09:38:50.618 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.smallCacheSize: 256
09:38:50.621 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.normalCacheSize: 64
09:38:50.623 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
09:38:50.626 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.cacheTrimInterval: 8192
09:38:50.627 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
09:38:50.628 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.useCacheForAllThreads: false
09:38:50.628 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
09:38:50.629 DEBUG [PooledByteBufAllocator.<clinit>] - -Dio.netty.allocator.disableCacheFinalizersForFastThreadLocalThreads: false
09:38:50.659 DEBUG [ByteBufUtil.<clinit>] - -Dio.netty.allocator.type: pooled
09:38:50.660 DEBUG [ByteBufUtil.<clinit>] - -Dio.netty.threadLocalDirectBufferSize: 0
09:38:50.661 DEBUG [ByteBufUtil.<clinit>] - -Dio.netty.maxThreadLocalCharBufferSize: 16384
09:38:50.666 DEBUG [ChannelInitializerExtensions.getExtensions] - -Dio.netty.bootstrap.extensions: null
09:38:50.706 DEBUG [LoggingHandler.channelRegistered] - [id: 0x93e8c1f8] REGISTERED
09:38:50.712 DEBUG [LoggingHandler.bind] - [id: 0x93e8c1f8] BIND: 0.0.0.0/0.0.0.0:4444
09:38:50.719 INFO [Hub.execute] - Started Selenium Hub 4.29.0 (revision 18ae989): http://10.0.1.143:4444
09:38:50.723 DEBUG [LoggingHandler.channelActive] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] ACTIVE
09:38:55.506 DEBUG [LoggingHandler.channelRead] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ: [id: 0xe13a97d2, L:/[0:0:0:0:0:0:0:1]:4444 - R:/[0:0:0:0:0:0:0:1]:40080]
09:38:55.510 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ COMPLETE
09:38:55.558 DEBUG [AbstractByteBuf.<clinit>] - -Dio.netty.buffer.checkAccessible: true
09:38:55.559 DEBUG [AbstractByteBuf.<clinit>] - -Dio.netty.buffer.checkBounds: true
09:38:55.561 DEBUG [ResourceLeakDetectorFactory$DefaultResourceLeakDetectorFactory.newResourceLeakDetector] - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@68c5cd52
09:38:55.591 DEBUG [ZlibCodecFactory.<clinit>] - -Dio.netty.noJdkZlibDecoder: false
09:38:55.592 DEBUG [ZlibCodecFactory.<clinit>] - -Dio.netty.noJdkZlibEncoder: false
09:38:55.623 DEBUG [Recycler.<clinit>] - -Dio.netty.recycler.maxCapacityPerThread: 4096
09:38:55.623 DEBUG [Recycler.<clinit>] - -Dio.netty.recycler.ratio: 8
09:38:55.626 DEBUG [Recycler.<clinit>] - -Dio.netty.recycler.chunkSize: 32
09:38:55.626 DEBUG [Recycler.<clinit>] - -Dio.netty.recycler.blocking: false
09:38:55.627 DEBUG [Recycler.<clinit>] - -Dio.netty.recycler.batchFastThreadLocalOnly: true
09:38:55.658 DEBUG [RequestConverter.channelRead0] - Incoming message: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:38:55.662 DEBUG [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:38:55.680 DEBUG [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
09:38:55.681 DEBUG [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
09:38:55.827 DEBUG [RequestConverter.channelInactive] - Channel became inactive.
09:39:00.933 DEBUG [LoggingHandler.channelRead] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ: [id: 0x4b5ebd8a, L:/[0:0:0:0:0:0:0:1]:4444 - R:/[0:0:0:0:0:0:0:1]:40090]
09:39:00.935 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ COMPLETE
09:39:00.942 DEBUG [RequestConverter.channelRead0] - Incoming message: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:00.943 DEBUG [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:00.947 DEBUG [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
09:39:00.948 DEBUG [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
09:39:00.969 DEBUG [RequestConverter.channelInactive] - Channel became inactive.
09:39:16.087 DEBUG [LoggingHandler.channelRead] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ: [id: 0xf83d753c, L:/[0:0:0:0:0:0:0:1]:4444 - R:/[0:0:0:0:0:0:0:1]:55450]
09:39:16.088 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ COMPLETE
09:39:16.099 DEBUG [RequestConverter.channelRead0] - Incoming message: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:16.100 DEBUG [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:16.101 DEBUG [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
09:39:16.104 DEBUG [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
09:39:16.111 DEBUG [RequestConverter.channelInactive] - Channel became inactive.
09:39:31.279 DEBUG [LoggingHandler.channelRead] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ: [id: 0xa758f85f, L:/[0:0:0:0:0:0:0:1]:4444 - R:/[0:0:0:0:0:0:0:1]:35426]
09:39:31.281 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ COMPLETE
09:39:31.288 DEBUG [RequestConverter.channelRead0] - Incoming message: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:31.290 DEBUG [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:31.292 DEBUG [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
09:39:31.294 DEBUG [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
09:39:31.304 DEBUG [RequestConverter.channelInactive] - Channel became inactive.
09:39:46.412 DEBUG [LoggingHandler.channelRead] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ: [id: 0x54ab35eb, L:/[0:0:0:0:0:0:0:1]:4444 - R:/[0:0:0:0:0:0:0:1]:57638]
09:39:46.413 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ COMPLETE
09:39:46.415 DEBUG [RequestConverter.channelRead0] - Incoming message: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:46.416 DEBUG [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:39:46.418 DEBUG [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
09:39:46.418 DEBUG [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
09:39:46.423 DEBUG [RequestConverter.channelInactive] - Channel became inactive.
09:40:01.532 DEBUG [LoggingHandler.channelRead] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ: [id: 0x945564c0, L:/[0:0:0:0:0:0:0:1]:4444 - R:/[0:0:0:0:0:0:0:1]:38176]
09:40:01.536 DEBUG [RequestConverter.channelRead0] - Incoming message: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:40:01.537 DEBUG [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
GET /wd/hub/status HTTP/1.1
Host: localhost:4444
User-Agent: curl/8.5.0
Accept: */*
Authorization: Basic Og==
09:40:01.540 DEBUG [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
09:40:01.542 DEBUG [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
09:40:01.537 DEBUG [LoggingHandler.channelReadComplete] - [id: 0x93e8c1f8, L:/[0:0:0:0:0:0:0:0]:4444] READ COMPLETE
09:40:01.554 DEBUG [RequestConverter.channelInactive] - Channel became inactive.
Trapped SIGTERM/SIGINT/x so shutting down supervisord...
2025-03-19 09:40:04,562 WARN received SIGTERM indicating exit request
2025-03-19 09:40:04,563 INFO waiting for selenium-grid-hub to die
2025-03-19 09:40:05,564 WARN stopped: selenium-grid-hub (terminated by SIGTERM)
Shutdown complete

The docker-compose.yml file:

services:
  chrome:
    image: selenium/node-chrome:134.0
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_VNC_NO_PASSWORD=true
      - SE_BROWSER_ARGS_DISABLE_SEARCH_ENGINE=--disable-search-engine-choice-screen
      - SE_BROWSER_ARGS_DISABLE_SHM=--disable-dev-shm-usage
      - SE_BROWSER_ARGS_DISABLE_EXTENSIONS=--disable-extensions
      - SE_BROWSER_ARGS_START_MAXIMIZED=--start-maximized
      - SE_BROWSER_ARGS_DISABLE_BREAKPAD=--disable-breakpad
      - SE_LOG_LEVEL=FINEST
    deploy:
      replicas: 3
      restart_policy:
        condition: any
      placement:
        max_replicas_per_node: 1
      resources:
        limits:
          memory: 2000M
    healthcheck:
      test: ["CMD", "/opt/bin/check-grid.sh", "--host", "selenium-hub"]
      start_period: 15s
      interval: 15s
      timeout: 5s
      retries: 5

  selenium-hub:
    image: selenium/hub:4.29
    ports:
      - "4442:4442"
      - "4443:4443"
      - "8080:4444"
      - "4444:4444"
    deploy:
      replicas: 1
      restart_policy:
        condition: any
    environment:
      - SE_LOG_LEVEL=FINEST
    healthcheck:
      test: ["CMD", "/opt/bin/check-grid.sh"]
      start_period: 15s
      interval: 15s
      timeout: 5s
      retries: 5

Deployed to the swarm using docker stack deploy -c docker-compose.yml grid --detach=true
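
For manual troubleshooting, the same endpoint the healthcheck script queries can also be probed directly from inside a node container (assumption: check-grid.sh ultimately curls the hub's /wd/hub/status endpoint and inspects its ready flag; jq ships in the docker-selenium images):

```shell
# Probe the grid status endpoint directly; prints "true" once the
# grid is ready to accept sessions (assumption about the payload shape).
curl -m 3 -sSL http://selenium-hub:4444/wd/hub/status | jq -r '.value.ready'
```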

@vcc-ehemdal (Author):

BTW: a healthcheck on the hub will most likely fail anyway, since I think the hub only reports healthy once at least one node is connected to it. So the hub needs a different healthcheck than the nodes.
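
If that's the case, one possible hub-only check is to verify merely that the hub process answers on /status, rather than that the whole grid is ready. A sketch, assuming the hub returns HTTP 200 on /status even before any node has registered:

```yaml
selenium-hub:
  healthcheck:
    # Liveness-style check: passes as soon as the hub responds,
    # independent of node registration (assumption about /status behavior).
    test: ["CMD-SHELL", "curl -sf http://localhost:4444/status > /dev/null || exit 1"]
    start_period: 15s
    interval: 15s
    timeout: 5s
    retries: 5
```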

@vcc-ehemdal (Author):

Hmm, strange: running with the healthcheck on the node but without the entrypoint override works locally (on a Docker Swarm spanning three VirtualBox instances), with the hub running without a healthcheck.

@vcc-ehemdal (Author):

Hmm, I no longer think this is a bug.

Removing the entrypoint override entrypoint: bash -c 'SE_OPTS="--host $$HOSTNAME" /opt/bin/entry_point.sh' fixed it.

I'm not sure what it was meant to accomplish, since a colleague added it.
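
For what it's worth, if the override was meant to make each node register itself under its container hostname, the docker-selenium images expose a dedicated variable for that instead of rewriting SE_OPTS. A sketch, assuming SE_NODE_HOST is honored by this image version (the value shown is hypothetical):

```yaml
chrome:
  environment:
    # Hypothetical: register the node under an explicit hostname instead
    # of overriding SE_OPTS in a custom entrypoint.
    - SE_NODE_HOST=chrome-node-1
```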
