
Findings to run Windows worker runners #1233

zillemarco opened this issue Jan 28, 2025 · 9 comments


@zillemarco (Contributor)

I'm using this issue to jot down my findings while setting up this module to manage our Windows-based runners so that we can maybe add these to the docs.

Private key is necessary

As mentioned in #1172 (comment), the private key is necessary in order for the manager to connect to the worker nodes.

This is already being addressed by #1232 and, once merged, it will be configured using:

module "runner" {
  runner_worker = {
    use_private_key = true
  }
}

Change the default user to Administrator

To connect to the worker nodes, the default user is set to ec2-user. In my case, the AMI was set up to use the Administrator user (I'm not sure about other AMIs).

This can be configured using:

module "runner" {
  runner_worker_docker_autoscaler = {
    connector_config_user = "Administrator"
  }
}

Enable the PowerShell path resolver

This is mentioned in the first example of the Docker Autoscaler executor documentation (https://docs.gitlab.com/runner/executors/docker_autoscaler.html#example-aws-autoscaling-for-1-job-per-instance):

  # uncomment for Windows AMIs when the Runner manager is hosted on Linux
  # environment = ["FF_USE_POWERSHELL_PATH_RESOLVER=1"]

This can be configured using:

module "runner" {
  runner_worker = {
    environment_variables = [
      "FF_USE_POWERSHELL_PATH_RESOLVER=1"
    ]
  }
}

Enable public IPs for the worker nodes

I'm not really sure why, but without a public IP the jobs would always fail with this error:

ERROR: Job failed: failed to pull image "<audit>" with specified policies [always]: Error response from daemon: Get "https://<audit>/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (manager.go:251:15s)

To fix this I enabled public IPs with this configuration:

module "runner" {
  runner_worker_docker_autoscaler_instance = {
    private_address_only = false
  }
}
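The timeout suggests the worker instances simply had no outbound route to reach the registry: pulling an image needs internet (or VPC endpoint) access, and an instance on a private subnet without NAT has neither. If public IPs are not acceptable in your environment, routing the worker subnets through a NAT gateway should achieve the same result while keeping the workers on private addresses. This is a sketch I have not verified with this module, and aws_subnet.public / aws_route_table.workers are placeholder names for resources you'd already have:

resource "aws_eip" "nat" {
  domain = "vpc"
}

# NAT gateways must live in a public subnet
resource "aws_nat_gateway" "runner_workers" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id # placeholder: an existing public subnet
}

# Default route for the private worker subnets via the NAT gateway
resource "aws_route" "workers_internet" {
  route_table_id         = aws_route_table.workers.id # placeholder: the workers' route table
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.runner_workers.id
}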

Change the cache path (if cache is enabled)

By default, the cache is set up to point to the /cache path, which is not valid on Windows. Changing it to C:/cache fixed it; this can be configured with these settings:

module "runner" {
  runner_worker_docker_options = {
    volumes = ["C:/cache"]
  }
}

Note: I tried setting it to C:\cache (more Windows-like), but that failed as well. Windows accepts slashes both ways, so it's not a huge deal.

Disable privileged mode

By default, privileged mode is enabled, but Windows does not support it, so I had to disable it.

To do that, this configuration can be used:

module "runner" {
  runner_worker_docker_options = {
    privileged = false
  }
}
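Putting it all together, the Windows-specific part of the configuration boils down to something like this (module source, version, and the other required inputs are omitted; use_private_key assumes #1232 has been merged):

module "runner" {
  # ... source, version, VPC/subnet settings, etc. ...

  runner_worker = {
    use_private_key = true
    environment_variables = [
      "FF_USE_POWERSHELL_PATH_RESOLVER=1"
    ]
  }

  runner_worker_docker_autoscaler = {
    connector_config_user = "Administrator"
  }

  runner_worker_docker_autoscaler_instance = {
    private_address_only = false
  }

  runner_worker_docker_options = {
    privileged = false
    volumes    = ["C:/cache"]
  }
}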
@zillemarco (Contributor Author)

Hi @kayman-mk, these are all the settings I found to be necessary to make the module work with Windows runners 🙂

If you think these findings are applicable to (almost) everyone, I think it'd be beneficial to include them somewhere in the docs since, with docker-autoscaler, managing Windows runners is now a thing with this module 🙂

@kayman-mk (Collaborator) commented Jan 30, 2025

Nice documentation!

What about adding an example configuration for that?

@zillemarco (Contributor Author)

@kayman-mk providing an example configuration won't be an issue; I have mine working, so it's just a matter of stripping out the unnecessary/proprietary bits 🙂

For the documentation, I'm not too sure where to put it 🤔 I mean, should we just add a "Windows" section to the usage.md file?

@kayman-mk (Collaborator)

Seems to be a good place as the other examples are also referenced there.

zillemarco changed the title from "FIndings to run Windows worker runners" to "Findings to run Windows worker runners" on Jan 30, 2025
@pysiekytel

Hello @zillemarco! Thanks for the hints above.

Could you also provide a full configuration for Windows runners using this module? I'm curious how you achieved that.

@zillemarco (Contributor Author)


Hi @pysiekytel, what do you mean? A full TF example of how I set up the module, an example of how I set up the Windows AMI for the runners, or both? 🙂

@pysiekytel


Both 😅 just looking for inspiration on how people maintain this :) It doesn't have to be the full code for AMI creation, just some hints about what you're using and how you then make use of it here in the module :)

@zillemarco (Contributor Author)


Sure, I have scheduled some time tomorrow to write the documentation/example I mentioned last month 😅 Sorry, I've been swamped by a lot of things at work and this totally slipped through 😅

In the meantime: to create the AMI I used Packer with a very basic setup. I'll see if I can share the Packer script in a usable way 🙂
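In case it helps as a starting point before then: a bare-bones Packer template for a Windows runner AMI could look like the sketch below. To be clear, this is not my actual script; the region, the ECS-optimized source AMI filter (picked on the assumption that it ships with Docker preinstalled), and the WinRM bootstrap script are all placeholders to adapt:

packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.2"
    }
  }
}

source "amazon-ebs" "windows_runner" {
  region         = "eu-west-1"            # placeholder
  instance_type  = "m5.large"
  ami_name       = "gitlab-runner-windows-{{timestamp}}"
  communicator   = "winrm"
  winrm_username = "Administrator"
  winrm_use_ssl  = true
  winrm_insecure = true
  user_data_file = "bootstrap_winrm.ps1"  # hypothetical script that enables WinRM for Packer

  source_ami_filter {
    filters = {
      name = "Windows_Server-2022-English-Full-ECS_Optimized-*" # assumption: Docker preinstalled
    }
    owners      = ["amazon"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.windows_runner"]

  # extra provisioning (tools, hardening, ...) goes here
  provisioner "powershell" {
    inline = ["docker --version"] # sanity check that Docker is present
  }
}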

@zillemarco (Contributor Author)

@pysiekytel sorry for the delay, I have opened #1268 to add the example for both the module configuration and a Packer script to create the Windows AMI. You can have a look at that whilst it's getting reviewed 🙂
