feat: add option to read Gitlab Runner Registration token from SSM (#822)
## Description
Adds the ability to read the Gitlab registration token from SSM. If no
registration token is passed in, the module looks the token up in SSM
instead. This prevents the token from being leaked as part of the
`user_data`.
```hcl
module "gitlab_runner" {
  # ...
  gitlab_runner_registration_config = {
    registration_token = "" # this is the default value too
    # ...
  }
  secure_parameter_store_gitlab_runner_registration_token_name = "name-of-ssm-parameter-holding-the-registration-token"
}
```
Closes #776
Precondition for #186 to get rid of pre-registered runners.
## Migrations required
NO
## Verification
I modified the runner-default example to not pass in a registration
token and added the token to SSM instead. Then I started up the runner
and confirmed that it successfully registered with Gitlab.
---------
Co-authored-by: Matthias Kay <[email protected]>
The registration token can also be read in via the SSM parameter store. If no registration token is passed in, the module
will look up the token in the SSM parameter store at the location specified by `secure_parameter_store_gitlab_runner_registration_token_name`.

For migration to the new setup, simply add the runner token to the parameter store. Once the runner is started, it will look up the
required values via the parameter store. If the value is `null`, a new runner will be registered and a new token created/stored.
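For the migration, the token can be created in the parameter store with Terraform itself. A minimal sketch using the AWS provider's `aws_ssm_parameter` resource; the `var.registration_token` variable is an assumption for illustration, and the parameter name must match what is passed to the module:

```hcl
# Store the registration token as a SecureString so it never appears in user_data.
resource "aws_ssm_parameter" "gitlab_runner_registration_token" {
  name  = "name-of-ssm-parameter-holding-the-registration-token"
  type  = "SecureString"
  value = var.registration_token # hypothetical variable holding the token
}
```

The equivalent one-off step outside Terraform is `aws ssm put-parameter` with `--type SecureString`.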
### Scenario: Use of Spot Fleet

Since spot instances can be taken over by AWS depending on the instance type and AZ you are using, you may want multiple instance
types in multiple AZs. This is where spot fleets come in: when there is no capacity for one instance type in one AZ, AWS will take
the next instance type, and so on. This has been possible since the [fork](https://gitlab.com/cki-project/docker-machine/-/tree/v0.16.2-gitlab.19-cki.2)
of docker-machine added support for spot fleets.

We have seen that the [fork](https://gitlab.com/cki-project/docker-machine/-/tree/v0.16.2-gitlab.19-cki.2) of docker-machine this
module is using consumes more RAM when using spot fleets.
For comparison, launching 50 machines at the same time consumes ~1.2 GB of RAM. In our case, we had to change the
`instance_type` of the runner from `t3.micro` to `t3.small`.
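A spot fleet setup could look roughly like the sketch below. The input names are assumptions for illustration and should be verified against the module's documented variables:

```hcl
module "runner" {
  # ...
  # Hypothetical inputs; check the module's variable reference for the exact names.
  use_fleet                           = true
  docker_machine_instance_types_fleet = ["t3.medium", "t3a.medium", "m5.large"]

  # Spot fleets consume extra RAM on the runner instance itself,
  # hence the bump from t3.micro to t3.small mentioned above.
  instance_type = "t3.small"
}
```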