This directory contains the terraform configuration necessary to create the infrastructure corresponding to the Single-cluster reference architecture for Gitpod on AWS.
This module performs the following steps:
- Creates the infrastructure using our `eks` terraform module, which:
  - Sets up an EKS managed cluster, along with external dependencies like database, storage and registry (if chosen)
  - Sets up route53 entries for the domain name (if chosen)
- Provisions the cluster:
  - Sets up cert-manager using our `cert-manager` module
  - Sets up external-dns using our `external-dns` module
  - Creates a cluster-issuer using our `issuer` module
💡 If you would like to create the infrastructure by orchestrating the terraform modules yourself, you can find all the modules we support here.
Since the entire setup requires more than one terraform target to be run due to dependencies (e.g. the helm provider depends on the kubernetes cluster config, which is not available until the `eks` module finishes), this directory has a `Makefile` with targets binding together targeted terraform calls. This document will explain the execution of the terraform module in terms of these `make` targets.
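For illustration, these make targets wrap targeted calls along the lines of the following sketch (the module address shown is an assumption; the authoritative list of targets lives in the Makefile):

```
# Hypothetical example of a targeted apply limited to the eks module
terraform apply -target=module.eks
```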
You will need the following tools installed:
- `terraform` >= `v1.1.0`
- `kubectl` >= `v1.20.0`
- `jq`
Before starting the installation process, you need:
- An AWS account with Administrator access
- Setup credentials to be usable in one of the following ways:
  - As environment variables:
    - Copy the file `.env_sample` to `.env` and update the values corresponding to your AWS user. Run: `source .env`
  - As a credentials file
- Create and configure an S3 bucket for the terraform backend (see the sketch after this list):
  - Create an AWS S3 bucket to store the terraform backend state
  - Replace the name of the bucket in `main.tf` - currently it is set as `gitpod-tf`
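For reference, the backend configuration you would be editing in `main.tf` looks roughly like the following sketch (only the bucket name `gitpod-tf` is taken from the current configuration; the `key` and `region` values here are illustrative assumptions):

```
terraform {
  backend "s3" {
    # Replace with the name of the bucket you created; currently set as "gitpod-tf"
    bucket = "gitpod-tf"
    # Assumed values below; keep whatever main.tf already defines
    key    = "gitpod/terraform.tfstate"
    region = "eu-west-1"
  }
}
```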
Update the file `terraform.tfvars` with values that will be used by terraform to create the cluster. While some of them are fairly straightforward, like the name of the cluster (`cluster_name`), others need a bit more attention:

It is necessary to ensure that the VPC setup will not conflict with existing VPCs and has a sufficiently large IP range so as to not run out of addresses. Please check under the region's VPCs if the IP range you are choosing is already in use. The CIDR will be split among 5 subnets, hence we recommend a `/16` prefix to allow sufficient IP ranges. The default value is `10.100.0.0/16`.
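As an illustration, the CIDR is set in `terraform.tfvars` roughly as follows (the variable name `vpc_cidr` is an assumption here; use the name already declared in the variables file):

```
# Illustrative entry; keep the default unless it conflicts with an existing VPC
vpc_cidr = "10.100.0.0/16"
```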
If you wish to create cloud-specific database, storage and registry backend to be used with Gitpod, leave the following 3 booleans set:

```
create_external_database                      = true
create_external_storage                       = true
create_external_storage_for_registry_backend  = true
```

The corresponding resources will be created by the terraform script, which includes an `RDS` mysql database, an `S3` bucket and another `S3` bucket to be used as registry backend. By default `create_external_storage_for_registry_backend` is set to `false`; one can re-use the same `S3` bucket for both object storage and registry backend.
The expectation is that you can use the credentials for these setups (provided later as terraform outputs) when setting up Gitpod via the KOTS UI.
We officially support Ubuntu images for the Gitpod setup. In an EKS cluster, AMI images are kubernetes version and region specific. You can find a list of AMI IDs here. Make sure you provide the corresponding kubernetes version as a value to the variable `cluster_version`. We officially support kubernetes versions >= `1.20`.
If you are already sure of the domain name under which you want to set up Gitpod, we highly recommend providing the value as `domain_name`. This will save a lot of hassle in setting up `route53` records to point to the cluster and the corresponding TLS certificate requests.
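Putting these together, the relevant entries in `terraform.tfvars` could look like the following sketch (all values shown are illustrative placeholders, not defaults; pick the AMI ID matching your region and kubernetes version):

```
cluster_version = "1.22"                    # any supported version >= 1.20
image_id        = "ami-xxxxxxxxxxxxxxxxx"   # hypothetical placeholder; use the AMI ID for your region and version
domain_name     = "gitpod.example.com"      # hypothetical domain
```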
⚠️ We ship 4 terraform modules here and some of them have dependencies among each other (e.g. the `cert-manager` module depends on the `eks` module for the `kubeconfig`, and the `cluster-issuer` module depends on `cert-manager` for the CRD). Hence a simple run of `terraform plan` or `terraform apply` may lead to errors, so we wrap targeted `terraform` operations in the following make targets. If you wish to use `terraform` commands instead, please make sure you look into the Makefile to understand the target order.
- Initialize the terraform backend with:

  ```
  make init
  ```

- Get the plan of the execution to verify the resources that are going to get created:

  ```
  make plan
  ```

  If the plan looks good, now you can go ahead and create the resources:

  ```
  make apply
  ```
The `apply` target calls terraform apply on the `eks` module, `cert-manager` module, `external-dns` module and `cluster-issuer` module, in that exact order. The entire operation would take around 30 minutes to complete.
Upon completion, you will find a kubeconfig file created in the local directory. You can try accessing the cluster using this file as follows:

```
export KUBECONFIG=/path/to/kubeconfig
kubectl get pods -A
```
You can list all the other outputs with the following command:

```
make output
```

💡 Alternatively, you can get the simple JSON output with a `terraform output` command.
Once the apply process has exited successfully, we can go ahead and prepare to set up Gitpod. If you specified the `domain_name` in the `terraform.tfvars` file, the terraform module registers the domain with `route53` to point to the cluster. Now you have to configure whichever provider you use to host your domain name to route traffic to the AWS name servers. You can find these name servers in the `make output` command from above. It would be of the format:
```
Nameservers for the domain(to be added as NS records in your domain provider):
=================
[
  "ns-1444.awsdns-52.org.",
  "ns-1559.awsdns-02.co.uk.",
  "ns-209.awsdns-26.com.",
  "ns-969.awsdns-57.net."
]
```
Add NS records similar to the above 4 entries under your domain's DNS management setup. Check with your domain hosting service for specific information.
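Once the delegation is in place, you can verify that the NS records have propagated using `dig` (here `gitpod.example.com` is a stand-in for your actual `domain_name`):

```
# Should list the awsdns name servers from the make output once delegation has propagated
dig NS gitpod.example.com +short
```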
If you enabled the creation of external database, storage and registry, the above `make output` command would list the credentials to connect to the corresponding resources. Make a note of these, so as to provide the same when setting up Gitpod via the KOTS UI.
You can install the `KOTS` CLI to install Gitpod:

```
curl https://kots.io/install | bash
```

Run the following to get started with the Gitpod installation:

```
export KUBECONFIG=kubeconfig
kubectl kots install gitpod/stable # you can replace `stable` with `unstable` or `beta` as per the requirement
```
Upon completion, you can access the `KOTS` UI at `localhost:8800`. Here you can proceed to configuring and installing Gitpod. Please follow the official documentation to complete the Gitpod setup.
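If you close that session and need to reach the admin console again later, the KOTS CLI can re-open the port-forward (assuming Gitpod was installed into the `gitpod` namespace, as in the examples below):

```
# Re-opens the KOTS admin console on localhost:8800
kubectl kots admin-console --namespace gitpod
```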
Sometimes, the pods deployed when executing `kubectl kots install gitpod` fail to start due to issues with mounting their disks:

```
NAMESPACE     NAME                 READY   STATUS              RESTARTS   AGE
gitpod        kotsadm-minio-0      0/1     ContainerCreating   0          2m28s
gitpod        kotsadm-postgres-0   0/1     Init:0/2            0          2m28s
```

This can happen when the wrong `image_id` was used in the `.tfvars` file. The ID needs to respect both the region as well as the Kubernetes version and can be found here.
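To confirm this, you can inspect the pod events for volume attach or mount failures before correcting `image_id` and re-applying (the pod name below is taken from the example output above):

```
# Look for FailedAttachVolume / FailedMount events in the Events section
kubectl describe pod kotsadm-minio-0 -n gitpod
```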
If the `proxy` pod is stuck in the `Init` state:

```
kubectl get pods -l component=proxy
NAME                     READY   STATUS     RESTARTS   AGE
proxy-5998488f4c-t8vkh   0/1     Init:0/1   0          5m
```
The most likely reason is that the DNS01 challenge has yet to resolve. To fix this, make sure you have added the NS records corresponding to the `route53` zone of the `domain_name` to your domain provider.
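You can also inspect the state of the pending challenge directly through cert-manager's resources (assuming the cert-manager CRDs are installed, as they are by the `cert-manager` module):

```
# Pending ACME challenges should disappear once the NS delegation is correct
kubectl get challenges -A
```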
Once the DNS record has been updated, you will need to delete all cert-manager pods to retrigger the certificate request:

```
kubectl delete pods -n cert-manager --all
```
After a few minutes, you should see the `https-certificates` resource become ready:

```
kubectl get certificate
NAME                 READY   SECRET               AGE
https-certificates   True    https-certificates   5m
```
There is a chance that your kubeconfig has expired after a certain amount of time. You can reconnect to the cluster by using:

```
aws eks --region <region> update-kubeconfig --name <cluster_name>
```
Make sure you first delete the `gitpod` resources in the cluster so that things like the load balancer created by the k8s service get deleted. Otherwise terraform will not be able to delete the VPC.

```
kubectl delete --now namespace gitpod
```

It is okay to ignore any dangling workspaces that are not deleted.
Now run the terraform destroy step to clean up all the cloud resources:

```
make destroy
```

The destroy should take around 20 minutes.