# Multi-Cluster centralized hub-spoke topology

This example uses IAM Roles for Service Accounts (IRSA): the ArgoCD instance on the Hub cluster assumes an IAM Role that can, in turn, assume a per-spoke IAM Role with access to each remote cluster.

This example deploys ArgoCD on the Hub cluster (i.e. the management/control-plane cluster).
The spoke clusters are registered as remote clusters in the Hub cluster's ArgoCD.
The ArgoCD instance on the Hub cluster deploys addons and workloads to the spoke clusters.

Each spoke cluster gets an app-of-apps ArgoCD Application named `workloads-${env}`.

## Prerequisites
Before you begin, make sure you have the following command line tools installed:
- git
- terraform
- kubectl
- argocd

## Fork the Git Repositories

### Fork the Addon GitOps Repo
1. Fork the git repository for addons [here](https://github.com/gitops-bridge-dev/gitops-bridge-argocd-control-plane-template).
2. Update the following environment variables to point to your fork by changing the default values:
```shell
export TF_VAR_gitops_addons_org=https://github.com/gitops-bridge-dev
export TF_VAR_gitops_addons_repo=gitops-bridge-argocd-control-plane-template
```
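
For example, if your fork lives on GitHub under an org or user called `my-org` (a placeholder; substitute your own), the variables would look like the following sketch:

```shell
# Placeholder values -- replace "my-org" with your GitHub org/user
export TF_VAR_gitops_addons_org=https://github.com/my-org
export TF_VAR_gitops_addons_repo=gitops-bridge-argocd-control-plane-template
```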

## Deploy the Hub EKS Cluster
Change directory to `hub`:
```shell
cd hub
```
Initialize Terraform and deploy the EKS cluster:
```shell
terraform init
terraform apply -auto-approve
```
Retrieve the `kubectl` config command, then execute the output command:
```shell
terraform output -raw configure_kubectl
```
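
The output is an `aws eks update-kubeconfig` command along these lines (the region and cluster name below are assumptions; use exactly what your output prints):

```shell
# Example only -- your Terraform output may use a different region and cluster name
aws eks --region us-west-2 update-kubeconfig --name hub-spoke-control-plane
```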

### Monitor GitOps Progress for Addons
Wait until **all** the ArgoCD applications' `HEALTH STATUS` is `Healthy`. Use Ctrl+C to exit the `watch` command.
```shell
watch kubectl get applications -n argocd
```

## Access ArgoCD on Hub Cluster
To access the ArgoCD UI, run the following command and follow the instructions in its output:
```shell
terraform output -raw access_argocd
```
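
Since the `argocd` CLI is a prerequisite, you can also log in from the terminal. This is a minimal sketch that assumes the default `argocd-server` service and the standard initial admin secret are in place; adapt it if your setup manages the admin password differently:

```shell
# Port-forward the ArgoCD API server locally (assumes the default service name)
kubectl port-forward svc/argocd-server -n argocd 8080:443 &
# Print the initial admin password (assumes the standard initial admin secret exists)
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d
# Log in with the admin user; --insecure skips TLS verification on the forwarded port
argocd login localhost:8080 --username admin --insecure
```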

## Verify that the ArgoCD Service Accounts have the annotation for IRSA
```shell
kubectl get sa -n argocd argocd-application-controller -o json | jq '.metadata.annotations."eks.amazonaws.com/role-arn"'
kubectl get sa -n argocd argocd-server -o json | jq '.metadata.annotations."eks.amazonaws.com/role-arn"'
```
The output should be the ARN of the Hub IAM Role that will assume the IAM Roles in the spoke/remote clusters:
```text
"arn:aws:iam::0123456789:role/hub-spoke-control-plane-argocd-hub"
```
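
Optionally, inspect that role from the AWS CLI; its trust policy should allow the ArgoCD service accounts to assume it through the cluster's OIDC provider. The role name below is taken from the example ARN above, so adjust it to match your own output:

```shell
# Role name taken from the example ARN above; adjust to match your output
aws iam get-role \
  --role-name hub-spoke-control-plane-argocd-hub \
  --query 'Role.AssumeRolePolicyDocument'
```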

## Deploy the Spoke EKS Clusters
Initialize Terraform and deploy the EKS clusters:
```shell
cd ../spokes
./deploy.sh dev
./deploy.sh staging
./deploy.sh prod
```
Each environment uses its own Terraform workspace.

To access the Terraform outputs, select the workspace for the environment you are interested in and run `terraform output`:
```shell
terraform workspace select dev
terraform output
```
```shell
terraform workspace select staging
terraform output
```
```shell
terraform workspace select prod
terraform output
```

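To see which workspaces exist and which one is currently selected, `terraform workspace list` is handy; the three environments above should each appear:

```shell
# The active workspace is marked with an asterisk
terraform workspace list
```
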
Retrieve the `kubectl` config command for the selected workspace, then execute the output command:
```shell
terraform output -raw configure_kubectl
```

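Later steps switch between the Hub and the Spoke clusters, so it helps to know which kubeconfig contexts you now have. A minimal sketch (`<hub-cluster-context-name>` is a placeholder; `update-kubeconfig` typically names contexts after the cluster ARN):

```shell
# List the contexts added by the update-kubeconfig commands
kubectl config get-contexts
# Switch back to the Hub cluster whenever a step says "run on Hub Cluster context"
kubectl config use-context <hub-cluster-context-name>
```
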
### Verify that the ArgoCD Cluster Secret for each Spoke has the correct IAM Role to be assumed by the Hub Cluster (run on Hub Cluster context)
```shell
kubectl get secret -n argocd hub-spoke-dev --template='{{index .data.config | base64decode}}'
```
Do the same for the other clusters, replacing `dev` in `hub-spoke-dev` with `staging` and `prod`.
The output has an `awsAuthConfig` section with the `clusterName` and the `roleARN` that has write access to the spoke cluster:
```json
{
  "tlsClientConfig": {
    "insecure": false,
    "caData" : "LS0tL...."
  },
  "awsAuthConfig" : {
    "clusterName": "hub-spoke-dev",
    "roleARN": "arn:aws:iam::0123456789:role/hub-spoke-dev-argocd-spoke"
  }
}
```
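
To confirm the trust relationship from the AWS side, you can inspect the spoke role's trust policy; it should allow the Hub ArgoCD role shown earlier to assume it. The role name below comes from the example `roleARN` above; repeat for the staging and prod roles:

```shell
# Role name taken from the example roleARN above; repeat for staging and prod
aws iam get-role \
  --role-name hub-spoke-dev-argocd-spoke \
  --query 'Role.AssumeRolePolicyDocument'
```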

### Verify the Addons on Spoke Clusters
Verify that the addons are ready (run this against each Spoke cluster's context):
```shell
kubectl get deployment -n kube-system \
  metrics-server
```
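
If you prefer a check that blocks until the addon is actually available, `rollout status` is a minimal alternative:

```shell
# Waits until the metrics-server deployment is fully rolled out (or times out)
kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
```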

### Monitor GitOps Progress for Workloads from Hub Cluster (run on Hub Cluster context)
Watch until **all** the Workloads ArgoCD Applications' `HEALTH STATUS` is `Healthy`. Use Ctrl+C to exit the `watch` command.
```shell
watch kubectl get -n argocd applications
```
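
Because the workload Applications follow the `workloads-${env}` naming convention, you can narrow the watch to just those apps, or block until everything reports `Healthy`. A minimal sketch (assumes kubectl v1.23+ for JSONPath-based waits):

```shell
# Show only the per-environment workload apps (workloads-dev, workloads-staging, workloads-prod)
kubectl get applications -n argocd | grep workloads
# Block until every Application reports Healthy, with a generous timeout
kubectl wait applications --all -n argocd \
  --for=jsonpath='{.status.health.status}'=Healthy --timeout=10m
```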

### Verify the Application
Verify that the application configuration is present and the pod is running (run this against each Spoke cluster's context):
```shell
kubectl get all -n workload
```

### Container Metrics
Check the application's CPU and memory metrics:
```shell
kubectl top pods -n workload
```
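
`kubectl top` is served by the metrics-server addon verified earlier; to break usage down per container, add `--containers`:

```shell
# Per-container CPU and memory usage for the workload pods
kubectl top pods -n workload --containers
```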

## Destroy the Spoke EKS Clusters
To tear down all the resources and the spoke EKS clusters, run the following commands:
```shell
./destroy.sh dev
./destroy.sh staging
./destroy.sh prod
```

## Destroy the Hub EKS Cluster
To tear down all the resources and the Hub EKS cluster, run the following commands:
```shell
cd ../hub
./destroy.sh
```