{product-title} can be configured to access an AWS EC2 infrastructure, including using AWS volumes as persistent storage for application data. After AWS is configured, some additional configuration must be completed on the {product-title} hosts.
Important
When installing {product-title} on AWS, ensure that you set up the appropriate security groups. See Configuring a Security Group for details.
In AWS, situations that require overriding the variables include:

Variable | Usage
---|---
hostname | The user is installing in a VPC that is not configured for both DNS host names and DNS resolution.
ip | You have multiple network interfaces configured and want to use one other than the default.
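For illustration, such overrides are typically expressed as host variables in the Ansible inventory. The host name and values below are placeholders only, assuming the openshift_hostname and openshift_public_hostname inventory variables:

master1.example.com openshift_hostname=ip-10-0-1-10.ec2.internal openshift_public_hostname=master.example.com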
In particular, EC2 hosts must be deployed in a VPC that has both DNS host names and DNS resolution enabled.
To set the required AWS variables, create a /etc/origin/cloudprovider/aws.conf file with the following contents on all of your {product-title} hosts, both masters and nodes:
[Global]
Zone = us-east-1c (1)

1. This is the Availability Zone of your AWS Instance and where your EBS Volume resides; this information is obtained from the AWS Management Console.
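If you prefer to read the zone from the host itself rather than the AWS Management Console, one option (a sketch, assuming the EC2 instance metadata service is reachable from the host) is to query the instance metadata endpoint:

$ curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
us-east-1c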
You can set the AWS configuration on {product-title} in two ways:
During cluster installations, AWS can be configured using the openshift_cloudprovider_aws_access_key, openshift_cloudprovider_aws_secret_key, openshift_cloudprovider_kind, and openshift_clusterid parameters, which are configurable in the inventory file.
# Cloud Provider Configuration
#
# Note: You may make use of environment variables rather than store
# sensitive configuration within the ansible inventory.
# For example:
#openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
#openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
#
#openshift_clusterid=unique_identifier_per_availability_zone
#
# AWS (Using API Credentials)
#openshift_cloudprovider_kind=aws
#openshift_cloudprovider_aws_access_key=aws_access_key_id
#openshift_cloudprovider_aws_secret_key=aws_secret_access_key
#
# AWS (Using IAM Profiles)
#openshift_cloudprovider_kind=aws
# Note: IAM roles must exist before launching the instances.
Note
When Ansible configures AWS, it automatically makes the necessary changes to the required configuration files.
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections:
kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/origin/cloudprovider/aws.conf"
  controllerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/origin/cloudprovider/aws.conf"
Currently, the nodeName must match the instance name in AWS in order for the cloud provider integration to work properly. The name must also be RFC1123 compliant.
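As a quick sanity check, you can compare the registered node names against the private DNS names of the instances in AWS. The aws CLI invocation below is an illustrative sketch, not part of the required procedure:

$ oc get nodes -o name
$ aws ec2 describe-instances --query 'Reservations[].Instances[].PrivateDnsName' --output text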
Important
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, the aws.conf file must be in /etc/origin/ instead of /etc/.
Edit the appropriate node configuration map and update the contents of the kubeletArguments section:
kubeletArguments:
  cloud-provider:
    - "aws"
  cloud-config:
    - "/etc/origin/cloudprovider/aws.conf"
Important
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, the aws.conf file must be in /etc/origin/ instead of /etc/.
If you configure AWS provider credentials, you must also ensure that all hosts are labeled.
To correctly identify which resources are associated with a cluster, tag resources with the key kubernetes.io/cluster/<clusterid>, where:

- <clusterid> is a unique name for the cluster.
- Set the corresponding value to owned if the node belongs exclusively to the cluster or to shared if it is a resource shared with other systems.

Tagging all resources with the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag avoids potential issues with multiple zones or multiple clusters.
See Pods and Services to learn more about labeling and tagging in {product-title}.
There are four types of resources that need to be tagged:

- Instances
- Security Groups
- Load Balancers
- EBS Volumes
A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine which resources belong to the AWS cluster. This means that all relevant resources must be labeled with the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag using the same values for that key. These resources include:
- All hosts.
- All relevant load balancers to be used in the AWS instances.
- All EBS volumes. The EBS volumes that need to be tagged can be found with:

  $ oc get pv -o json|jq '.items[].spec.awsElasticBlockStore.volumeID'

- All relevant security groups to be used with the AWS instances.

Note
Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or Elastic Load Balancing (ELB) will not be able to create a load balancer.
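As an illustration only, EC2 resources such as instances, security groups, and EBS volumes can be tagged with the AWS CLI; the resource IDs and cluster name below are placeholders, and the same tags can be applied from the AWS Management Console:

$ aws ec2 create-tags --resources i-0123456789abcdef0 sg-0123456789abcdef0 vol-0123456789abcdef0 \
    --tags Key=kubernetes.io/cluster/mycluster,Value=owned

Load balancers are tagged separately through the Elastic Load Balancing console or CLI rather than ec2 create-tags.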
After tagging any resources, restart the master services on the master and restart the node service on all nodes. See the Applying Configuration Changes section.
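The exact restart commands depend on your {product-title} version and installation method. For example, on a {product-title} 3.10 or later cluster the restart might look like the following sketch; the commands shown are illustrative, and the Applying Configuration Changes section remains the authoritative procedure:

# master-restart api
# master-restart controllers
# systemctl restart atomic-openshift-node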