Commit f4601e9: Merge pull request kubernetes#105 from abhinavdahiya/aws-internal
installer: internal private-only AWS OpenShift clusters
2 parents: 53529d7 + d5621ab
1 file changed: 186 additions, 0 deletions
---
title: (AWS) Internal OpenShift clusters
authors:
  - "@abhinavdahiya"
reviewers:
  - "@wking"
  - "@sdodson"
approvers:
  - "@sdodson"
creation-date: 2019-11-08
last-updated: 2019-11-08
status: implemented
superseded-by:
  - "https://docs.google.com/document/d/1IUCqz6AhxNwYDyx9jPcqbsa1-EloWD13OIONK7YP3JE"
---

# aws-internal-openshift-clusters

## Release Signoff Checklist

- [x] Enhancement is `implementable`
- [x] Design details are appropriately documented from clear requirements
- [x] Test plan is defined
- [x] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [openshift/docs]

## Summary

Many customer environments don't require connectivity from the outside world, and users in those environments would prefer not to expose the cluster endpoints to the public. Currently, the OpenShift installer exposes cluster endpoints such as the Kubernetes API and OpenShift Ingress to the Internet. Although most of these endpoints can be made internal after installation with varying degrees of ease, creating OpenShift clusters that are internal by default is highly desirable for users.

## Motivation

### Goals

Install an OpenShift cluster on AWS as internal/private, so that it is accessible only from the user's internal network and not visible to the Internet.

### Non-Goals

No additional isolation from other clusters in the network beyond what is provided by [shared networking][aws-shared-networking].
## Proposal

To create internal clusters, the installer binary needs access to the VPC where the cluster will be created in order to communicate with the cluster's Kubernetes API; therefore, installing to [existing subnets][aws-shared-networking] is required. In addition to network connectivity to the cluster's endpoints, the installer binary must also be able to resolve the newly created DNS records for the cluster.

No public subnets are required, since no public load balancers will be created. And since no public records will be needed, the requirement for a public Route 53 zone matching the `BaseDomain` is relaxed. The installer still creates the private Route 53 zone for the cluster.

### User Stories

#### Story 1

#### Story 2

### Implementation Details/Notes/Constraints

#### API

A new field, `publish`, is added to the InstallConfig. It controls how the user-facing endpoints of the cluster, such as the Kubernetes API and OpenShift Ingress, are exposed. Valid values are `External` (the default) and `Internal`.

#### Resources provided to the installer

The user provides a list of private subnets to the installer and sets `publish` to `Internal`:

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  aws:
    region: us-west-2
    subnets:
    - subnet-1
    - subnet-2
    - subnet-3
publish: Internal
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
```

#### Basedomain

The `baseDomain` will continue to be used for creating the cluster's private Route 53 zone and all the necessary records, but a public Route 53 zone corresponding to the `baseDomain` will **NOT** be required anymore.

The [`DNSes.config.openshift.io`][openshift-api-config-dns] CustomResource's `cluster` object will have its `.spec.publicZone` set to empty. This ensures that the operators do not create any public records for the cluster, as per the API [reference][openshift-api-config-dns-empty-public-zone].
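
As an illustrative sketch (the base domain and private zone ID below are hypothetical placeholders), the resulting `cluster` DNS object would look roughly like:

```yaml
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: test-cluster.example.com
  privateZone:
    id: Z3URY6TWQ91KVV   # hypothetical private Route 53 zone ID
  # publicZone is omitted, so operators create no public DNS records
```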
#### Public Subnets

No public subnets need to be provided to the installer.

#### Access to the Internet

The cluster continues to require access to the Internet.

#### Resources created by the installer

Compared to the fully-IPI flow with a pre-existing VPC, the installer will no longer create:

- Public load balancers for the Kubernetes API (6443) (`aws_lb.api_external`, `aws_lb_target_group.api_external`, `aws_lb_listener.api_external_api`)
- A security group rule that allows 6443 from the Internet
- The public DNS record (`aws_route53_record.api_external`)
- A public IP address associated with the bootstrap host (`associate_public_ip_address: false`)
- A security group rule that allows SSH to the bootstrap host from the Internet (`aws_security_group_rule.ssh`)

The installer will continue to create:

- Private load balancers for the Kubernetes API (6443) and the Machine Config Server (22623)
- DNS records in the private Route 53 zone
- A security group rule that allows SSH to the bootstrap host from inside the VPC

#### Bootstrap instance placement

The bootstrap instance is placed in one of the provided private subnets, whereas in the `External` case it is placed in one of the public subnets.

#### Operators

##### Ingress

The `default` IngressController for a cluster on AWS has its load balancer scope set to `External`, as per the API [reference][ingresscontroller-default]. For `Internal` clusters, however, the `default` IngressController is explicitly set to load balancer scope `Internal`.
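
A sketch of the resulting `default` IngressController for an `Internal` cluster (shown values are illustrative, not generated output):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal   # instead of External, the default on AWS
```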
#### Limitations

- The Kubernetes API endpoint cannot be made public after installation through cluster configuration alone. Users will have to manually choose public subnets from the VPC where the cluster is deployed, create a public load balancer with the control-plane instances as backends, and ensure the control-plane security groups allow traffic from the Internet on 6443 (the Kubernetes API port). Users will also have to make sure they pick public subnets in each Availability Zone, as detailed in the [documentation][public-lb-to-private-instances].

### Risks and Mitigations

#### Public Service type Load Balancers

Creating Services of type LoadBalancer that are public requires users to select public subnets and tag them with `kubernetes.io/cluster/<cluster-infra-id>: shared` so that the cloud provider can use them to create public load balancers. The choice of public subnets is subject to the same restrictions described in the [limitations](#limitations) section.

#### Public OpenShift Ingress

Changing the OpenShift Ingress to be public after installation requires the same extra steps for users as detailed in the previous [section](#public-service-type-load-balancers).

## Design Details

### Test Plan

The containers in the CI cluster that run the installer and the e2e tests will create a VPN connection to a public VPN endpoint in the network deployment in the CI AWS account. This gives those clients transparent access to the otherwise internal endpoints of the newly created cluster.

### Graduation Criteria

This enhancement will follow standard graduation criteria.

#### Dev Preview -> Tech Preview

- Ability to utilize the enhancement end to end
- End user documentation, relative API stability
- Sufficient test coverage
- Gather feedback from users rather than just developers

#### Tech Preview -> GA

- Community testing
- Sufficient time for feedback
- Upgrade testing from 4.3 clusters utilizing this enhancement to later releases
- Downgrade and scale testing are not relevant to this enhancement

**For non-optional features moving to GA, the graduation criteria must include end to end tests.**

### Upgrade / Downgrade Strategy

Not applicable.

### Version Skew Strategy

Not applicable.

## Implementation History

## Drawbacks

Customer-owned networking components mean the cluster cannot automatically alter those components to track evolving best practices. The user owns those components and is responsible for maintaining them.

## Alternatives

## Infrastructure Needed

- A network deployment in the AWS CI account, including a VPC, subnets, NAT gateways, and an Internet gateway, that provides egress to the Internet for instances in private subnets.
- VPN access to the network deployment created above.

[aws-shared-networking]: aws-customer-provided-subnets.md
[ingresscontroller-default]: https://github.com/openshift/api/blob/6feaabc7037a0688eefb36fd9f4618da7d780dda/operator/v1/types_ingress.go#L75
[openshift-api-config-dns]: https://github.com/openshift/api/blob/6feaabc7037a0688eefb36fd9f4618da7d780dda/config/v1/types_dns.go#L23
[openshift-api-config-dns-empty-public-zone]: https://github.com/openshift/api/blob/6feaabc7037a0688eefb36fd9f4618da7d780dda/config/v1/types_dns.go#L35-L40
[public-lb-to-private-instances]: https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
