Load-balancer for virtual machines on bare metal Kubernetes clusters #837
Conversation
Skipping CI for Draft Pull Request.
Force-pushed from 6a3eb5c to 1ba52f1.
/hold still needs a few adjustments, but structure-wise this is sound.
## Introduction

Over the last year, kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application
Suggested change:
-Over the last year, kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application
+Over the last year, KubeVirt has integrated with MetalLB in order to support fault-tolerant access to an application

I believe that when we refer to the KubeVirt project, we use the CapitalSnakeCase
Done
## Introduction

Over the last year, kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application
through an external IP address. As a Cluster administrator using a bare-metal cluster, now it's possible to add MetalLB
Isn't the use-case for MetalLB on-premise clusters? Bare-metal servers can be deployed in a public cloud too.
@fedepaol please keep me honest here
Both "on premise" and "baremetal" may be non 100% correct (because there can be distributions deploy-able on premise with their own network loadbalancer), but on premise is probably better. If we want to be accurate, we can say "on premise without a load balancer", or "on premise clusters normally do not come with a network loadbalancer" or something along that line.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Perhaps we need to be more accurate, but AFAIR, according to the MetalLB page:
> MetalLB is a load-balancer implementation for bare metal [Kubernetes](https://kubernetes.io/) clusters, using standard routing protocols.

So I think we're not mis-advertising here, no?
I actually did not see @fedepaol 's comment when I posted mine on top of it. I'll change it accordingly of course :)
## Introducing MetalLB

[MetalLB](https://metallb.universe.tf/) allows you to create Kubernetes services of type `LoadBalancer`, and provides network load-balancer
implementation in clusters that don’t run on a cloud provider, such as bare-metals.
bare-metals can run on a cloud provider
fixed.
- Layer 2 mode (ARP/NDP)
Layer 2 mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and
another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to
the local network. This method announces the IPs in ARP (for IPV4) and NDP (for IPV6) protocols over the network. From a
Suggested change:
-the local network. This method announces the IPs in ARP (for IPV4) and NDP (for IPV6) protocols over the network. From a
+the local network. This method announces the IPs in ARP (for IPv4) and NDP (for IPv6) protocols over the network. From a
Done
the local network. This method announces the IPs in ARP (for IPV4) and NDP (for IPV6) protocols over the network. From a
network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.

- BGP mode
This renders as a single paragraph "BGP mode BGP mode provides..."
yep I'll change it.. thanks!
## Demo: VirtualMachine with external IP and MetalLB load balancer

With the following recipe we will end up with a nginx server running on a virtualMachine, accessible outside the cluster
using MetalLB loadBalancer.
Suggested change:
-using MetalLB loadBalancer.
+using MetalLB load balancer.
done.
### Demo environment setup

We are going to use [kind](https://kind.sigs.k8s.io) provider as an ephemeral Kubernetes cluster.
To start it up follow this [guide](https://kind.sigs.k8s.io/docs/user/quick-start)
Suggested change:
-To start it up follow this [guide](https://kind.sigs.k8s.io/docs/user/quick-start)
+To start it up follow this [guide](https://kind.sigs.k8s.io/docs/user/quick-start).
Done
### Demo environment setup

We are going to use [kind](https://kind.sigs.k8s.io) provider as an ephemeral Kubernetes cluster.
That's a very long guide, do I need to follow the Advanced part? Or build my own images?
Please mention which parts of the quick start I should follow.
fixed.
### Spin up a Virtual Machine running Nginx

Now it's time to start-up a virtualMachine running nginx, using this yaml:
The colon is confusing here
removed.
EOF
```

### Expose the virtualMachine with a typed LoaBalancer service
Suggested change:
-### Expose the virtualMachine with a typed LoaBalancer service
+### Expose the VirtualMachine with a typed LoaBalancer service
already changed to virtual machine
Force-pushed from de22f5a to 48b93b9.
Force-pushed from 48b93b9 to f2495f5.
/hold cancel

Force-pushed from f2495f5 to dea5202.
Overall looks very good, and is easy to understand.
The section about the ARP configuration mode is a bit convoluted IMO, and I could not follow it.
## Introducing MetalLB

[MetalLB](https://metallb.universe.tf/) allows you to create Kubernetes services of type `LoadBalancer`, and provides network load-balancer implementation in clusters that don’t run on a cloud provider, such as bare-metals.
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 working modes: Layer 2 and BGP modes:
Suggested change:
-MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 working modes: Layer 2 and BGP modes:
+MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes: Layer 2 and BGP:
done
This mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to the local network. This method announces the IPs in ARP (for Ipv4) and NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.

- BGP mode
This mode provides a real load-balancing behavior, By establishing BGP peering sessions with the network routers. They in turn advertise the external IPs of the `LoadBalancer` service, distributing the load over the nodes.
Suggested change:
-This mode provides a real load-balancing behavior, By establishing BGP peering sessions with the network routers. They in turn advertise the external IPs of the `LoadBalancer` service, distributing the load over the nodes.
+This mode provides real load-balancing behavior, by establishing BGP peering sessions with the network routers - which advertise the external IPs of the `LoadBalancer` service, distributing the load over the nodes.
done
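For readers curious what BGP mode looks like in practice, here is a minimal sketch of the legacy MetalLB ConfigMap for a BGP peer. The peer address and ASNs are placeholders rather than values from this post, and the demo below sticks to Layer 2 mode.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    # Hypothetical peer - replace with your router's address and ASNs.
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: bgp-sample-pool
      protocol: bgp
      addresses:
      - 192.168.10.0/24
EOF
```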
#### Installing MetalLB on the cluster

There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator). Note
that you will need `cluster-admin` privileges.
Would it make sense to have a section before with prerequirements? Having this privilege would be one of them.
done
There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator). Note
that you will need `cluster-admin` privileges.
You can confirm the operator is installed by entering the following command:
I think it would be preferable to block until it is in the correct phase, something like:
kubectl wait -nmy-metallb-operator csv metallb-operator.v0.12.0 \
--for=jsonpath='{.status.phase}'=Succeeded --timeout=2m
I actually think both options are not so great. The reason is that after installing the MetalLB operator, it takes a few moments for the csv to appear, so you'll get
$ kubectl wait -nmy-metallb-operator csv metallb-operator.v0.12.0 --for=jsonpath='{.status.phase}'=Succeeded --timeout=2m
Error from server (NotFound): clusterserviceversions.operators.coreos.com "metallb-operator.v0.12.0" not found
Moreover, your suggestion will break as soon as they do a new operator release.
I think we should keep it, as the confirmation command kubectl get csv -n my-metallb-operator I mentioned actually comes from the MetalLB installation guide, so I figure they know best.
> Moreover, your suggestion will break as soon as they do a new operator release.

Using a variable would address that - and still, having the version of something as part of its name is .... not a smart move.

> I think we should keep it, as the confirmation command kubectl get csv -n my-metallb-operator I mentioned actually comes from the MetalLB installation guide, so I figure they know best..

I'm just trying to make the reader's life easier: use a blocking command that unblocks when state == desiredState is more comfortable than repeating a command until the reader sees what's expected, but feel free to discard this opinion; I won't insist.
> Moreover, your suggestion will break as soon as they do a new operator release.
>
> Using a variable would address that - and still, having the version of something as part of its name is .... not a smart move.
>
> I think we should keep it, as the confirmation command kubectl get csv -n my-metallb-operator I mentioned actually comes from the MetalLB installation guide, so I figure they know best..
>
> I'm just trying to make the reader's life easier: use a blocking command that unblocks when state == desiredState is more comfortable than repeating a command until the reader sees what's expected, but feel free to discard this opinion; I won't insist.
Using the version is indeed... a choice... but it's not in our hands.
I do agree that wait is cooler, but I still prefer the "get csv" option in this particular case. I will add a line that it may take a few seconds for the CSV to appear. Hope that's ok by you.
It's fine.
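To bridge the two options discussed above, a rough sketch that first polls until a MetalLB CSV shows up and then blocks on its phase. The namespace and the name filter are taken from the commands in this thread and may need adjusting to your installation.

```
# Poll until the operator's CSV exists (it can take a few seconds to appear),
# then block until it reports the Succeeded phase.
until kubectl get csv -n my-metallb-operator -o name 2>/dev/null | grep -q metallb-operator; do
  sleep 5
done
csv=$(kubectl get csv -n my-metallb-operator -o name | grep metallb-operator)
kubectl wait -n my-metallb-operator "$csv" --for=jsonpath='{.status.phase}'=Succeeded --timeout=2m
```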
EOF
```
> Notes:
> - If you're running a bare-metal cluster in a colocation factory, you need to first reserve this IP Address pool from your hosting provider. Alternatively, if you're running on a purely private cluster, you can use one of the private IP Address spaces (a.k.a RFC1918 addresses). Such addresses are free, and work fine as long as you’re only providing cluster services to your LAN.
What is a colocation factory? I would expect an explanation alongside the term whenever it is introduced.
I would also drop the word purely. It adds nothing imo.
It's basically the opposite of a private cluster you spin up on a machine, where you have control over the entire network segment.
A colocation factory means you share the network with others (clusters, hosts, etc.), and so you need to approach your (factory) network admin to give you a lease of IPs for MetalLB to use.
An example of this is our cnv.eng cluster - you can't just grab a range of IPs and call it yours; you need to ask for a lease, otherwise the DHCP can give it to one of the machines, and you'll be in trouble =).
I rephrased and moved to prerequirements, tell me if it's more understandable now
#!/bin/bash
echo "fedora" |passwd fedora --stdin
sudo yum install -y nginx
sudo systemctl enable nginx
sudo systemctl start nginx
I would prefer to see a cloud config instead of a bash script snippet:
Suggested change:
-#!/bin/bash
-echo "fedora" |passwd fedora --stdin
-sudo yum install -y nginx
-sudo systemctl enable nginx
-sudo systemctl start nginx
+#cloud-config
+password: fedora
+chpasswd: { expire: False }
+packages:
+  - nginx
+runcmd:
+  - [ "systemctl", "enable", "--now", "nginx" ]
Like! done
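For context, a minimal sketch of how that cloud-config could be embedded in the KubeVirt VirtualMachine used in this demo. The VM name, memory request and container disk image are assumptions; the cloud-config body and the metallb-service: nginx label come from this review.

```
kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: nginx-vm                     # hypothetical name
spec:
  running: true
  template:
    metadata:
      labels:
        metallb-service: nginx       # matched by the LoadBalancer service selector
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: containerdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest   # assumed Fedora container disk
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            packages:
              - nginx
            runcmd:
              - [ "systemctl", "enable", "--now", "nginx" ]
EOF
```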
allocateLoadBalancerNodePorts: true
type: LoadBalancer
ipFamilyPolicy: SingleStack
sessionAffinity: None
nit: This is the default; it can be omitted.
only sessionAffinity?
allocateLoadBalancerNodePorts also defaults to true: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#service-v1-core
If I were you, I'd check that none of them are just the defaults.
ack, thanks!
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 working modes: Layer 2 and BGP modes:

- Layer 2 mode (ARP/NDP)
This mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to the local network. This method announces the IPs in ARP (for Ipv4) and NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.
Suggested change:
-This mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to the local network. This method announces the IPs in ARP (for Ipv4) and NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.
+This mode - which does not implement Load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as service owner. This configuration mode makes the IPs reachable from the local network.
+This method announces the IPs using ARP (for Ipv4) or NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.
I am a bit confused about the second paragraph though ... I unfortunately cannot provide any assistance in re-writing it.
rephrased, tell me what you think.
Force-pushed from 08510b7 to 9a4ede2.
## Introduction

Over the last year, Kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application through an external IP address.
As a Cluster administrator using an on-prem cluster without a network load balancer, now it's possible to add MetalLB operator and gain a load-balancer capabilities (with Services of type LoadBalancer) to virtual machines.
Suggested change:
-As a Cluster administrator using an on-prem cluster without a network load balancer, now it's possible to add MetalLB operator and gain a load-balancer capabilities (with Services of type LoadBalancer) to virtual machines.
+As a Cluster administrator using an on-prem cluster without a network load balancer, now it's possible to use MetalLB operator to provide load-balancer capabilities (with Services of type LoadBalancer) to virtual machines.
Done
- You should have `cluster-admin` privileges on the cluster.
- IP Address allocation:
How you get IP address pools for MetalLB depends on your environment:
I find this sentence strangely positioned. Are you sure you want this to read: "IP Address allocation: How you get IP address pools for MetalLB depends on your environment:"?
rephrased. what do you think?
Force-pushed from 9a4ede2 to f66c3e9.
Short, concise, and useful.
Thank you.
Force-pushed from f66c3e9 to 19ff2b6.
- You should have `cluster-admin` privileges on the cluster.
- Getting IP Address pools allocation for MetalLB depends on your environment:
  - If you're running a bare-metal cluster in a shared host environment, you need to first reserve this IP Address pool from your hosting provider.
  - Alternatively, if you're running on a private cluster, you can use one of the private IP Address spaces (a.k.a RFC1918 addresses). Such addresses are free, and work fine as long as you’re only providing cluster services to your LAN.
I tried following the guide and this is where I got stuck.
I followed the Installation section, so now I have kind. But now the blog post speaks about cluster-admin and an allocated pool. Are these requirements for kind? How do I apply them? Why is it a subsection of "Demo environment setup"?
People should be able to follow through the blog-post simply by copy-pasting examples, either from here or from some other documentation.
Please stick to what is required to get the kind demo cluster running here (including the note from below "https://kind.sigs.k8s.io/docs/user/loadbalancer/"). If these pre-requirements are for a real (non-kind) cluster, it should be clearly stated and outside "Demo environment setup".
You're right, these pre-requirements certainly do not fit here. Removed
Regarding the note - I will add only the necessary things needed for this demo.
Done.
- 172.18.1.1-172.18.1.16
EOF
```
> Note: Since this demo is using a kind cluster, we want this range to be on the docker kind network. For more information, see [link](https://kind.sigs.k8s.io/docs/user/loadbalancer/)
This guide describes how to install MetalLB. It is confusing to list this under a section that waits for MetalLB to get installed
I agree, this deserves a proper section. Done
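For reference, the pool that range belongs to would look roughly like the sketch below in MetalLB's legacy ConfigMap format; the pool name and address range come from this thread, while the metallb-system namespace and the config name are MetalLB defaults rather than values quoted here.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: addresspool-sample1
      protocol: layer2
      addresses:
      - 172.18.1.1-172.18.1.16
EOF
```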
### Demo environment setup

We are going to use [kind](https://kind.sigs.k8s.io) provider as an ephemeral Kubernetes cluster.
To start it up follow this [installation guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
This should be in the prerequirements section.
Also, could we change this to "First install kind on your machine following its installation guide". The installation is platform specific and may change, so it makes sense to refer an external source.
However, I would ask to list everything happening after installation here, starting with the commands starting a kind cluster.
If we split it like that, it would be clear "install kind however you want, then come back and we will prepare the environment"
I agree. Done.
This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer's external IP if the IP space is within the docker IP space.
On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind's [Configuration Guide](https://kind.sigs.k8s.io/docs/user/configuration#extra-port-mappings).
Once the kind installation is moved to this section, could we move this paragraph to another subsection "External IPs on macOS and Windows"?
Yes, but I think this subsection should remain under the Prerequirement section.
Done.
This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer's external IP if the IP space is within the docker IP space.
On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind's [Configuration Guide](https://kind.sigs.k8s.io/docs/user/configuration#extra-port-mappings).
Suggested change:
+### Deploying cluster
+... How to start kind
Done
docker network inspect -f '{{.IPAM.Config}}' kind
```

You should get the subclass you can set the ip range from. The output should contain a cidr such as 172.18.1.0/16
Could you explain how this pool relates to my docker installation, and is this different on macOS, windows?
The pool uses the IP space provided by kind on docker; this is only relevant for Linux.
macOS and Windows will need to set up their own network, as mentioned in the External IPs on macOS and Windows section.
I will rephrase to make that clearer, tell me what you think.
#### Setting Address Pool to be used by the LoadBalancer

In order to complete the Layer 2 mode configuration, we need to set a range of IP Addresses for the LoadBalancer to use. We want this range to be on the docker kind network, so by using this command:
Suggested change:
-In order to complete the Layer 2 mode configuration, we need to set a range of IP Addresses for the LoadBalancer to use. We want this range to be on the docker kind network, so by using this command:
+In order to complete the Layer 2 mode configuration, we need to set a range of IP addresses for the LoadBalancer to use. We want this range to be on the docker kind network, so by using this command:
Done
docker network inspect -f '{{.IPAM.Config}}' kind
```

You should get the subclass you can set the ip range from. The output should contain a cidr such as 172.18.1.0/16
Suggested change:
-You should get the subclass you can set the ip range from. The output should contain a cidr such as 172.18.1.0/16
+You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.1.0/16
Done
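As a rough illustration of what the command returns (exact values are host-specific, and an IPv6 entry may also appear), the output looks something like the comment below, and the MetalLB range is then carved out of the IPv4 subnet:

```
docker network inspect -f '{{.IPAM.Config}}' kind
# Example output (values vary per host):
# [{172.18.0.0/16  172.18.0.1 map[]}]
# Pick an unused slice of that space for MetalLB, e.g. 172.18.1.1-172.18.1.16
```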
Force-pushed from 2ada556 to 4ce0e1f.
Worked for me, some minor comments. I'm yet to re-review the text
kind create cluster
```

Then to start using the cluster:
How is the following command using the cluster?
Mmm.. well, if you have more than one cluster, you need to choose the right one in order to interact with it (see here).
Let me rephrase it.
#### Installing MetalLB on the cluster

There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator).
Suggested change:
-There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator).
+There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator) and click the Install button.
Done
#### Installing MetalLB on the cluster

There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator).
The OLM 0.21.1 installation command fails for me on a fresh kind cluster with
The CustomResourceDefinition "clusterserviceversions.operators.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
strange.. lemme try installing again
Yes, I see it. OLM has an open issue operator-framework/operator-lifecycle-manager#2767 on this.
Well, it's not blocking or anything, so I'm on the fence here. Do you think we should install MetalLB a different way?
I stand corrected, it's not really working.. I'll look for another way to install MetalLB
It failed for me on OLM 0.21.1, but 0.20.0 worked ok
Moved to installing via manifest.
### Access the virtual machine from outside the cluster

Finally, we can check that the nginx server is accessible from outside the cluster:
Could you add a note explaining that it may take a while before the VM becomes available? Or better, show how to wait for it to become available
I added a note, though it is not related to the VM being ready. It looks like the URL simply takes some time to work after setting the service.
@fedepaol do you know why this happens? I wonder if there is something we can "wait" upon being ready and not simply wait for the URL to work.
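One hedged way to handle the "takes some time" note, assuming the metallb-nginx-svc service and port 5678 used elsewhere in this post, is to poll until the external IP starts answering:

```
# Look up the external IP MetalLB assigned, then retry until nginx responds.
EXTERNAL_IP=$(kubectl get service -n default metallb-nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
until curl -s -o /dev/null "${EXTERNAL_IP}:5678"; do
  echo "waiting for ${EXTERNAL_IP}:5678 to respond..."
  sleep 5
done
echo "URL exists"
```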
title: Load-balancer for virtual machines on bare metal Kubernetes clusters
description: This post illustrates setting up a virtual machine with MetalLb loadBalance service.
navbar_active: Blogs
pub-date: April 03
Let's update this, in case we forget to update it later before publishing
Done
layout: post
author: Ram Lavi
title: Load-balancer for virtual machines on bare metal Kubernetes clusters
description: This post illustrates setting up a virtual machine with MetalLb loadBalance service.
Suggested change:
-description: This post illustrates setting up a virtual machine with MetalLb loadBalance service.
+description: This post illustrates setting up a virtual machine with MetalLB loadBalance service.
Done
##### External IPs on macOS and Windows

This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer's external IP if the IP space is within the docker IP space.
On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind's [Configuration Guide](https://kind.sigs.k8s.io/docs/user/configuration#extra-port-mappings).
How would I then configure the address range in the MetalLB ConfigMap?
In short - the same.
This section deals not with the MetalLB ConfigMap, but with IP-choosing considerations.
It's basically saying that if you're not using docker on Linux, then you need to make sure yourself that the IP range is reachable from your PC console.
But once you've made sure that the IP is reachable, configuring the MetalLB ConfigMap is the same.
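For macOS and Windows users, a minimal sketch of the extra port mappings approach mentioned above; the container and host ports are placeholders, and the containerPort would have to match the NodePort the service actually gets.

```
# Write a kind config with an extra port mapping, then create the cluster from it.
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30950   # hypothetical NodePort exposed by the service
    hostPort: 5678         # port reachable on the host
    protocol: TCP
EOF
kind create cluster --config kind-config.yaml
```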
kind create cluster
```

In order to interact with the specific cluster created:
Sorry, I still don't understand what you are suggesting here and how calling cluster-info is useful in the remainder of the post.
I think what's confusing is kind's terminology for clusters.
When using kind, you may have more than one cluster to use:
$ kind get clusters
kind
kind-2
With kubectl cluster-info --context kind-kind you choose to work with the specific cluster you want kubectl to point to.
It's explained here.
I see, so you are calling this to make sure that in case somebody changed the kind context before running this demo, we change it back to the implicit default name "kind-kind". Should we set a specific context name while calling kind create cluster to make the connection clear?
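Following that suggestion, a small sketch of what pinning the cluster name could look like (the name metallb-demo is just an example):

```
kind create cluster --name metallb-demo        # kubectl context becomes "kind-metallb-demo"
kubectl cluster-info --context kind-metallb-demo
kind get clusters                              # lists all kind clusters on this machine
```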
docker network inspect -f '{{.IPAM.Config}}' kind
```

You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.0.0/16
Suggested change:
-You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.0.0/16
+You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.0.0/16.
Done
EOF
```

#### Installing Kubevirt on the cluster
Could you move this under "Installing components"? That way we would split the post into:
- Intro
- Prerequisites
- Cluster deployment
- Components deployment
- Network resources configuration
- Network utilization
So it would run from the lowest layer to the highest.
Not sure I understand you. #### Installing Kubevirt on the cluster is under ### Installing components, isn't it?
Nevermind. Not sure what I meant here
I think I do:
- Cluster deployment: this one is creating the kind cluster
- Components deployment: this one is installing MetalLB and KubeVirt in the cluster
- Network resources configuration: this one is configuring the MetalLB pool
- Network utilization: this one is creating the VM and service
Does it make sense? I share the concern @phoracek mentioned in https://github.com/kubevirt/kubevirt.github.io/pull/837/files#r864557203 ... That is part of the cluster configuration, and probably deserves to live in a separate section.
I like it. Done.
Let me know what you think
kubectl get pods -n metallb-system --watch
```

#### Setting Address Pool to be used by the LoadBalancer
This should not be a part of component installation
I agree, but the problem is that then the nesting would be ##### Setting Address Pool to be used by the LoadBalancer, which is not supported CSS-style-wise.
Over the last year, Kubevirt and MetalLB have shown to be powerful duo in order to support fault-tolerant access to an application on virtual machines through an external IP address.
As a Cluster administrator using an on-prem cluster without a network load-balancer, now it's possible to use MetalLB operator to provide load-balancer capabilities (with Services of type `LoadBalancer`) to virtual machines.

## Introducing MetalLB
Suggested change:
-## Introducing MetalLB
+## MetalLB

The headline above already says "Introduction"; saying it again (even though this is not nested) seems redundant.
Done
- First install kind on your machine following its [installation guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
- To use kind, you will also need to [install docker](https://docs.docker.com/install/).

##### External IPs on macOS and Windows
This is so nested that our website does not have a CSS style defined for it. What about removing the prerequirements headline and just keeping its list under the section above? Then you can remove one level of nesting on this one.
I like!
Force-pushed from 50df1f2 to 762f416.
Just some typos. I'm ok with this PR once these are resolved
layout: post
author: Ram Lavi
title: Load-balancer for virtual machines on bare metal Kubernetes clusters
description: This post illustrates setting up a virtual machine with MetalLB loadBalance service.
Suggested change:
-description: This post illustrates setting up a virtual machine with MetalLB loadBalance service.
+description: This post illustrates setting up a virtual machine with MetalLB LoadBalancer service.
Done
## MetalLB

[MetalLB](https://metallb.universe.tf/) allows you to create Kubernetes services of type `LoadBalancer`, and provides network load-balancer implementation in on-prem clusters that don’t run on a cloud provider.
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes: Layer 2 and BGP:
Suggested change:
-MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes: Layer 2 and BGP:
+MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes, Layer 2 and BGP:
Done
- Layer 2 mode (ARP/NDP):

This mode - which actually does not implement real Load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network.
Suggested change:
-This mode - which actually does not implement real Load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network.
+This mode - which actually does not implement real load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network.
Done
#### Installing MetalLB on the cluster

There are [many ways](https://metallb.universe.tf/installation/) to install MetalLB. For the sake of this example, we will install MetalLB via Manifests. To do this, follow this [guide](https://metallb.universe.tf/installation/#installation-by-manifest).
Suggested change:
-There are [many ways](https://metallb.universe.tf/installation/) to install MetalLB. For the sake of this example, we will install MetalLB via Manifests. To do this, follow this [guide](https://metallb.universe.tf/installation/#installation-by-manifest).
+There are [many ways](https://metallb.universe.tf/installation/) to install MetalLB. For the sake of this example, we will install MetalLB via manifests. To do this, follow this [guide](https://metallb.universe.tf/installation/#installation-by-manifest).
Done
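As a sketch of what the manifest-based install boils down to; the version pin and exact file names are assumptions, so verify them against the linked MetalLB installation guide before running.

```
METALLB_VERSION=v0.12.1   # assumed release; check the MetalLB guide for the current one
kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VERSION}/manifests/namespace.yaml"
kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VERSION}/manifests/metallb.yaml"
# wait for the MetalLB pods to come up
kubectl get pods -n metallb-system --watch
```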
EOF
```

### Expose the virtual machine with a typed `LoaBalancer` service
Suggested change:
-### Expose the virtual machine with a typed `LoaBalancer` service
+### Expose the virtual machine with a typed `LoadBalancer` service
Good catch! Done
### Spin up a Virtual Machine running Nginx

Now it's time to start-up a virtual machine running nginx using this yaml:
Suggested change:
-Now it's time to start-up a virtual machine running nginx using this yaml:
+Now it's time to start-up a virtual machine running nginx using the following yaml.
Done
### Expose the virtual machine with a typed `LoaBalancer` service

When creating the `LoadBalancer` typed service, we need to remember annotating the address-pool we want to use
`addresspool-sample1` and also add the selector `metallb-service: nginx`
Suggested change:
-`addresspool-sample1` and also add the selector `metallb-service: nginx`
+`addresspool-sample1` and also add the selector `metallb-service: nginx`:
Done
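Putting the two requirements together, a minimal sketch of such a service: the metallb.universe.tf/address-pool annotation key is MetalLB's standard one, the service name, port 5678 and the pool/selector values come from this post, while targetPort 80 (nginx's default) is an assumption.

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: metallb-nginx-svc
  namespace: default
  annotations:
    metallb.universe.tf/address-pool: addresspool-sample1
spec:
  type: LoadBalancer
  selector:
    metallb-service: nginx
  ports:
  - protocol: TCP
    port: 5678
    targetPort: 80   # assumption: nginx listening on its default port inside the VM
EOF
```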
EOF
```

Notice that the service got assigned with an external IP from the range assigned by the PoolAddress:
Suggested change:
-Notice that the service got assigned with an external IP from the range assigned by the PoolAddress:
+Notice that the service got assigned with an external IP from the range assigned by the address pool:
?
Right, now that we are not using the MetalLB operator, PoolAddress has no meaning.
Done
kubectl get service -n default metallb-nginx-svc
```

Example output
Suggested change:
-Example output
+Example output:
Done
curl -s -o /dev/null 172.18.1.1:5678 && echo "URL exists"
```

Example output
Suggested change:
-Example output
+Example output:
Done
/lgtm @maiqueb could you please take a final look?
/lgtm
/hold
Holding in case you want to re-work the sections. You can remove the hold if you prefer, @RamLavi.
Signed-off-by: Ram Lavi <[email protected]>
/lgtm Thanks.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: phoracek. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
What this PR does / why we need it:
Special notes for your reviewer: