Load-balancer for virtual machines on bare metal Kubernetes clusters #837


Merged
merged 1 commit into from
May 23, 2022

Conversation

RamLavi
Contributor

@RamLavi RamLavi commented Apr 3, 2022

What this PR does / why we need it:

Special notes for your reviewer:

@kubevirt-bot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@kubevirt-bot kubevirt-bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. size/L labels Apr 3, 2022
@kubevirt-bot kubevirt-bot requested review from jobbler and mazzystr April 3, 2022 14:24
@kubevirt-bot kubevirt-bot added the kind/blog Label for blog entries label Apr 3, 2022
@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from 6a3eb5c to 1ba52f1 Compare April 4, 2022 09:59
@RamLavi RamLavi marked this pull request as ready for review April 4, 2022 09:59
@kubevirt-bot kubevirt-bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 4, 2022
@RamLavi RamLavi changed the title [DNM] Load-balancer for VirtualMachines on bare metal Kubernetes clusters Load-balancer for VirtualMachines on bare metal Kubernetes clusters Apr 4, 2022
@RamLavi
Contributor Author

RamLavi commented Apr 4, 2022

/hold

still needs a few adjustments, but structure wise this is sound

@kubevirt-bot kubevirt-bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 4, 2022

## Introduction

Over the last year, kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application
Member

Suggested change
Over the last year, kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application
Over the last year, KubeVirt has integrated with MetalLB in order to support fault-tolerant access to an application

I believe that when we refer to the KubeVirt project, we use the CapitalSnakeCase

Contributor Author

Done

## Introduction

Over the last year, kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application
through an external IP address. As a Cluster administrator using a bare-metal cluster, now it's possible to add MetalLB
Member

Is not the use-case for MetalLB on-premise clusters? Baremetal servers can be deployed in public cloud too.

@fedepaol please keep me honest here


Both "on premise" and "baremetal" may be non 100% correct (because there can be distributions deploy-able on premise with their own network loadbalancer), but on premise is probably better. If we want to be accurate, we can say "on premise without a load balancer", or "on premise clusters normally do not come with a network loadbalancer" or something along that line.

Contributor Author

@RamLavi RamLavi Apr 5, 2022

perhaps we need to be more accurate, but AFAIR according to metalLB page:

MetalLB is a load-balancer implementation for bare metal [Kubernetes](https://kubernetes.io/) clusters, using standard routing protocols.

So I think we're not mis-advertising here. no?

Contributor Author

I actually did not see @fedepaol 's comment when I posted mine on top of it. I'll change it accordingly of course :)

## Introducing MetalLB

[MetalLB](https://metallb.universe.tf/) allows you to create Kubernetes services of type `LoadBalancer`, and provides network load-balancer
implementation in clusters that don’t run on a cloud provider, such as bare-metals.
Member

bare-metals can run on a cloud provider

Contributor Author

Let's wait to see what @fedepaol says on your previous comment

Contributor Author

fixed.

- Layer 2 mode (ARP/NDP)
Layer 2 mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and
another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to
the local network. This method announces the IPs in ARP (for IPV4) and NDP (for IPV6) protocols over the network. From a
Member

Suggested change
the local network. This method announces the IPs in ARP (for IPV4) and NDP (for IPV6) protocols over the network. From a
the local network. This method announces the IPs in ARP (for IPv4) and NDP (for IPv6) protocols over the network. From a

Contributor Author

Done

the local network. This method announces the IPs in ARP (for IPV4) and NDP (for IPV6) protocols over the network. From a
network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.

- BGP mode
Member

This renders as a single paragraph "BGP mode BGP mode provides..."

Contributor Author

yep I'll change it.. thanks!

## Demo: VirtualMachine with external IP and MetalLB load balancer

With the following recipe we will end up with a nginx server running on a virtualMachine, accessible outside the cluster
using MetalLB loadBalancer.
Member

Suggested change
using MetalLB loadBalancer.
using MetalLB load balancer.

Contributor Author

done.

### Demo environment setup

We are going to use [kind](https://kind.sigs.k8s.io) provider as an ephemeral Kubernetes cluster.
To start it up follow this [guide](https://kind.sigs.k8s.io/docs/user/quick-start)
Member

Suggested change
To start it up follow this [guide](https://kind.sigs.k8s.io/docs/user/quick-start)
To start it up follow this [guide](https://kind.sigs.k8s.io/docs/user/quick-start).

Contributor Author

Done


### Demo environment setup

We are going to use [kind](https://kind.sigs.k8s.io) provider as an ephemeral Kubernetes cluster.
Member

That's a very long guide, do I need to follow the Advanced part? Or build my own images?

Please mention which parts of the quick start should I follow

Contributor Author

fixed.


### Spin up a Virtual Machine running Nginx

Now it's time to start-up a virtualMachine running nginx, using this yaml:
Member

The colon is confusing here

Contributor Author

removed.

EOF
```

### Expose the virtualMachine with a typed LoaBalancer service
Member

Suggested change
### Expose the virtualMachine with a typed LoaBalancer service
### Expose the VirtualMachine with a typed LoaBalancer service

Contributor Author

already changed to virtual machine

@RamLavi RamLavi force-pushed the vm_with_metal_lb branch 2 times, most recently from de22f5a to 48b93b9 Compare April 5, 2022 11:20
@RamLavi RamLavi changed the title Load-balancer for VirtualMachines on bare metal Kubernetes clusters Load-balancer for virtual machines on bare metal Kubernetes clusters Apr 5, 2022
@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from 48b93b9 to f2495f5 Compare April 5, 2022 11:53
@RamLavi
Copy link
Contributor Author

RamLavi commented Apr 5, 2022

/hold cancel

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 5, 2022
@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from f2495f5 to dea5202 Compare April 5, 2022 12:27
Contributor

@maiqueb maiqueb left a comment

Overall looks very good, and is easy to understand.

The section about the ARP configuration mode is a bit convoluted IMO, and I could not follow it.

## Introducing MetalLB

[MetalLB](https://metallb.universe.tf/) allows you to create Kubernetes services of type `LoadBalancer`, and provides network load-balancer implementation in clusters that don’t run on a cloud provider, such as bare-metals.
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 working modes: Layer 2 and BGP modes:
Contributor

Suggested change
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 working modes: Layer 2 and BGP modes:
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes: Layer 2 and BGP:

Contributor Author

done

This mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to the local network. This method announces the IPs in ARP (for Ipv4) and NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.

- BGP mode
This mode provides a real load-balancing behavior, By establishing BGP peering sessions with the network routers. They in turn advertise the external IPs of the `LoadBalancer` service, distributing the load over the nodes.
Contributor

Suggested change
This mode provides a real load-balancing behavior, By establishing BGP peering sessions with the network routers. They in turn advertise the external IPs of the `LoadBalancer` service, distributing the load over the nodes.
This mode provides real load-balancing behavior, by establishing BGP peering sessions with the network routers - which advertise the external IPs of the `LoadBalancer` service, distributing the load over the nodes.

Contributor Author

done
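As background for the two announcement modes discussed in this thread, a minimal sketch of a MetalLB address pool follows. This is an assumption on my part, not part of the PR: it uses the `AddressPool` CRD shipped with the MetalLB operator around v0.12 (the version discussed later in this review); the pool name is illustrative, and the namespace may differ depending on where the operator deploys MetalLB. Switching `protocol` to `bgp` (together with a configured BGP peer) would select the second mode.

```yaml
# Hypothetical Layer 2 address pool for MetalLB (operator ~v0.12 era API).
# The address range matches the one used later in this demo.
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: example-pool        # illustrative name
  namespace: metallb-system # adjust to your MetalLB namespace
spec:
  protocol: layer2          # or "bgp" when peering with routers
  addresses:
    - 172.18.1.1-172.18.1.16
```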

#### Installing MetalLB on the cluster

There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator). Note
that you will need `cluster-admin` privileges.
Contributor

Would it make sense to have a section before with prerequirements ? Having this privilege would be one of them.

Contributor Author

done


There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator). Note
that you will need `cluster-admin` privileges.
You can confirm the operator is installed by entering the following command:
Contributor

I think it would be preferable to block until it is in the correct phase, something like:

```
kubectl wait -nmy-metallb-operator csv metallb-operator.v0.12.0 \
  --for=jsonpath='{.status.phase}'=Succeeded --timeout=2m
```

Contributor Author

@RamLavi RamLavi Apr 6, 2022

I actually think both options are not so great. The reason is that after installing the MetalLB operator, it takes a few moments for the csv to appear, so you'll get

```
$ kubectl wait -nmy-metallb-operator csv metallb-operator.v0.12.0   --for=jsonpath='{.status.phase}'=Succeeded --timeout=2m
Error from server (NotFound): clusterserviceversions.operators.coreos.com "metallb-operator.v0.12.0" not found
```

Moreover, your suggestion will break as soon as they do a new operator release.

I think we should keep it, as the confirmation command `kubectl get csv -n my-metallb-operator` I mentioned actually comes from the MetalLB installation guide, so I figure they know best..

Contributor

> Moreover, your suggestion will break as soon as they do a new operator release.

Using a variable would address that - and still, having the version of something as part of its name is .... not a smart move.

> I think we should keep it, as the confirmation command `kubectl get csv -n my-metallb-operator` I mentioned actually comes from the MetalLB installation guide, so I figure they know best..

I'm just trying to make the reader's life easier: use a blocking command that unblocks when state == desiredState is more comfortable than repeating a command until the reader sees what's expected, but feel free to discard this opinion; I won't insist.

Contributor Author

> > Moreover, your suggestion will break as soon as they do a new operator release.
>
> Using a variable would address that - and still, having the version of something as part of its name is .... not a smart move.
>
> > I think we should keep it, as the confirmation command `kubectl get csv -n my-metallb-operator` I mentioned actually comes from the MetalLB installation guide, so I figure they know best..
>
> I'm just trying to make the reader's life easier: use a blocking command that unblocks when state == desiredState is more comfortable than repeating a command until the reader sees what's expected, but feel free to discard this opinion; I won't insist.

using the version is indeed... a choice... but it's not in our hands..
I do agree that wait is cooler, but still prefer the "get csv" option in this particular case. I will add a line that it may take a few seconds for the CSV to appear. Hope you find it ok by you.

Contributor

It's fine.

EOF
```
> Notes:
> - If you're running a bare-metal cluster in a colocation factory, you need to first reserve this IP Address pool from your hosting provider. Alternatively, if you're running on a purely private cluster, you can use one of the private IP Address spaces (a.k.a RFC1918 addresses). Such addresses are free, and work fine as long as you’re only providing cluster services to your LAN.
Contributor

What is a colocation factory ? I would expect an explanation along the term whenever it is introduced.

I would also drop the word purely. It adds nothing imo.

Contributor Author

@RamLavi RamLavi Apr 6, 2022

it's basically the opposite of a private cluster that you spin up on a machine where you have control over the entire network segment.
colocation facility means you share the network with others (clusters, hosts, etc..), and so you need to approach your (facility) network admin to give you a lease of IPs for metalLB to use.

An example of this is our cnv.eng cluster - you can't just grab a range of IPs and call it yours, you need to ask for a lease otherwise the DHCP can give it to one of the machines, and you'll be in trouble =) .

Contributor Author

@RamLavi RamLavi Apr 6, 2022

I rephrased and moved to prerequirements, tell me if it's more understandable now

Comment on lines 152 to 156
#!/bin/bash
echo "fedora" |passwd fedora --stdin
sudo yum install -y nginx
sudo systemctl enable nginx
sudo systemctl start nginx
Contributor

I would prefer to see a cloud config instead of a bash script snippet:

Suggested change
#!/bin/bash
echo "fedora" |passwd fedora --stdin
sudo yum install -y nginx
sudo systemctl enable nginx
sudo systemctl start nginx
#cloud-config
password: fedora
chpasswd: { expire: False }
packages:
- nginx
runcmd:
- [ "systemctl", "enable", "--now", "nginx" ]

Contributor Author

Like! done
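For context, the cloud-config suggested above would be embedded in the VM manifest's cloud-init volume. The sketch below is an assumption, not the PR's actual manifest: the VM name, labels, memory request, and container-disk image are all illustrative placeholders.

```yaml
# Hypothetical KubeVirt VirtualMachine running nginx via cloud-init.
# Name, labels, image, and sizing are illustrative, not from the PR.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-nginx
spec:
  running: true
  template:
    metadata:
      labels:
        app: nginx            # a Service selector would match this label
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # placeholder image
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
              packages:
                - nginx
              runcmd:
                - [ "systemctl", "enable", "--now", "nginx" ]
```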

allocateLoadBalancerNodePorts: true
type: LoadBalancer
ipFamilyPolicy: SingleStack
sessionAffinity: None
Contributor

nit: This is the default; it can be omitted.

Contributor Author

only sessionAffinity?

Contributor

allocateLoadBalancerNodePorts also defaults to true: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#service-v1-core

If I were you, I'd check all of them are not the defaults.

Contributor Author

ack, thanks!
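Following the point of this thread (omit fields that are already Kubernetes defaults, such as `sessionAffinity: None` and `allocateLoadBalancerNodePorts: true`), a trimmed Service could look like the sketch below. The service name, selector label, and port are my own illustrative choices, not taken from the PR.

```yaml
# Hypothetical LoadBalancer Service with default-valued fields omitted.
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb          # illustrative name
spec:
  type: LoadBalancer      # MetalLB assigns the external IP
  selector:
    app: nginx            # must match the VMI's labels (assumed here)
  ports:
    - port: 80
      targetPort: 80
```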

MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 working modes: Layer 2 and BGP modes:

- Layer 2 mode (ARP/NDP)
This mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to the local network. This method announces the IPs in ARP (for Ipv4) and NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.
Contributor

Suggested change
This mode does not implement Load-balancing behavior provides a failover mechanism where one node owns the `LoadBalancer` service, until the node fails and another node is chosen. In a strict sense, this is not a real load-balancing behavior, but instead it makes the IPs reachable to the local network. This method announces the IPs in ARP (for Ipv4) and NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.
This mode - which does not implement Load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as service owner. This configuration mode makes the IPs reachable from the local network.
This method announces the IPs using ARP (for Ipv4) or NDP (for Ipv6) protocols over the network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface on the chosen node.

I am a bit confused about the second paragraph though ... I unfortunately cannot provide any assistance in re-writing it.

Contributor Author

rephrased, tell me what you think.

@RamLavi RamLavi force-pushed the vm_with_metal_lb branch 2 times, most recently from 08510b7 to 9a4ede2 Compare April 6, 2022 10:40
## Introduction

Over the last year, Kubevirt has integrated with MetalLB in order to support fault-tolerant access to an application through an external IP address.
As a Cluster administrator using an on-prem cluster without a network load balancer, now it's possible to add MetalLB operator and gain a load-balancer capabilities (with Services of type LoadBalancer) to virtual machines.
Contributor

Suggested change
As a Cluster administrator using an on-prem cluster without a network load balancer, now it's possible to add MetalLB operator and gain a load-balancer capabilities (with Services of type LoadBalancer) to virtual machines.
As a Cluster administrator using an on-prem cluster without a network load balancer, now it's possible to use MetalLB operator to provide load-balancer capabilities (with Services of type LoadBalancer) to virtual machines.

Contributor Author

Done


- You should have `cluster-admin` privileges on the cluster.
- IP Address allocation:
How you get IP address pools for MetalLB depends on your environment:
Contributor

I find this sentence strangely positioned. Are you sure you want this to read:

IP Address allocation: How you get IP address pools for MetalLB depends on your environment: ?

Contributor Author

rephrased. what do you think?



@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from 9a4ede2 to f66c3e9 Compare April 7, 2022 08:19
Contributor

@maiqueb maiqueb left a comment

Short, concise, and useful.

Thank you.

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Apr 7, 2022
@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from f66c3e9 to 19ff2b6 Compare April 7, 2022 09:22
@kubevirt-bot kubevirt-bot removed the lgtm Indicates that a PR is ready to be merged. label Apr 7, 2022
Comment on lines 54 to 57
- You should have `cluster-admin` privileges on the cluster.
- Getting IP Address pools allocation for MetalLB depends on your environment:
- If you're running a bare-metal cluster in a shared host environment, you need to first reserve this IP Address pool from your hosting provider.
- Alternatively, if you're running on a private cluster, you can use one of the private IP Address spaces (a.k.a RFC1918 addresses). Such addresses are free, and work fine as long as you’re only providing cluster services to your LAN.
Member

I tried following the guide and this is where I got stuck.

I followed the Installation section, so now I have kind. But now the blog post speaks about cluster-admin and allocated pool. Are these requirements for kind? How do I apply them? Why is it a subsection of "Demo environment setup".

People should by able to follow through the blog-post simply by copy-pasting examples, either from here or from some other documentation.

Please stick to what is required to get the kind demo cluster running here (including the note from below "ttps://kind.sigs.k8s.io/docs/user/loadbalancer/"). If these pre-requirements are for a real (non-kind) cluster, it should be clearly stated and outside "Demo environment setup"

Contributor Author

@RamLavi RamLavi May 1, 2022

You're right, these pre-requirements certainly do not fit here. Removed
Regarding the note - I will add only the necessary things needed for this demo.
Done.

- 172.18.1.1-172.18.1.16
EOF
```
> Note: Since this demo is using a kind cluster, we want this range to be on the docker kind network. For more information, see [link](https://kind.sigs.k8s.io/docs/user/loadbalancer/)
Member

This guide describes how to install MetalLB. It is confusing to list this under a section that waits for MetalLB to get installed

Contributor Author

@RamLavi RamLavi May 1, 2022

I agree, this deserves a proper section. Done

@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from 19ff2b6 to e1a8855 Compare May 1, 2022 11:44
### Demo environment setup

We are going to use [kind](https://kind.sigs.k8s.io) provider as an ephemeral Kubernetes cluster.
To start it up follow this [installation guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
Member

This should be in the prerequirements section.

Also, could we change this to "First install kind on your machine following its installation guide". The installation is platform specific and may change, so it makes sense to refer an external source.

However, I would ask to list everything happening after installation here, starting with the commands starting a kind cluster.

If we split it like that, it would be clear "install kind however you want, then come back and we will prepare the environment"

Contributor Author

I agree. Done.

Comment on lines +54 to +58
This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer's external IP if the IP space is within the docker IP space.
On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind's [Configuration Guide](https://kind.sigs.k8s.io/docs/user/configuration#extra-port-mappings).
Member

Once the kind installation is moved to this section, could we move this paragraph to a another subsection "External IPs on macOS and Windows"?

Contributor Author

Yes, but I think this subsection should remain under the Prerequirements section.
Done.


This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer's external IP if the IP space is within the docker IP space.
On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind's [Configuration Guide](https://kind.sigs.k8s.io/docs/user/configuration#extra-port-mappings).

Member

Suggested change
### Deploying cluster
... How to start kind

Contributor Author

Done

docker network inspect -f '{{.IPAM.Config}}' kind
```

You should get the subclass you can set the ip range from. The output should contain a cidr such as 172.18.1.0/16
Member

Could you explain how this pool relates to my docker installation, and is this different on macOS, windows?

Contributor Author

The pool uses the IP space provided by kind on Docker; this is only relevant for Linux.
macOS and Windows users will need to set up their own network, as mentioned in the External IPs on macOS and Windows section.
I will rephrase to make that clearer, tell me what you think.
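The relationship between the kind docker network CIDR and the pool can be sketched in pure shell. This is only an illustration: the `/16` value is the example from the thread, and the carved-out sub-range is an assumption, not a value mandated by the post.

```shell
# Derive a small MetalLB address range from the CIDR reported by
#   docker network inspect -f '{{.IPAM.Config}}' kind
# The CIDR below is an example value; yours may differ.
CIDR="172.18.0.0/16"

# Keep the first two octets of the network and carve out a small
# sub-range for the load-balancer, away from the node addresses.
BASE=$(echo "$CIDR" | cut -d. -f1-2)
RANGE_START="${BASE}.1.1"
RANGE_END="${BASE}.1.16"
echo "${RANGE_START}-${RANGE_END}"   # prints 172.18.1.1-172.18.1.16
```

Any range inside the docker network that does not collide with the node IPs works here; the point is only that the pool must live in the same IP space the kind nodes use.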


#### Setting Address Pool to be used by the LoadBalancer

In order to complete the Layer 2 mode configuration, we need to set a range of IP Addresses for the LoadBalancer to use. We want this range to be on the docker kind network, so by using this command:
Member

Suggested change
In order to complete the Layer 2 mode configuration, we need to set a range of IP Addresses for the LoadBalancer to use. We want this range to be on the docker kind network, so by using this command:
In order to complete the Layer 2 mode configuration, we need to set a range of IP addresses for the LoadBalancer to use. We want this range to be on the docker kind network, so by using this command:

Contributor Author

Done

docker network inspect -f '{{.IPAM.Config}}' kind
```

You should get the subclass you can set the ip range from. The output should contain a cidr such as 172.18.1.0/16
Member

Suggested change
You should get the subclass you can set the ip range from. The output should contain a cidr such as 172.18.1.0/16
You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.1.0/16

Contributor Author

Done

@RamLavi RamLavi force-pushed the vm_with_metal_lb branch 3 times, most recently from 2ada556 to 4ce0e1f Compare May 3, 2022 08:45
Member

@phoracek phoracek left a comment

Worked for me, some minor comments. I'm yet to re-review the text

kind create cluster
```

Then to start using the cluster:
Member

How is the following command using the cluster?

Contributor Author

@RamLavi RamLavi May 3, 2022

Mmm... well, if you have more than one cluster, you need to choose the right one in order to interact with it (see here).
Let me rephrase it.


#### Installing MetalLB on the cluster

There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator).
Member

Suggested change
There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator).
There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator) and click the Install button.

Contributor Author

Done


#### Installing MetalLB on the cluster

There are many ways to install MetalLB. For the sake of this example, we will install MetalLB operator on the cluster. To do this, please follow this [link](https://operatorhub.io/operator/metallb-operator).
Member

The OLM 0.21.1 installation command fails for me on a fresh kind cluster with

The CustomResourceDefinition "clusterserviceversions.operators.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

Contributor Author

strange.. lemme try installing again

Contributor Author

@RamLavi RamLavi May 3, 2022

Yes, I see it. OLM has an open issue operator-framework/operator-lifecycle-manager#2767 on this.
Well, it's not blocking or anything, so I'm on the fence here. Do you think we should install MetalLB a different way?

Contributor Author

I stand corrected, it's not really working. I'll look for another way to install MetalLB.

Member

It failed for me on OLM 0.21.1, but 0.20.0 worked ok

Contributor Author

Moved to installing via manifest.


### Access the virtual machine from outside the cluster

Finally, we can check that the nginx server is accessible from outside the cluster:
Member

Could you add a note explaining that it may take a while before the VM becomes available? Or better, show how to wait for it to become available

Contributor Author

I added a note, though it is not related to the VM being ready. It looks like the URL simply takes some time to work after setting up the service.
@fedepaol do you know why this happens? I wonder if there is something we can "wait" upon being ready, rather than simply waiting for the URL to work.
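One way to address that ask without a cluster-side readiness signal is a small polling helper rather than retrying curl by hand. This is a sketch, not code from the post; the IP and port in the commented example are the demo's values.

```shell
# Poll a URL until it answers, instead of retrying curl manually.
wait_for_url() {
  local url=$1 retries=${2:-30} delay=${3:-2}
  local i
  for ((i = 0; i < retries; i++)); do
    if curl -s -o /dev/null "$url"; then
      echo "URL exists"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Usage with the demo's external IP and port:
#   wait_for_url 172.18.1.1:5678
```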

title: Load-balancer for virtual machines on bare metal Kubernetes clusters
description: This post illustrates setting up a virtual machine with MetalLb loadBalance service.
navbar_active: Blogs
pub-date: April 03
Member

Let's update this, in case we forget to update it later before publishing

Contributor Author

Done

@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from 4ce0e1f to afeb7bb Compare May 4, 2022 04:47
layout: post
author: Ram Lavi
title: Load-balancer for virtual machines on bare metal Kubernetes clusters
description: This post illustrates setting up a virtual machine with MetalLb loadBalance service.
Member

Suggested change
description: This post illustrates setting up a virtual machine with MetalLb loadBalance service.
description: This post illustrates setting up a virtual machine with MetalLB loadBalance service.

Contributor Author

Done

##### External IPs on macOS and Windows

This demo runs Docker on Linux, which allows sending traffic directly to the load-balancer's external IP if the IP space is within the docker IP space.
On macOS and Windows however, docker does not expose the docker network to the host, rendering the external IP unreachable from other kind nodes. In order to workaround this, one could expose pods and services using extra port mappings as shown in the extra port mappings section of kind's [Configuration Guide](https://kind.sigs.k8s.io/docs/user/configuration#extra-port-mappings).
Member

How would I then configure the address range in the MetalLB ConfigMap?

Contributor Author

In short: the same.

This section deals not with the MetalLB ConfigMap, but with IP selection considerations.
It's basically saying: if you're not using Docker on Linux, then you need to make sure yourself that the IP range is reachable from your machine.
Once you have made sure that the IP range is reachable, configuring the MetalLB ConfigMap is the same.
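For concreteness, a manifest-install MetalLB ConfigMap might look like the following sketch. The pool name matches the one used later in the post, while the address range is an assumed value taken from the kind docker network discussion; both are illustrative, not the post's exact manifest.

```shell
# Layer 2 address-pool ConfigMap for a manifest-based MetalLB install.
# The pool name and the range are assumptions for illustration; adjust
# the range to whatever your kind docker network provides.
CONFIG=$(cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: addresspool-sample1
      protocol: layer2
      addresses:
      - 172.18.1.1-172.18.1.16
EOF
)
echo "$CONFIG"
# Apply it with: echo "$CONFIG" | kubectl apply -f -
```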

kind create cluster
```

In order to interact with the specific cluster created:
Member

Sorry, I still don't understand what you are suggesting here, or how calling cluster-info is useful in the remainder of the post.

Contributor Author

@RamLavi RamLavi May 4, 2022

I think what's confusing is kind's terminology for clusters.
When using kind, you may have more than one cluster to use:

$ kind get clusters
kind
kind-2

With kubectl cluster-info --context kind-kind you choose to work with the specific cluster you want kubectl to point to.
It's explained here.

Member

I see, so you are calling this to make sure that, in case somebody changed the kind context before running this demo, we change it back to the implicit default name "kind-kind". Should we set a specific context name while calling kind create cluster to make the connection clear?
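The naming connection is mechanical: kind prefixes the cluster name with `kind-` to form the kubectl context name. A tiny sketch, with a hypothetical cluster name `demo`:

```shell
# kind derives the kubectl context name as "kind-<cluster name>".
kind_context() {
  echo "kind-$1"
}

# With an explicit cluster name the pairing becomes obvious:
#   kind create cluster --name demo
#   kubectl cluster-info --context "$(kind_context demo)"
kind_context demo   # prints kind-demo
```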

docker network inspect -f '{{.IPAM.Config}}' kind
```

You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.0.0/16
Member

Suggested change
You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.0.0/16
You should get the subclass you can set the IP range from. The output should contain a cidr such as 172.18.0.0/16.

Contributor Author

Done

EOF
```

#### Installing Kubevirt on the cluster
Member

Could you move this under "Installing components"? That way we would split the post into:

  1. Intro
  2. Prerequisites
  3. Cluster deployment
  4. Components deployment
  5. Network resources configuration
  6. Network utilization

So it would run from the lowest layer to the highest.

Contributor Author

Not sure I understand you. #### Installing Kubevirt on the cluster is under ### Installing components, isn't it?

Member

Nevermind. Not sure what I meant here

Contributor

I think I do:

  • Cluster deployment: this one is creating the kind cluster
  • Components deployment: this one is installing metalLB and KubeVirt in the cluster
  • Network resources configuration: this one is configuring the metalLB pool
  • Network utilization: this one is creating the VM and service

Does it make sense? I share the concern @phoracek mentioned in https://github.com/kubevirt/kubevirt.github.io/pull/837/files#r864557203 ... That is part of the cluster configuration, and probably deserves to live in a separate section.

Contributor Author

I like it. Done.
Let me know what you think
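For the components-deployment step discussed above, a manifest-based KubeVirt install can be sketched as below. The pinned version is an assumption (check the KubeVirt releases page for the current one); the wait command follows KubeVirt's documented readiness check.

```shell
# Manifest-based KubeVirt install; the pinned version is an assumption.
KUBEVIRT_VERSION="v0.53.0"
BASE_URL="https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}"

echo "${BASE_URL}/kubevirt-operator.yaml"
echo "${BASE_URL}/kubevirt-cr.yaml"

# On a live cluster:
#   kubectl apply -f "${BASE_URL}/kubevirt-operator.yaml"
#   kubectl apply -f "${BASE_URL}/kubevirt-cr.yaml"
#   kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m
```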

kubectl get pods -n metallb-system --watch
```

#### Setting Address Pool to be used by the LoadBalancer
Member

This should not be a part of component installation

Contributor Author

@RamLavi RamLavi May 4, 2022

I agree, but the problem is that then the nesting would be ##### Setting Address Pool to be used by the LoadBalancer, which is not supported CSS-style-wise.

Over the last year, Kubevirt and MetalLB have shown to be powerful duo in order to support fault-tolerant access to an application on virtual machines through an external IP address.
As a Cluster administrator using an on-prem cluster without a network load-balancer, now it's possible to use MetalLB operator to provide load-balancer capabilities (with Services of type `LoadBalancer`) to virtual machines.

## Introducing MetalLB
Member

Suggested change
## Introducing MetalLB
## MetalLB

The headline above already says "Introduction", saying it again (despite this is not nested) seems redundant

Contributor Author

Done

- First install kind on your machine following its [installation guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation).
- To use kind, you will also need to [install docker](https://docs.docker.com/install/).

##### External IPs on macOS and Windows
Member

This is so nested that our website does not have a CSS style defined for it. What about removing the prerequirements headline and just keeping its list under the section above. Then you can remove one level of nesting on this one

Contributor Author

I like!

@RamLavi RamLavi force-pushed the vm_with_metal_lb branch 2 times, most recently from 50df1f2 to 762f416 Compare May 4, 2022 08:46
Member

@phoracek phoracek left a comment

Just some typos. I'm ok with this PR once these are resolved

layout: post
author: Ram Lavi
title: Load-balancer for virtual machines on bare metal Kubernetes clusters
description: This post illustrates setting up a virtual machine with MetalLB loadBalance service.
Member

Suggested change
description: This post illustrates setting up a virtual machine with MetalLB loadBalance service.
description: This post illustrates setting up a virtual machine with MetalLB LoadBalancer service.

Contributor Author

Done

## MetalLB

[MetalLB](https://metallb.universe.tf/) allows you to create Kubernetes services of type `LoadBalancer`, and provides network load-balancer implementation in on-prem clusters that don’t run on a cloud provider.
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes: Layer 2 and BGP:
Member

Suggested change
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes: Layer 2 and BGP:
MetalLB is responsible for assigning/unassigning an external IP Address to your service, using IPs from pre-configured pools. In order for the external IPs to be announced externally, MetalLB works in 2 modes, Layer 2 and BGP:

Contributor Author

Done


- Layer 2 mode (ARP/NDP):

This mode - which actually does not implement real Load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network.
Member

Suggested change
This mode - which actually does not implement real Load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network.
This mode - which actually does not implement real load-balancing behavior - provides a failover mechanism where a single node owns the `LoadBalancer` service, until it fails, triggering another node to be chosen as the service owner. This configuration mode makes the IPs reachable from the local network.

Contributor Author

Done


#### Installing MetalLB on the cluster

There are [many ways](https://metallb.universe.tf/installation/) to install MetalLB. For the sake of this example, we will install MetalLB via Manifests. To do this, follow this [guide](https://metallb.universe.tf/installation/#installation-by-manifest).
Member

Suggested change
There are [many ways](https://metallb.universe.tf/installation/) to install MetalLB. For the sake of this example, we will install MetalLB via Manifests. To do this, follow this [guide](https://metallb.universe.tf/installation/#installation-by-manifest).
There are [many ways](https://metallb.universe.tf/installation/) to install MetalLB. For the sake of this example, we will install MetalLB via manifests. To do this, follow this [guide](https://metallb.universe.tf/installation/#installation-by-manifest).

Contributor Author

Done
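The manifest route the author moved to can be sketched as follows. The pinned version is an assumption from the post's era (v0.12.x); check MetalLB's installation guide for the current one.

```shell
# Manifest-based MetalLB install; the pinned version is an assumption.
METALLB_VERSION="v0.12.1"
MANIFEST_BASE="https://raw.githubusercontent.com/metallb/metallb/${METALLB_VERSION}/manifests"

echo "${MANIFEST_BASE}/namespace.yaml"
echo "${MANIFEST_BASE}/metallb.yaml"

# On a live cluster:
#   kubectl apply -f "${MANIFEST_BASE}/namespace.yaml"
#   kubectl apply -f "${MANIFEST_BASE}/metallb.yaml"
# Then watch the pods come up, as the post does:
#   kubectl get pods -n metallb-system --watch
```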

EOF
```

### Expose the virtual machine with a typed `LoaBalancer` service
Member

Suggested change
### Expose the virtual machine with a typed `LoaBalancer` service
### Expose the virtual machine with a typed `LoadBalancer` service

Contributor Author

Good catch! Done


### Spin up a Virtual Machine running Nginx

Now it's time to start-up a virtual machine running nginx using this yaml:
Member

Suggested change
Now it's time to start-up a virtual machine running nginx using this yaml:
Now it's time to start-up a virtual machine running nginx using the following yaml.

Contributor Author

Done
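A minimal sketch of such a VirtualMachine follows. The containerdisk image and resource sizes are illustrative assumptions, and the cloud-init payload that would actually install nginx is omitted; the only detail taken from the post is the metallb-service: nginx label the service later selects on.

```shell
# Minimal VirtualMachine sketch; the image and sizes are illustrative
# assumptions, and the cloud-init that installs nginx is omitted.
VM=$(cat <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: nginx-vm
spec:
  running: true
  template:
    metadata:
      labels:
        metallb-service: nginx
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: containerdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest
EOF
)
echo "$VM"
# Create it with: echo "$VM" | kubectl apply -f -
```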

### Expose the virtual machine with a typed `LoaBalancer` service

When creating the `LoadBalancer` typed service, we need to remember annotating the address-pool we want to use
`addresspool-sample1` and also add the selector `metallb-service: nginx`
Member

Suggested change
`addresspool-sample1` and also add the selector `metallb-service: nginx`
`addresspool-sample1` and also add the selector `metallb-service: nginx`:

Contributor Author

Done
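A sketch of such a service follows. The annotation key is MetalLB's address-pool annotation, the pool name and selector are the ones the post calls out, and the ports are assumptions (5678 externally, matching the curl check later in the thread, forwarding to 80 on the VM).

```shell
# LoadBalancer Service sketch; the ports are illustrative assumptions.
SVC=$(cat <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: metallb-nginx-svc
  namespace: default
  annotations:
    metallb.universe.tf/address-pool: addresspool-sample1
spec:
  type: LoadBalancer
  ports:
  - port: 5678
    targetPort: 80
  selector:
    metallb-service: nginx
EOF
)
echo "$SVC"
# Create it with: echo "$SVC" | kubectl apply -f -
```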

EOF
```

Notice that the service got assigned with an external IP from the range assigned by the PoolAddress:
Member

Suggested change
Notice that the service got assigned with an external IP from the range assigned by the PoolAddress:
Notice that the service got assigned with an external IP from the range assigned by the address pool:

?

Contributor Author

@RamLavi RamLavi May 11, 2022

Right, now that we are not using the MetalLB operator, PoolAddress has no meaning.
Done

kubectl get service -n default metallb-nginx-svc
```

Example output
Member

Suggested change
Example output
Example output:

Contributor Author

Done

curl -s -o /dev/null 172.18.1.1:5678 && echo "URL exists"
```

Example output
Member

Suggested change
Example output
Example output:

Contributor Author

Done

@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from 762f416 to 9c84efd Compare May 11, 2022 12:17
@RamLavi
Contributor Author

RamLavi commented May 11, 2022

@phoracek ready for your re-review

@phoracek
Member

/lgtm

@maiqueb could you please take a final look?

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label May 12, 2022
Contributor

@maiqueb maiqueb left a comment

/lgtm
/hold

Holding in case you want to re-work the sections. You can remove the hold if you prefer @RamLavi .


@kubevirt-bot kubevirt-bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label May 23, 2022
@RamLavi RamLavi force-pushed the vm_with_metal_lb branch from 9c84efd to 040c794 Compare May 23, 2022 09:51
@kubevirt-bot kubevirt-bot removed the lgtm Indicates that a PR is ready to be merged. label May 23, 2022
@maiqueb
Contributor

maiqueb commented May 23, 2022

/lgtm
/hold cancel

Thanks.

@kubevirt-bot kubevirt-bot added lgtm Indicates that a PR is ready to be merged. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels May 23, 2022
Member

@phoracek phoracek left a comment

/approve

@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: phoracek

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 23, 2022
@kubevirt-bot kubevirt-bot merged commit d995305 into kubevirt:main May 23, 2022
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. kind/blog Label for blog entries lgtm Indicates that a PR is ready to be merged. size/L