# Dynamic Flexvolume plugin discovery proposal #833

Merged · 4 commits · Aug 22, 2017
contributors/design-proposals/flexvolume-deployment.md (148 additions, 0 deletions)
# **Automated Flexvolume Deployment**

## **Objective**

Automate the deployment of Flexvolume drivers with the following goals:
* Drivers must be deployed on nodes (and the master when attach is required) without manually accessing any machine instance.
* Kubelet and controller-manager do not need to be restarted manually for a new plugin to be recognized.

## **Background**

Beginning in version 1.8, the Kubernetes Storage SIG will stop accepting in-tree volume plugins and advises all storage providers to implement out-of-tree plugins. Currently, there are two recommended implementations: Container Storage Interface (CSI) and Flexvolume.

[CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md) provides a single interface that storage vendors can implement in order for their storage solutions to work across many different container orchestrators, and its volume plugins are out-of-tree by design. However, this is a large effort: the full implementation of CSI is several quarters away, and storage vendors need an immediate solution to continue adding volume plugins.

[Flexvolume](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) is an in-tree plugin that can run any storage solution by executing volume commands against a user-provided driver on the Kubernetes host, and it exists today. However, the process of setting up Flexvolume is very manual, pushing it out of consideration for many users. Problems include having to copy the driver to a specific location on each node, manually restarting kubelet, and users' limited access to machines.


## **Overview**


### User Story

Driver Installation:

* Alice is a storage plugin author and would like to deploy a Flexvolume driver on all node instances. She
1. prepares her Flexvolume driver directory, with driver names in `[vendor~]driver/driver` format (e.g. `flexvolume/k8s~nfs/nfs`, see [Flexvolume documentation](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites)).
2. creates an image by copying her driver to the Flexvolume deployment base image at `/flexvolume`.
**@liggitt (Member, Jul 25, 2017):**

it might simplify this proposal to focus it on dynamic discovery of new plugins, and reprobing of plugins from the kubelet/controllermanager, etc. Deployment via daemonset could be one mechanism, but shouldn't be prescriptive.

3. makes her image available to Bob, a cluster admin.
* Bob modifies the existing deployment DaemonSet spec with the name of the given image, and creates the DaemonSet.
* Charlie, an end user, creates volumes using the installed plugin.

The user story for driver update is similar: Alice creates a new image with her new drivers, and Bob deploys it using the DaemonSet spec.

Note that the `/flexvolume` directory must look exactly like what is desired in the Flexvolume directory on the host (as described in the [Flexvolume documentation](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites)). The deployment will replace the existing driver directory on the host with the contents of `/flexvolume`. Thus, in order to add a new driver without removing existing ones, existing drivers must also appear in `/flexvolume`.
**Member:**

> in order to add a new driver without removing existing ones, existing drivers must also appear in /flexvolume

that is unexpected. I'd expect a flexvolume deployment to be focused on installing a single driver, e.g. populating one driver under /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>

**Member:**

Yeah, I think it has to be possible to install multiple of these



### High Level Design

The DaemonSet mounts a hostPath volume exposing the host's Flexvolume driver directory onto every DaemonSet pod. The base deployment image contains a script that copies drivers in the image to the hostPath. A notification is then sent to a filesystem watch in kubelet or the controller manager. During volume creation, if the watch has signaled, kubelet or the controller manager probes the driver directory and loads the currently installed drivers as volume plugins.
**Member:**

Why does the flexvolume content have to be copied to every pod? Assuming a non-containerized kubelet, shouldn't this just be on the host?

**Contributor Author:**

I meant to say every DaemonSet pod. Will clarify.

**Contributor:**

Daemonset has to be privileged to copy the driver.

**Contributor:**

@chakri-nelluri Could you clarify exactly which operations require privileged mode?

**Contributor Author:**

@php-coder Kubelet executes Flexvolume drivers outside the container's namespace, so driver files have to be copied into the hostpath volume in privileged mode.



## **Detailed Design**
**@jsafrane (Member, Jul 28, 2017):**

(a random point in the proposal to start a new separate thread so the discussion is organized)

**Mount propagation**

I am preparing mount propagation for release 1.8, plus the possibility to run mount utilities (mount.glusterfs, /usr/bin/rbd, flex drivers, ...) in pods instead of running them on the host. Initially, I planned not to support flex volumes in the first release and get the implementation solid first, but now that someone else is preparing dynamic probing of flex plugins it should not be hard to extend it.

See #589 for full details.

tl;dr, the proposal expects that a pod with mount utilities will put a unix domain socket into /var/lib/kubelet/plugin-sockets/<plugin name>, e.g. /var/lib/kubelet/plugin-sockets/kubernetes.io/glusterfs. That can easily be extended to flex volumes:

* The flex volume author puts all necessary utilities, the usual /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>, and a volume-exec daemon (shipped by Kubernetes) into an image.
* The system admin runs the image as a DaemonSet with privileged pods that have shared mount propagation on /var/lib/kubelet from the pod to the host. Pods in the daemon set run the volume-exec daemon with proper parameters (namely the volume plugin name).
  * The volume-exec daemon puts a unix socket into /var/lib/kubelet/plugin-sockets/<vendor>~<driver>/<driver> on the host.
* The probe in kubelet scans plugin-sockets and registers a new flex volume plugin for every socket it finds there. This is the same as the discovery of new drivers as designed in this proposal.
* When kubelet wants to call the driver, it checks whether the plugin-sockets/<vendor>~<driver>/<driver> socket exists.
  * If the socket does not exist, it uses plain old os.Exec to execute the driver.
  * If the socket exists, it uses a gRPC API provided by volume-exec to execute the driver in the pod that runs volume-exec. As a result, /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> is executed in the pod. Due to shared mount propagation, the driver can mount volumes and kubelet will see them.
  * (All of this is already part of "Proposal: containerized mount utilities in pods" #589; there will be very few changes to the flex volume implementation.)

Installation of a driver is quite simple: there is no need to copy the driver from the pod to the host, creating one socket is enough. Upgrade of the daemon set is more complicated though, as any fuse daemons now run in the pod and not on the host. If the pod is killed during the update, all volumes that it served are unmounted. See #589 for details.

To sum it up, if the probe suggested in this proposal is implemented, it should be fairly easy to extend it to also scan for the sockets. That's the only necessary change, and we get completely containerized flex volumes. The question is whether we then need to mess with copying drivers from pods to the host as proposed here. On the other hand, #589 is just a proposal; it may change, and who knows whether it makes 1.8 and in what shape.

**Member:**

This would be cool. Can consider it for 1.9


### Public Deployment Base Image
This image is composed of the busybox image and the deployment script described in [Driver Installation Script](#driver-installation-script).

### Copying Driver File to Image

Using the deployment base image, the plugin author copies the Flexvolume driver directory (e.g. `flexvolume/k8s~nfs/nfs`) to `/flexvolume` and makes the image available to the cluster admin.


### Driver Installation Script

The script will copy the existing content of `/flexvolume` on the host to a location in `/tmp`, and then attempt to copy user-provided drivers to that directory. If the copy fails, the original drivers are restored. This script will not perform any driver validation.
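
For illustration, here is a minimal sketch of that flow in Go (the actual deliverable is a script in the busybox-based image; the `/flexvolume` and `/flexmnt` paths follow the image layout above and the DaemonSet spec below, and error handling is simplified):

``` go
package main

import (
	"os"
	"os/exec"
)

// copyTree recursively copies src into dst using cp -a (available in
// the busybox-based deployment image).
func copyTree(src, dst string) error {
	return exec.Command("cp", "-a", src, dst).Run()
}

// installDrivers mirrors the backup-then-copy-then-restore flow
// described above.
func installDrivers(hostDir, imageDir string) error {
	backup := "/tmp/flexvolume-backup"
	os.RemoveAll(backup)

	// Back up the existing drivers before touching anything.
	if err := copyTree(hostDir, backup); err != nil {
		return err
	}
	// Attempt to copy the user-provided drivers onto the host path.
	if err := copyTree(imageDir+"/.", hostDir); err != nil {
		// The copy failed part-way: put the original drivers back.
		// (A real script would also clear the partial copy first.)
		return copyTree(backup+"/.", hostDir)
	}
	return nil
}

func main() {
	if err := installDrivers("/flexmnt", "/flexvolume"); err != nil {
		os.Exit(1)
	}
}
```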
**Member:**

How are failures shown to the user?

**Contributor Author:**

Not sure what's the best way. I'll add that to Open Questions.

**Contributor:**

Can we use the exit status from Daemonset to notify the error?

**Member:**

I don't understand the "are restored"? A driver should never ever touch anything that isn't its own.

The install process, for most drivers, should be:

``` sh
while true; do
  if [ -f /flexvolume/myvendor~mydriver/mydriver ]; then
    sleep 60
  else
    mkdir /flexvolume/myvendor~mydriver
    cp -a /mydriver /flexvolume/myvendor~mydriver
  fi
done
```

**Contributor Author:**

The model in the current design is that the entire Flexvolume directory containing all drivers gets replaced, so when a copy fails, the backup of the entire directory is restored.

After moving to the single-driver install model, this will be changed to backing up a single driver instead.

**Member:**

I think the hard line is at "you should never ever touch something you don't own".

**Contributor:**

Also, what happens if the probe fails or the plugin fails to initialize?
We might have to document how to revert to a working older version.

**Contributor:**

Please add this as a recommended way. Users are free to innovate :)

**Member:**

We're planning to provide a driver installation mechanism as part of this proposal, but like @liggitt mentioned, it can be separated out.


### Deployment DaemonSet
``` yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: flex-set
spec:
  template:
    metadata:
      name: flex-deploy
      labels:
        app: flex-deploy
    spec:
      containers:
        - image: <deployment_image>
          name: flex-deploy
          volumeMounts:
            - mountPath: /flexmnt
              name: flexvolume-mount
      volumes:
        - name: flexvolume-mount
          hostPath:
            path: <host_driver_directory>
```

### Dynamic Plugin Discovery

In the volume plugin code, introduce a `PluginStub` interface containing a single method `Init()`, and have `VolumePlugin` extend it. Create a `PluginProber` type which extends `PluginStub` and includes methods `Init()` and `Probe()`.
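
Sketched as Go interfaces (only `Init()` and `Probe()` come from this proposal; the other method names are merely representative of the existing `VolumePlugin` surface):

``` go
// PluginStub is the minimal interface shared by everything the plugin
// manager keeps in its plugin list.
type PluginStub interface {
	Init() error
}

// VolumePlugin extends PluginStub with the existing plugin methods
// (abbreviated here).
type VolumePlugin interface {
	PluginStub
	GetPluginName() string
	CanSupport(spec *Spec) bool
	// ... remaining existing VolumePlugin methods
}

// PluginProber extends PluginStub with Probe, which reports whether the
// set of installed drivers changed and, if so, returns the regenerated
// plugin list.
type PluginProber interface {
	PluginStub
	Probe() (updated bool, plugins []VolumePlugin, err error)
}
```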

`Init()` initializes fsnotify, creates a watch on the driver directory as well as its subdirectories (if any), and spawns a goroutine listening to the watch. When the goroutine receives a signal that a new directory was created, it creates a watch for that directory so that driver changes can be seen.
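
A sketch of how `Init()` could be wired up with fsnotify (type and field names are illustrative; the 1-second signal delay from [Security Considerations](#security-considerations) is omitted here):

``` go
package flexprobe

import (
	"os"

	"github.com/fsnotify/fsnotify"
)

type flexProber struct {
	driverDir string // the Flexvolume driver directory on the host
	watcher   *fsnotify.Watcher
	updated   chan struct{} // capacity 1; a pending signal tells Probe() to rescan
}

func (p *flexProber) Init() error {
	// inotify can only watch directories that exist, so maintain the
	// invariant that the driver directory is always present.
	if err := os.MkdirAll(p.driverDir, 0755); err != nil {
		return err
	}
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	p.watcher = w
	p.updated = make(chan struct{}, 1)
	if err := w.Add(p.driverDir); err != nil {
		return err
	}
	go func() {
		for event := range w.Events {
			// Newly created subdirectories get their own watch so that
			// driver changes inside them are also seen.
			if event.Op&fsnotify.Create != 0 {
				if info, err := os.Stat(event.Name); err == nil && info.IsDir() {
					w.Add(event.Name)
				}
			}
			select {
			case p.updated <- struct{}{}: // flag a pending rescan
			default: // a rescan is already pending
			}
		}
	}()
	return nil
}
```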
**Member:**

why use notify, if you are just going to cache the results?

**Contributor Author:**

This is discussed in Alternative Design (3).

**Member:**

If probe() were called a reasonable number of times per second, would you reconsider this point? Just trying to bound complexity.

**Contributor Author:**

Yes, definitely. Unfortunately I was seeing bursts with tens of milliseconds between calls. If we change the Find*() logic so that Flex is not checked when there's already a match, then we don't need a watch.


`Probe()` scans the driver directory only when the goroutine sets a flag. If the flag is set, return true (indicating that new plugins are available) and the list of plugins. Otherwise, return false and nil. After the scan, the watch is refreshed to include the new list of subdirectories. The goroutine should only record a signal if there has been a 1-second delay since the last signal (see [Security Considerations](#security-considerations)). Because inotify (used by fsnotify) can only be used to watch an existing directory, the goroutine needs to maintain the invariant that the driver directory always exists.
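
Continuing the sketch above, `Probe()` could look like the following (`newFlexVolumePlugin` is a hypothetical constructor; `os`, `strings`, and `path/filepath` imports are assumed):

``` go
// Probe rescans the driver directory only when the watcher goroutine
// has flagged a change; otherwise callers keep their cached plugins.
func (p *flexProber) Probe() (bool, []VolumePlugin, error) {
	select {
	case <-p.updated: // a change was flagged; fall through and rescan
	default:
		return false, nil, nil // nothing changed since the last scan
	}
	entries, err := os.ReadDir(p.driverDir)
	if err != nil {
		return false, nil, err
	}
	var plugins []VolumePlugin
	for _, e := range entries {
		// Each <vendor>~<driver> subdirectory yields one plugin;
		// dot-prefixed entries (partial copies) are ignored.
		if !e.IsDir() || strings.HasPrefix(e.Name(), ".") {
			continue
		}
		plugins = append(plugins, newFlexVolumePlugin(filepath.Join(p.driverDir, e.Name())))
	}
	return true, plugins, nil
}
```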
**Member:**

So, the mere presence of a new directory means that there is a new flex driver there? File copy is not an atomic operation, and it may happen that only part of the driver script was copied there. You should wait until the whole driver has been copied, and the big question is how you recognize that...

IMO, installation of a new driver (or a driver update) should be an atomic operation on the fs, e.g. link(2).

**Member:**

Jan, excellent point. The only atomic file op is rename, so we should be sure that flex EXPLICITLY ignores files that start with a `.`. The installer must copy to flex/.../.mydriver and then rename that to mydriver. Let's make this as explicit as possible.
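
To illustrate the suggested pattern (a hedged sketch; paths and names are hypothetical, `os` and `path/filepath` imports assumed):

``` go
// atomicInstall writes the driver under a dot-prefixed name in the
// same directory, then renames it into place. rename(2) is atomic, so
// the kubelet either sees the complete new driver or the old one,
// never a partial copy.
func atomicInstall(driverDir, name string, contents []byte) error {
	tmp := filepath.Join(driverDir, "."+name)
	if err := os.WriteFile(tmp, contents, 0755); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(driverDir, name))
}
```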

**Member:**

That assumes that the driver is a single file. The installer must rename multiple files one by one (and rename does not work well with directories).

I am ok with requiring the driver to be a single file, with no helper scripts around, but it must be clearly defined somewhere in flex documentation and a release note.


Inside `InitPlugins()` from `volume/plugins.go`, if a plugin in the plugin list is an instance of `PluginProber`, only call its `Init()` and nothing else. Add an additional field, `flexVolumePluginList`, in `VolumePluginMgr` as a cache. For every iteration of the plugin list, call `Probe()`; if it returns true, update `flexVolumePluginList` and iterate through the new plugin list. If the return value is false, iterate through the existing `flexVolumePluginList`.
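
In simplified form (the real `VolumePluginMgr` keeps plugins in a map and enforces single-match semantics; the names below are illustrative, and a `fmt` import is assumed):

``` go
// refreshFlexPlugins swaps in the regenerated Flexvolume plugins only
// when Probe reports that the driver directory changed.
func (pm *VolumePluginMgr) refreshFlexPlugins() {
	updated, plugins, err := pm.prober.Probe()
	if err != nil || !updated {
		return // keep using the cached flexVolumePluginList
	}
	pm.flexVolumePluginList = plugins
}

// Lookup functions then consider built-in plugins and the cached
// Flexvolume plugins together.
func (pm *VolumePluginMgr) FindPluginBySpec(spec *Spec) (VolumePlugin, error) {
	pm.refreshFlexPlugins()
	for _, p := range append(pm.plugins, pm.flexVolumePluginList...) {
		if p.CanSupport(spec) {
			return p, nil
		}
	}
	return nil, fmt.Errorf("no volume plugin matched spec")
}
```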
**Member:**

> if the PluginStub is an instance of PluginProber

I thought both PluginStub and PluginProber were interfaces. Do you mean if a plugin in the InitPlugins loop is an instance of PluginProber? I assume all volume plugins will be instances of PluginStub, since it would be the type extended by VolumePlugin.

**Member:**

Okay, so the idea is: each time any function on VolumePluginMgr is executed, we call Probe on the plugin, and if that returns true we rescan the directories and return the updated plugin list. I am wondering if we can simply rescan the directory whenever a call to VolumePluginMgr is made, that is, keep the polling on-demand rather than periodic, so we won't need the inotify hook.

**Contributor Author:**

> Do you mean if a plugin in the InitPlugins loop is an instance of PluginProber? I assume all volume plugins will be instances of PluginStub, since it would be the type extended by VolumePlugin.

Yes, that's correct. Will clarify.

> I am wondering if we can simply rescan the directory whenever a call to VolumePluginMgr is made

This is discussed in Alternative Designs (2).

**Member:**

I don't think anything related to flex should leak outside the flex directory. It should just be part of the initial handshake to figure out if each thing is a plugin or a meta-plugin. Meta-plugins are probed later.


Because Flexvolume has two separate plugin instantiations (attachable and non-attachable), it's worth considering the case when a driver that implements attach/detach is replaced with a driver that does not, or vice versa. This does not cause an issue because plugins are recreated every time the driver directory is changed.
**Reviewer:**

If the flexvolume implements the attach/detach interface, how are you going to extend the controller-manager (assuming you are using a daemonset)?

**Contributor Author:**

What do you mean by "extend"?

If you are asking how the drivers get deployed through DaemonSet to a master: DaemonSet will add a pod to the master node regardless of whether the node is set to schedulable.

If you are asking how the controller-manager picks up the newly added attach/detach procedures when the driver is replaced: during plugin probe a FlexVolumeAttachablePlugin is created, which replaces the previous plugin. The attachable plugin will enable attach/detach calls from AttachDetachController.

**Member:**

> DaemonSet will add a pod to the master node

Not all masters run kubelet. E.g. GKE.


There is a possibility that a probe occurs at the same time the DaemonSet updates the driver, so the prober's view of the drivers is inconsistent. However, this is very rare, and when it does occur, the next `Probe()` call, which occurs shortly after, will be consistent.
**Contributor:**

+1 for this approach.



## **Alternative Designs**

1) Make `PluginProber` a separate component, and pass it around as a dependency.

Pros: Avoids the common `PluginStub` interface. There isn't much shared functionality between `VolumePlugin` and `PluginProber`. The only purpose this shared abstraction serves is for `PluginProber` to reuse the existing plugin-list machinery.

Cons: Would have to increase dependency surface area, notably `KubeletDeps`.

I'm currently undecided whether to use this design or the `PluginStub` design.

2) Use a polling model instead of a watch for probing for driver changes.

Pros: Simpler to implement.

Cons: Kubelet or controller manager iterates through the plugin list many times, so Probe() is called very frequently. Using this model would add unnecessary disk I/O. This issue is mitigated if we guarantee that `PluginProber` is the last `PluginStub` in the iteration and only `Probe()` if no other plugin is matched, but that logic adds additional complexity.
**Reviewer:**

I agree. Flexvolume drivers are not something that will be added or modified often.

**Member:**

Why is probe called so often?

**Contributor Author:**

It's called for every FindPluginBy*(), because that currently iterates through all plugins and errors out when multiple plugins match. As for why FindPluginBy*() is called so often, I don't know; I can only tell that from logs.

**Member:**

we should track that backwards - it is surprising to me.

**Contributor Author:**

Seems like there are (at least) two reasons:

1. In every DSW populator loop, it's called for volumes of every existing pod, in order to re-associate each pod with its required volumes.
2. Certain volume types are set to constantly remount, triggering a Probe every time the remount occurs.


3) Use a polling model + cache. Poll every x seconds/minutes.

Pros: Mostly mitigates issues with the previous approach.

Cons: Depending on the polling period, either it's needlessly frequent, or it's too infrequent to pick up driver updates quickly.

4) Using Jobs instead of DaemonSets to deploy.
**Member:**

The deployment method should be separated out of this proposal, IMO. The core need is for kubelet and controller manager to detect and use added flex plugins without restart. If that requirement is met, you can use all sorts of methods for installing/maintaining the drivers.

**Member:**

Agree. We should provide an example. That's all.


Pros: Designed for containers that eventually terminate. No need to have the container go into an infinite loop.

Cons: Does not guarantee every node has a pod running. Pod anti-affinity can be used to ensure no more than one pod runs on the same node, but since the Job spec requests a constant number of pods to run to completion, Jobs cannot ensure that pods are scheduled on new nodes.

5) Have the `flexVolumePluginList` cache live in `PluginProber` instead of `VolumePluginMgr`.

Pros: `VolumePluginMgr` doesn't need to treat Flexvolume plugins any differently from other plugins.

Cons: `PluginProber` doesn't have the function to validate a plugin. This function lives in `VolumePluginMgr`. Alternatively, the function can be passed into `PluginProber`.


## **Security Considerations**
**@php-coder (Contributor, Jul 27, 2017):**

Could we also mention how it's supposed to work with Pod Security Policy? Which actions will be required? Do we need to create a special policy maybe?

**@liggitt (Member, Jul 27, 2017):**

Not sure that level of detail is required... you'd need whatever permissions are required to mount and write files to a directory in the kubelet's flexvolume driver dir. Again, I think this proposal should focus on the kubelet/controller-manager aspects, and the mechanism for ensuring atomic loads of drivers (write to a dot-prefixed dir or file, rename or symlink+rename, etc.), over delivery mechanisms.

> Do we need to create a special policy maybe?

PSP setup is going to vary by install. Documenting how to grant permission to use hostPath volume mounts is already included in https://kubernetes.io/docs/concepts/policy/pod-security-policy/#controlling-volumes

**Contributor:**

> you'd need whatever permissions are required to mount and write files to a directory in the kubelet's flexvolume driver dir

That's what I wanted to see here. It would be good if that were explicitly mentioned.

**Contributor Author:**

I added the necessary security policy in the example Deployment DaemonSet spec.


The Flexvolume driver directory can be continuously modified (accidentally or maliciously), making every `Probe()` call trigger a disk read, and `Probe()` calls could happen every couple of milliseconds and in bursts (i.e. lots of calls at first and then silence for some time). This may increase kubelet's or controller manager's disk I/O, impacting the performance of other system operations.

As a safety measure, add a 1-second minimum delay between the processing of filesystem watch signals.
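
One possible shape for that delay inside the watch goroutine (a sketch; assumes a `lastSignal time.Time` field on the prober from the earlier sketches and a `time` import):

``` go
// recordEvent coalesces bursts of filesystem events: a signal is only
// recorded if at least one second has passed since the last one.
func (p *flexProber) recordEvent() {
	now := time.Now()
	if now.Sub(p.lastSignal) < time.Second {
		return // drop events arriving inside the 1-second window
	}
	p.lastSignal = now
	select {
	case p.updated <- struct{}{}: // tell Probe() to rescan
	default: // a rescan is already pending
	}
}
```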


## **Testing Plan**

Add new unit tests in `plugin_tests.go` to cover the new probing functionality and the heterogeneous plugin types in the plugins list.

Add e2e tests that follow the user story. Write one for the initial driver installation, one for an update to the same driver, one for adding another driver, and one for removing a driver.

## **Open Questions**

* How does this system work with a containerized kubelet?
* What if drivers are updated while the Flexvolume plugin is executing commands?
**Member:**

This is a very important question and I'd like to see some answer. In particular, kubelet must not ever execute a half-copied driver; running a partly copied shell script as root sounds scary to me.

**Member:**

Rename, as described above, should not suffer this problem, I think.

**Contributor Author:**

Other than the scenario Jan mentioned, there are two other possible bad scenarios:

1. When a driver command is called, part of the driver command is loaded into memory and is being executed. The driver gets modified, and the remaining part of the command (which is new) gets executed. I'm not sure if this could happen.
2. The driver gets modified while the command is read into memory, before execution even occurs. This could definitely happen.

I don't think renaming solves either of these issues. A write should never happen while an execution is underway. Ideally the solution fixes both the issues above and Jan's scenario.

My first thought is a RWLock implementation, but I'd like to avoid locking if possible.

**Member:**

(1) should get an ETXTBSY error for the person trying to modify the binary.

I don't understand (2)?

In any case, rename is atomic. As soon as I open the file descriptor, if anyone renames another file over this file, I still get the old file. It is pinned until I close that fd. Happy to explain more - it's a good trick to have in your toolbox.
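
A small demonstration of the fd-pinning behavior described above (hypothetical path; `os` and `io` imports assumed):

``` go
// readPinned shows that an open file descriptor keeps referring to the
// old inode even if a new driver is renamed over the same path.
func readPinned() ([]byte, error) {
	f, err := os.Open("/flexvolume/k8s~nfs/nfs")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	// If an installer renames a new driver over this path right now,
	// reads through f still return the old, complete contents.
	return io.ReadAll(f)
}
```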

**Contributor Author:**

That's really neat. In that case yes renaming will solve both of these scenarios.

What I tried to describe in (2) is: before the driver gets executed, part of its commands are read into memory, then the remaining commands are modified, and lastly the modified commands are read in.

* If DaemonSet deployment fails, how are errors shown to the user?