Discover existing snapshots on disk #161


Merged

Conversation

ashish-amarnath
Contributor

@ashish-amarnath ashish-amarnath commented Feb 28, 2020

Signed-off-by: Ashish Amarnath [email protected]

What type of PR is this?
/kind feature

What this PR does / why we need it:
This change lets this driver discover existing snapshots on disk, improving testability of the "Importing an existing volume snapshot with Kubernetes" scenario.

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Adds ability to discover and import existing on-disk tarballs as snapshots.

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Feb 28, 2020
@k8s-ci-robot
Contributor

Welcome @ashish-amarnath!

It looks like this is your first PR to kubernetes-csi/csi-driver-host-path 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-csi/csi-driver-host-path has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @ashish-amarnath. Thanks for your PR.

I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Feb 28, 2020
@ashish-amarnath
Contributor Author

ashish-amarnath commented Feb 29, 2020

This was verified on a 1.17 cluster.
Logs from testing:

I0228 23:26:55.219236       1 controllerserver.go:554] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0228 23:26:55.219248       1 controllerserver.go:554] Enabling controller service capability: LIST_SNAPSHOTS
I0228 23:26:55.219263       1 controllerserver.go:554] Enabling controller service capability: CLONE_VOLUME
I0228 23:26:55.219275       1 controllerserver.go:554] Enabling controller service capability: EXPAND_VOLUME
I0228 23:26:55.219316       1 hostpath.go:129] discovering existing snapshots in /csi-data-dir
I0228 23:26:55.219369       1 hostpath.go:147] snapshot: /csi-data-dir/deadfeed.snap, id: deadfeed
I0228 23:26:55.219582       1 server.go:100] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0228

Inspecting storage objects

$ kc get volumesnapshotcontent
NAME                                READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                VOLUMESNAPSHOTCLASS      VOLUMESNAPSHOT         AGE
snapcontent-hand-crafted-deadfeed   true         0             Delete           hostpath.csi.k8s.io   csi-hostpath-snapclass   velero-csi-pvc-9jwhg   31m

$ kcn csi-app get volumesnapshot
NAME                   READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT               RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                     CREATIONTIME   AGE
velero-csi-pvc-9jwhg   true                     snapcontent-hand-crafted-deadfeed   0                             snapcontent-hand-crafted-deadfeed   50y            30m

$ kcn csi-app get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
pvc-063555a0-dc11-48f7-be58-b6d12c9da482   1Gi        RWO            Delete           Bound    csi-app/csi-pvc   csi-hostpath-sc            30m

$ kcn csi-app get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc   Bound    pvc-063555a0-dc11-48f7-be58-b6d12c9da482   1Gi        RWO            csi-hostpath-sc   30m

Inspecting data on the hostpathplugin driver's disk

kc exec -ti csi-hostpathplugin-0 -c hostpath sh
/ # cd /csi-data-dir/
/csi-data-dir # ls
74f924f9-5a82-11ea-8468-fe468a74e113  deadfeed.snap
/csi-data-dir # ls 74f924f9-5a82-11ea-8468-fe468a74e113/
deadfeed
/csi-data-dir # cksum 74f924f9-5a82-11ea-8468-fe468a74e113/deadfeed
242258054 1574432 74f924f9-5a82-11ea-8468-fe468a74e113/deadfeed
/csi-data-dir #

Inspecting the data on the pod using the csi-pvc

kcn csi-app exec -ti my-csi-app sh
/ # cksum /data/deadfeed
242258054 1574432 /data/deadfeed
/ #

@xing-yang
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 29, 2020
@xing-yang
Contributor

This is very useful to test pre-provisioned snapshots.

Can you also add steps on how you manually create those snapshots?
This is "/kind feature", so it needs a release note.

@ashish-amarnath
Contributor Author

ashish-amarnath commented Feb 29, 2020

@xing-yang I've added release notes in the PR description.
Wasn't sure where I should add the steps to create the snapshots.
Adding the steps here.

  1. Deploy the csi-hostpath plugin driver in a Kubernetes 1.17 cluster (this test will use the v1beta1 snapshot APIs to import snapshots into the cluster)
  2. Exec into the hostpath plugin driver container in the hostpath-plugin pod
$ kubectl exec -ti csi-hostpathplugin-0 -c hostpath sh
  3. Now create a .snap file in the dataRoot directory to use as a volume snapshot. The hostpath plugin represents a snapshot as a tarball of the volume's contents.
/ # cd /csi-data-dir/
/csi-data-dir # mkdir foo
/csi-data-dir # while true; do echo -n DEADFEED >> ./foo/deadfeed; done
^C
/csi-data-dir # cksum foo/deadfeed
242258054 1574432 foo/deadfeed
/csi-data-dir # tar -cvzf ./deadfeed.snap -C ./foo .
./
./deadfeed
/csi-data-dir # ls
deadfeed.snap  foo
/csi-data-dir # rm -rf ./foo/
  4. Now restart the hostpath plugin driver pod to allow it to discover the newly created snapshot (deadfeed.snap)
  5. On restart of the driver, confirm from the logs that the snapshot created above has been recognized and added to the hostPathVolumeSnapshots map.
  6. Now, to load this snapshot into a PV usable by a pod, we will create a volumesnapshotcontent, a volumesnapshot, and then a pvc using the volumesnapshot as the dataSource.
  • Create a namespace to house our test objects, called csi-app
  • Create a volumesnapshotcontent object
{
  "apiVersion": "snapshot.storage.k8s.io/v1beta1",
  "kind": "VolumeSnapshotContent",
  "metadata": {
    "name": "snapcontent-hand-crafted-deadfeed"
  },
  "spec": {
    "deletionPolicy": "Delete",
    "driver": "hostpath.csi.k8s.io",
    "source": {
      "snapshotHandle": "deadfeed"
    },
    "volumeSnapshotClassName": "csi-hostpath-snapclass",
    "volumeSnapshotRef": {
      "apiVersion": "snapshot.storage.k8s.io/v1beta1",
      "kind": "VolumeSnapshot",
      "name": "velero-csi-pvc-9jwhg",
      "namespace": "csi-app"
    }
  }
}
  • Create a volumesnapshot object in the csi-app namespace, referring to the above created volumesnapshotcontent
{
  "apiVersion": "snapshot.storage.k8s.io/v1beta1",
  "kind": "VolumeSnapshot",
  "metadata": {
    "annotations": {
      "velero.io/backup-name": "csi-app-backup-1"
    },
    "generateName": "velero-csi-pvc-",
    "generation": 1,
    "name": "velero-csi-pvc-9jwhg",
    "namespace": "csi-app"
  },
  "spec": {
    "source": {
      "volumeSnapshotContentName": "snapcontent-hand-crafted-deadfeed"
    }
  }
}
  • Create a pvc, in the csi-app namespace, that uses the above created volumesnapshot as its dataSource
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "annotations": {
      "velero.io/backup-name": "csi-app-backup-1",
      "velero.io/volume-snapshot-name": "velero-csi-pvc-9jwhg"
    },
    "name": "csi-pvc",
    "namespace": "csi-app"
  },
  "spec": {
    "storageClassName": "csi-hostpath-sc",
    "dataSource": {
      "name": "velero-csi-pvc-9jwhg",
      "kind": "VolumeSnapshot",
      "apiGroup": "snapshot.storage.k8s.io"
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "1Gi"
      }
    }
  }
}
  • On doing this, the provisioner, along with the hostpath plugin driver, will provision a pv and load it with the data from the snapshot, deadfeed.snap
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
pvc-063555a0-dc11-48f7-be58-b6d12c9da482   1Gi        RWO            Delete           Bound    csi-app/csi-pvc   csi-hostpath-sc            8h
  • Create a pod from examples/csi-app.yaml that uses this pvc, to inspect the data in it
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-csi-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc # defined in csi-pvc.yaml

Create the above pod and exec into the pod and inspect contents of /data

$ kubectl -n csi-app exec -ti my-csi-app sh
/ # cksum /data/deadfeed
242258054 1574432 /data/deadfeed
/ #

The checksum of the deadfeed file, volume-mounted by the my-csi-app pod, matches that of the file that was created as part of setting up the snapshot.

Verify the state of the kubernetes storage objects

  • volumesnapshot and pvc
$ kubectl -n csi-app get volumesnapshot,pvc
NAME                                                          READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT               RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                     CREATIONTIME   AGE
volumesnapshot.snapshot.storage.k8s.io/velero-csi-pvc-9jwhg   true                     snapcontent-hand-crafted-deadfeed   0                             snapcontent-hand-crafted-deadfeed   50y            8h

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/csi-pvc   Bound    pvc-063555a0-dc11-48f7-be58-b6d12c9da482   1Gi        RWO            csi-hostpath-sc   8h
  • volumesnapshotcontents and pv
$ kubectl get volumesnapshotcontents,pv
NAME                                                                              READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                VOLUMESNAPSHOTCLASS      VOLUMESNAPSHOT         AGE
volumesnapshotcontent.snapshot.storage.k8s.io/snapcontent-hand-crafted-deadfeed   true         0             Delete           hostpath.csi.k8s.io   csi-hostpath-snapclass   velero-csi-pvc-9jwhg   8h

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
persistentvolume/pvc-063555a0-dc11-48f7-be58-b6d12c9da482   1Gi        RWO            Delete           Bound    csi-app/csi-pvc   csi-hostpath-sc            8h

The above test demonstrates the ability of this driver to import existing snapshots and provision volumes using the data in the volume snapshot, verifying this PR.

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Feb 29, 2020
@ashish-amarnath ashish-amarnath changed the title [WIP] Discover existing snapshots on disk Discover existing snapshots on disk Feb 29, 2020
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 29, 2020
@xing-yang
Contributor

@ashish-amarnath For release note, you only need:
Adds ability to discover and import existing on-disk tarballs as snapshots.

The PR info will be taken care of when this driver is released.
It's fine to add the steps here in this PR. We can decide where to document this later.

@xing-yang
Contributor

On doing this, the snapshot controller along with the hostpath plugin driver will provision a pv and load it with the data from the snapshot, datafeed.tgz

It should be the provisioner not the snapshot controller that provisions the PV.

@xing-yang
Contributor

  1. Now to create PVs using this snapsthot

s/snapsthot/snapshot

You are only creating PV that uses the snapshot as data source, but not creating the source PV/PVC for this snapshot, right?

@ashish-amarnath
Contributor Author

@xing-yang

It should be the provisioner not the snapshot controller that provisions the PV.

Thanks for the correction 😄 I've updated the comment.

You are only creating PV that uses the snapshot as data source, but not creating the source PV/PVC for this snapshot, right?

I am creating volumesnapshotcontent, volumesnapshot and then the pvc using the volumesnapshot as the dataSource.

I've also updated the comment w/ this clarification.

@xing-yang
Contributor

Can you rework this based on this change? #162

@ashish-amarnath ashish-amarnath requested a review from msau42 March 4, 2020 22:30
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Mar 5, 2020
@ashish-amarnath ashish-amarnath force-pushed the discover-snapshots branch 2 times, most recently from ff87f17 to 0c79a04 Compare March 5, 2020 20:14
@xing-yang
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot added lgtm "Looks good to me", indicates that a PR is ready to be merged. and removed lgtm "Looks good to me", indicates that a PR is ready to be merged. labels Mar 5, 2020
Collaborator

@msau42 msau42 left a comment


Thank you for adding comprehensive unit tests! Just some minor comments

@ashish-amarnath ashish-amarnath force-pushed the discover-snapshots branch 2 times, most recently from ce42aad to c3cd442 Compare March 6, 2020 22:17
@xing-yang
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 9, 2020
@msau42
Collaborator

msau42 commented Mar 10, 2020

/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ashish-amarnath, msau42, xing-yang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 10, 2020
@k8s-ci-robot k8s-ci-robot merged commit bf1107b into kubernetes-csi:master Mar 10, 2020
@ashish-amarnath ashish-amarnath deleted the discover-snapshots branch March 10, 2020 22:35
TerryHowe pushed a commit to TerryHowe/csi-driver-host-path that referenced this pull request Oct 17, 2024