
CreateVolume respects the topology requirements of the node #200


Merged
Changes from 2 commits
68 changes: 42 additions & 26 deletions pkg/sanity/controller.go
@@ -1118,6 +1118,21 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {

It("should return appropriate values (no optional values added)", func() {

By("getting node information")
ni, err := n.NodeGetInfo(
context.Background(),
&csi.NodeGetInfoRequest{})
Expect(err).NotTo(HaveOccurred())
Expect(ni).NotTo(BeNil())
Expect(ni.GetNodeId()).NotTo(BeEmpty())

var accReqs *csi.TopologyRequirement
if ni.AccessibleTopology != nil {
accReqs = &csi.TopologyRequirement{
Requisite: []*csi.Topology{ni.AccessibleTopology},
Collaborator:

This assumes that the storage backend can support a one-node topology, but that may not be the case for something that requires, say, a minimum of 3 nodes for replication.

I'm unsure what would be a great way to support that in the test, though. This is probably OK for now, but it might be good to add a comment about the assumption.

Contributor:

csi-sanity essentially simulates a CO with a single node, so I think this code change is consistent with the rest of the testing.

@alexanderKhaustov can you add a short comment here that explains that?

Contributor (author):

Sorry for the delay. I'm not sure I understand the potentially problematic scenario. The SP returns topology information for a node, but then doesn't support provisioning a volume with that topology, because of a requirement (existing only externally to the CSI plugin/spec?) that a volume can only be created when there are at least 3 nodes with some specific topology? If that is the case, then such an external peculiarity is hardly within the scope of csi-test, is it? (csi-test aside, it seems that such a plugin would also behave unexpectedly for a CO end user, would it not?)

Contributor:

@alexanderKhaustov: I agree. Let me try to propose a comment that you can put above the new code, and then, if @msau42 has no additional comments, we can merge it:

    csi-sanity testing emulates the behavior of a CO with a single node. The name and topology of that node are provided by the driver itself. Here we ensure that the new volume gets provisioned on that node, because otherwise staging it on the node later in the test might fail.

Collaborator:

I think it would be good to add a note that this test does not support storage backends that require more than 1 node.

Contributor (author):

I've checked the spec more thoroughly; it seems there are no requirements about which setups an SP MUST support, so I added comments about the test's assumptions.

As an aside, with regard to the topology returned by NodeGetInfo, the spec surprisingly states that

    COs MAY use this [topology] information ... when scheduling workloads

(https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo)
rather than MUST use, which seems more reasonable. Do you know of any reason for that?

Contributor:

I don't know why topology information is allowed to be ignored.
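
For reference, the single-node pattern this thread discusses can be sketched in isolation. The structs below are simplified stand-ins for the generated `csi.Topology` and `csi.TopologyRequirement` proto types (not the real package), and `buildRequirement` is a hypothetical helper name, not something in csi-test:

```go
package main

import "fmt"

// Minimal stand-ins for the generated csi.Topology and
// csi.TopologyRequirement proto types, so the sketch compiles
// without the CSI proto package. Field names mirror the real types.
type Topology struct {
	Segments map[string]string
}

type TopologyRequirement struct {
	Requisite []*Topology
}

// buildRequirement mirrors the pattern added in this PR: if the node
// reports an accessible topology via NodeGetInfo, require the new
// volume to be provisioned exactly there; otherwise return nil and
// let the driver pick a default placement.
func buildRequirement(nodeTopology *Topology) *TopologyRequirement {
	if nodeTopology == nil {
		return nil
	}
	return &TopologyRequirement{
		Requisite: []*Topology{nodeTopology},
	}
}

func main() {
	node := &Topology{Segments: map[string]string{"zone": "z1"}}
	req := buildRequirement(node)
	fmt.Printf("requisite=%d zone=%s\n", len(req.Requisite), req.Requisite[0].Segments["zone"])
}
```

Passing the node's own topology as the single requisite entry is what lets the later NodeStageVolume/NodePublishVolume steps assume the volume is reachable from that node.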

}
}

// Create Volume First
By("creating a single node writer volume")
name := UniqueString("sanity-controller-publish")
@@ -1136,8 +1151,9 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
},
},
},
Secrets: sc.Secrets.CreateVolumeSecret,
Parameters: sc.Config.TestVolumeParameters,
Secrets: sc.Secrets.CreateVolumeSecret,
Parameters: sc.Config.TestVolumeParameters,
AccessibilityRequirements: accReqs,
},
)
Expect(err).NotTo(HaveOccurred())
@@ -1146,22 +1162,14 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
Expect(vol.GetVolume().GetVolumeId()).NotTo(BeEmpty())
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId()})

By("getting a node id")
nid, err := n.NodeGetInfo(
context.Background(),
&csi.NodeGetInfoRequest{})
Expect(err).NotTo(HaveOccurred())
Expect(nid).NotTo(BeNil())
Expect(nid.GetNodeId()).NotTo(BeEmpty())

// ControllerPublishVolume
By("calling controllerpublish on that volume")

conpubvol, err := c.ControllerPublishVolume(
context.Background(),
&csi.ControllerPublishVolumeRequest{
VolumeId: vol.GetVolume().GetVolumeId(),
NodeId: nid.GetNodeId(),
NodeId: ni.GetNodeId(),
VolumeCapability: &csi.VolumeCapability{
AccessType: &csi.VolumeCapability_Mount{
Mount: &csi.VolumeCapability_MountVolume{},
@@ -1175,7 +1183,7 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
},
)
Expect(err).NotTo(HaveOccurred())
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId(), NodeID: nid.GetNodeId()})
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId(), NodeID: ni.GetNodeId()})
Expect(conpubvol).NotTo(BeNil())

By("cleaning up unpublishing the volume")
@@ -1185,7 +1193,7 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
&csi.ControllerUnpublishVolumeRequest{
VolumeId: vol.GetVolume().GetVolumeId(),
// NodeID is optional in ControllerUnpublishVolume
NodeId: nid.GetNodeId(),
NodeId: ni.GetNodeId(),
Secrets: sc.Secrets.ControllerUnpublishVolumeSecret,
},
)
@@ -1477,6 +1485,21 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
By("creating a single node writer volume")
name := UniqueString("sanity-controller-unpublish")

By("getting node information")
ni, err := n.NodeGetInfo(
context.Background(),
&csi.NodeGetInfoRequest{})
Expect(err).NotTo(HaveOccurred())
Expect(ni).NotTo(BeNil())
Expect(ni.GetNodeId()).NotTo(BeEmpty())

var accReqs *csi.TopologyRequirement
if ni.AccessibleTopology != nil {
accReqs = &csi.TopologyRequirement{
Requisite: []*csi.Topology{ni.AccessibleTopology},
}
}

vol, err := c.CreateVolume(
context.Background(),
&csi.CreateVolumeRequest{
@@ -1491,8 +1514,9 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
},
},
},
Secrets: sc.Secrets.CreateVolumeSecret,
Parameters: sc.Config.TestVolumeParameters,
Secrets: sc.Secrets.CreateVolumeSecret,
Parameters: sc.Config.TestVolumeParameters,
AccessibilityRequirements: accReqs,
},
)
Expect(err).NotTo(HaveOccurred())
@@ -1501,22 +1525,14 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
Expect(vol.GetVolume().GetVolumeId()).NotTo(BeEmpty())
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId()})

By("getting a node id")
nid, err := n.NodeGetInfo(
context.Background(),
&csi.NodeGetInfoRequest{})
Expect(err).NotTo(HaveOccurred())
Expect(nid).NotTo(BeNil())
Expect(nid.GetNodeId()).NotTo(BeEmpty())

// ControllerPublishVolume
By("calling controllerpublish on that volume")

conpubvol, err := c.ControllerPublishVolume(
context.Background(),
&csi.ControllerPublishVolumeRequest{
VolumeId: vol.GetVolume().GetVolumeId(),
NodeId: nid.GetNodeId(),
NodeId: ni.GetNodeId(),
VolumeCapability: &csi.VolumeCapability{
AccessType: &csi.VolumeCapability_Mount{
Mount: &csi.VolumeCapability_MountVolume{},
@@ -1530,7 +1546,7 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
},
)
Expect(err).NotTo(HaveOccurred())
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId(), NodeID: nid.GetNodeId()})
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId(), NodeID: ni.GetNodeId()})
Expect(conpubvol).NotTo(BeNil())

// ControllerUnpublishVolume
@@ -1541,7 +1557,7 @@ var _ = DescribeSanity("Controller Service", func(sc *SanityContext) {
&csi.ControllerUnpublishVolumeRequest{
VolumeId: vol.GetVolume().GetVolumeId(),
// NodeID is optional in ControllerUnpublishVolume
NodeId: nid.GetNodeId(),
NodeId: ni.GetNodeId(),
Secrets: sc.Secrets.ControllerUnpublishVolumeSecret,
},
)
34 changes: 21 additions & 13 deletions pkg/sanity/node.go
@@ -625,6 +625,21 @@ var _ = DescribeSanity("Node Service", func(sc *SanityContext) {
It("should work", func() {
name := UniqueString("sanity-node-full")

By("getting node information")
ni, err := c.NodeGetInfo(
context.Background(),
&csi.NodeGetInfoRequest{})
Expect(err).NotTo(HaveOccurred())
Expect(ni).NotTo(BeNil())
Expect(ni.GetNodeId()).NotTo(BeEmpty())

var accReqs *csi.TopologyRequirement
if ni.AccessibleTopology != nil {
accReqs = &csi.TopologyRequirement{
Requisite: []*csi.Topology{ni.AccessibleTopology},
}
}

// Create Volume First
By("creating a single node writer volume")
vol, err := s.CreateVolume(
@@ -641,8 +656,9 @@ var _ = DescribeSanity("Node Service", func(sc *SanityContext) {
},
},
},
Secrets: sc.Secrets.CreateVolumeSecret,
Parameters: sc.Config.TestVolumeParameters,
Secrets: sc.Secrets.CreateVolumeSecret,
Parameters: sc.Config.TestVolumeParameters,
AccessibilityRequirements: accReqs,
},
)
Expect(err).NotTo(HaveOccurred())
@@ -651,14 +667,6 @@ var _ = DescribeSanity("Node Service", func(sc *SanityContext) {
Expect(vol.GetVolume().GetVolumeId()).NotTo(BeEmpty())
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId()})

By("getting a node id")
nid, err := c.NodeGetInfo(
context.Background(),
&csi.NodeGetInfoRequest{})
Expect(err).NotTo(HaveOccurred())
Expect(nid).NotTo(BeNil())
Expect(nid.GetNodeId()).NotTo(BeEmpty())

var conpubvol *csi.ControllerPublishVolumeResponse
if controllerPublishSupported {
By("controller publishing volume")
@@ -667,7 +675,7 @@ var _ = DescribeSanity("Node Service", func(sc *SanityContext) {
context.Background(),
&csi.ControllerPublishVolumeRequest{
VolumeId: vol.GetVolume().GetVolumeId(),
NodeId: nid.GetNodeId(),
NodeId: ni.GetNodeId(),
VolumeCapability: &csi.VolumeCapability{
AccessType: &csi.VolumeCapability_Mount{
Mount: &csi.VolumeCapability_MountVolume{},
@@ -682,7 +690,7 @@ var _ = DescribeSanity("Node Service", func(sc *SanityContext) {
},
)
Expect(err).NotTo(HaveOccurred())
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId(), NodeID: nid.GetNodeId()})
cl.RegisterVolume(name, VolumeInfo{VolumeID: vol.GetVolume().GetVolumeId(), NodeID: ni.GetNodeId()})
Expect(conpubvol).NotTo(BeNil())
}
// NodeStageVolume
@@ -782,7 +790,7 @@ var _ = DescribeSanity("Node Service", func(sc *SanityContext) {
context.Background(),
&csi.ControllerUnpublishVolumeRequest{
VolumeId: vol.GetVolume().GetVolumeId(),
NodeId: nid.GetNodeId(),
NodeId: ni.GetNodeId(),
Secrets: sc.Secrets.ControllerUnpublishVolumeSecret,
},
)