[[install-config-storage-examples-gluster-dynamic-example]]
= Complete Example of Dynamic Provisioning Using GlusterFS
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
:toc: macro
:toc-title:
:prewrap!:

toc::[]

[NOTE]
====
This example assumes a working {product-title} installation, functioning along
with Heketi and GlusterFS.

All `oc` commands are executed on the {product-title} master host.
====

== Overview

This topic provides an end-to-end example of how to dynamically provision
GlusterFS volumes. In this example, a simple NGINX HelloWorld application is
deployed using the
link:https://access.redhat.com/documentation/en/red-hat-gluster-storage/3.1/paged/container-native-storage-for-openshift-container-platform/chapter-2-red-hat-gluster-storage-container-native-with-openshift-container-platform[Red
Hat Container Native Storage (CNS)] solution. CNS hyper-converges GlusterFS
storage by containerizing it into the {product-title} cluster.

The
link:https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/index.html[Red
Hat Gluster Storage Administration Guide] can also provide additional
information about GlusterFS.

To get started, follow the
link:https://github.com/gluster/gluster-kubernetes[gluster-kubernetes quickstart
guide] for an easy Vagrant-based installation and deployment of a working
{product-title} cluster with Heketi and GlusterFS containers.

[[verify-the-environment-and-gather-needed-information]]
== Verify the Environment and Gather Needed Information

[NOTE]
====
At this point, there should be a working {product-title} cluster deployed, and a
working Heketi server with GlusterFS.
====

. Verify and view the cluster environment, including nodes and pods:
+
----
$ oc get nodes,pods
NAME      STATUS    AGE
master    Ready     22h
node0     Ready     22h
node1     Ready     22h
node2     Ready     22h
NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1 <1>
glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0 <2>
----
<1> Example of GlusterFS storage pods running. There are three in this example.
<2> Heketi server pod.
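+
Optionally, verify that the GlusterFS pods have formed a trusted storage pool.
This is an extra sanity check, and it assumes that the GlusterFS containers
include the `gluster` CLI; each pod should report its two peers as connected:
+
----
$ oc exec -ti glusterfs-node0-2509304327-vpce1 -- gluster peer status
----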

. If not already set in the environment, export `HEKETI_CLI_SERVER`:
+
----
$ export HEKETI_CLI_SERVER=$(oc describe svc/heketi | grep "Endpoints:" | awk '{print "http://"$2}')
----

. Identify the Heketi REST URL and server IP address:
+
----
$ echo $HEKETI_CLI_SERVER
http://10.42.0.0:8080
----
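+
Optionally, confirm that the Heketi server is responding at that address.
Heketi exposes a simple liveness route at `/hello`:
+
----
$ curl $HEKETI_CLI_SERVER/hello
Hello from Heketi
----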

. Identify the Gluster endpoints that are needed to pass in as a parameter to
the storage class, which is used in a later step (`heketi-storage-endpoints`):
+
----
$ oc get endpoints
NAME                       ENDPOINTS                                            AGE
heketi                     10.42.0.0:8080                                       22h
heketi-storage-endpoints   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1   22h <1>
kubernetes                 192.168.10.90:6443                                   23h
----
<1> The defined GlusterFS endpoints. In this example, they are called `heketi-storage-endpoints`.

[NOTE]
====
By default, `user_authorization` is disabled. If it were enabled, you might need
to find the rest user and rest user secret key. (This is not applicable for this
example, as any values will work.)
====
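
If the `heketi-cli` client is installed on the master host, you can also verify
connectivity directly. With authorization disabled, no credentials are needed:

----
$ heketi-cli cluster list
----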

[[create-a-storage-class-for-your-glusterfs-dynamic-provisioner]]
== Create a Storage Class for Your GlusterFS Dynamic Provisioner

xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Storage
classes] manage and enable persistent storage in {product-title}.
Below is an example of a _storage class_ requesting 5GB of on-demand
storage to be used with your _HelloWorld_ application.

====
----
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi <1>
provisioner: kubernetes.io/glusterfs <2>
parameters:
  endpoint: "heketi-storage-endpoints" <3>
  resturl: "http://10.42.0.0:8080" <4>
  restuser: "joe" <5>
  restuserkey: "My Secret Life" <6>
----
<1> Name of the storage class.
<2> The provisioner.
<3> The GlusterFS-defined endpoint (`oc get endpoints`).
<4> Heketi REST URL, taken from Step 1 above (`echo $HEKETI_CLI_SERVER`).
<5> Rest username. This can be any value since authorization is turned off.
<6> Rest user key. This can be any value.
====

. Create the storage class YAML file, save it, then submit it to {product-title}:
+
----
$ oc create -f gluster-storage-class.yaml
storageclass "gluster-heketi" created
----

. View the storage class:
+
----
$ oc get storageclass
NAME             TYPE
gluster-heketi   kubernetes.io/glusterfs
----
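+
Optionally, describe the storage class to confirm the provisioner and the
parameters that it registered:
+
----
$ oc describe storageclass gluster-heketi
----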

[[create-a-pvc-to-request-storage-for-your-application]]
== Create a PVC to Request Storage for Your Application

. Create a persistent volume claim (PVC) requesting 5GB of storage.
+
When the claim is submitted, the Dynamic Provisioning Framework and Heketi
automatically provision a new GlusterFS volume and generate the persistent
volume (PV) object:

====
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi <1>
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi <2>
----
<1> The Kubernetes storage class annotation and the name of the storage class.
<2> The amount of storage requested.
====

. Create the PVC YAML file, save it, then submit it to {product-title}:
+
----
$ oc create -f gluster-pvc.yaml
persistentvolumeclaim "gluster1" created
----

. View the PVC:
+
----
$ oc get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
gluster1   Bound     pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           14h
----
+
Notice that the PVC is bound to a dynamically created volume.
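+
If the claim were to remain in a `Pending` state, describing it surfaces the
provisioning events:
+
----
$ oc describe pvc gluster1
----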

. View the persistent volume (PV):
+
----
$ oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM              REASON    AGE
pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           Delete          Bound     default/gluster1             14h
----
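+
If the `heketi-cli` client is available, you can also list the GlusterFS volume
that Heketi created to back this PV, assuming `HEKETI_CLI_SERVER` is still
exported from the earlier step:
+
----
$ heketi-cli volume list
----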

== Create an NGINX Pod That Uses the PVC

At this point, you have a dynamically created GlusterFS volume, bound to a PVC.
Now, you can use this claim in a pod. Create a simple NGINX pod:

====
----
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: gcr.io/google_containers/nginx-slim:0.8
    ports:
    - name: web
      containerPort: 80
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 <1>
----
<1> The name of the PVC created in the previous step.
====

. Create the pod YAML file, save it, then submit it to {product-title}:
+
----
$ oc create -f nginx-pod.yaml
pod "gluster-pod1" created
----

. View the pod:
+
----
$ oc get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
nginx-pod                          1/1       Running   0          9m        10.38.0.0        node1
glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1
glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0
----
+
[NOTE]
====
This may take a few minutes, as the pod may need to download the image if it
does not already exist.
====
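+
Optionally, watch the pod until it reaches the `Running` state:
+
----
$ oc get pods -w
----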

. `oc exec` into the container and create an *_index.html_* file in the
`mountPath` definition of the pod:
+
----
$ oc exec -ti nginx-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
----

. Using the `curl` command from the master node, `curl` the URL of the pod:
+
----
$ curl http://10.38.0.0
Hello World from GlusterFS!!!
----

. Check your Gluster pod to ensure that the *_index.html_* file was written.
Choose any of the Gluster pods:
+
----
$ oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
$ mount | grep heketi
/dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)

$ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
$ ls
index.html
$ cat index.html
Hello World from GlusterFS!!!
----
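+
When you are finished with the example, you can clean up. Because the PV was
dynamically provisioned with the `Delete` reclaim policy shown earlier, deleting
the PVC also removes the PV and the underlying GlusterFS volume:
+
----
$ oc delete pod nginx-pod
$ oc delete pvc gluster1
----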