Commit 32824f6

Follow-up edits to PR#3388
1 parent 6d24343

File tree

1 file changed (+111, -81 lines changed)

install_config/storage_examples/gluster_dynamic_example.adoc

Lines changed: 111 additions & 81 deletions
@@ -1,5 +1,5 @@
 [[install-config-storage-examples-gluster-dynamic-example]]
-= Complete Example of Dynamic Provisioning Using GlusterFS
+= Complete Example of Dynamic Provisioning Using GlusterFS
 {product-author}
 {product-version}
 :data-uri:
@@ -11,34 +11,45 @@

 toc::[]

+
 [NOTE]
 ====
-This example assumes a working {product-title} installed and functioning along with Heketi and GlusterFS
-====
-[NOTE]
-====
-All `oc ...` commands are executed on the {product-title} master host.
+This example assumes a working {product-title} installed and functioning along
+with Heketi and GlusterFS.
+
+All `oc` commands are executed on the {product-title} master host.
 ====

 == Overview

-This topic provides an end-to-end example of how to dynamically provision GlusterFS volumes. In this example a simple NGINX HelloWorld application will be deployed using the
-link:https://access.redhat.com/documentation/en/red-hat-gluster-storage/3.1/paged/container-native-storage-for-openshift-container-platform/chapter-2-red-hat-gluster-storage-container-native-with-openshift-container-platform[ Red Hat Container Native Storage (CNS)] solution. CNS hyper-converges GlusterFS storage by containerizing it into the {product-title} cluster.
+This topic provides an end-to-end example of how to dynamically provision
+GlusterFS volumes. In this example, a simple NGINX HelloWorld application is
+deployed using the
+link:https://access.redhat.com/documentation/en/red-hat-gluster-storage/3.1/paged/container-native-storage-for-openshift-container-platform/chapter-2-red-hat-gluster-storage-container-native-with-openshift-container-platform[
+Red Hat Container Native Storage (CNS)] solution. CNS hyper-converges GlusterFS
+storage by containerizing it into the {product-title} cluster.

-The link:https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/index.html[Red
-Hat Gluster Storage Administration Guide] can also provide additional information about GlusterFS.
+The
+link:https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/index.html[Red
+Hat Gluster Storage Administration Guide] can also provide additional
+information about GlusterFS.

-To help get started follow this link:https://github.com/gluster/gluster-kubernetes[quickstart guide] for an easy Vagrant based installation and deployment of a working {product-title} cluster along with Heketi and GlusterFS containers.
+To get started, follow the
+link:https://github.com/gluster/gluster-kubernetes[gluster-kubernetes quickstart
+guide] for an easy Vagrant-based installation and deployment of a working
+{product-title} cluster with Heketi and GlusterFS containers.

-== Verify the environment and gather some information to be used in later steps
+[[verify-the-environment-and-gather-needed-information]]
+== Verify the Environment and Gather Needed Information

 [NOTE]
 ====
-At this point, there should be a working {product-title} cluster deployed, and a working Heketi Server along with GlusterFS.
+At this point, there should be a working {product-title} cluster deployed, and a
+working Heketi server with GlusterFS.
 ====

-====
-Verify and View the cluster environment, including nodes and pods.
+. Verify and view the cluster environment, including nodes and pods:
++
 ----
 $ oc get nodes,pods
 NAME      STATUS    AGE
@@ -48,52 +59,53 @@ node1 Ready 22h
 node2     Ready     22h
 NAME                               READY     STATUS    RESTARTS   AGE
 glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d    192.168.10.100   node0
-glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d    192.168.10.101   node1 <1>
-glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d    192.168.10.102   node2
+glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d    192.168.10.101   node1 <1>
+glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d    192.168.10.102   node2
 heketi-3017632314-yyngh            1/1       Running   0          1d    10.42.0.0        node0 <2>
 ----
-<1> Example of GlusterFS storage pods running (notice there are three for this example).
-<2> Heketi Server pod.
+<1> Example of GlusterFS storage pods running. There are three in this example.
+<2> Heketi server pod.


-If not already set in the environment, export the HEKETI_CLI_SERVER
+. If not already set in the environment, export the `HEKETI_CLI_SERVER`:
++
 ----
 $ export HEKETI_CLI_SERVER=$(oc describe svc/heketi | grep "Endpoints:" | awk '{print "http://"$2}')
 ----

-Identify the Heketi REST URL and Server IP Address:
+. Identify the Heketi REST URL and server IP address:
++
 ----
 $ echo $HEKETI_CLI_SERVER
 http://10.42.0.0:8080
 ----

-
-Identify the Gluster EndPoints that are needed to pass in as a parameter into the StorageClass which will be used in a later step (heketi-storage-endpoints).
+. Identify the Gluster endpoints that are needed to pass in as a parameter into
+the storage class, which is used in a later step (`heketi-storage-endpoints`).
++
 ----
-oc get endpoints
+$ oc get endpoints
 NAME                       ENDPOINTS                                             AGE
 heketi                     10.42.0.0:8080                                        22h
 heketi-storage-endpoints   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1   22h <1>
 kubernetes                 192.168.10.90:6443                                    23h
 ----
-<1> The defined GlusterFS EndPoints, in this example, they are called `heketi-storage-endpoints`.
-
+<1> The defined GlusterFS endpoints. In this example, they are called `heketi-storage-endpoints`.

 [NOTE]
-By default, user_authorization is disabled, but if it were enabled, you might also need to find the rest user
-and rest user secret key (not applicable for this example as any values will work).
-
+====
+By default, `user_authorization` is disabled. If enabled, you may need to find
+the rest user and rest user secret key. (This is not applicable for this
+example, as any values will work).
 ====

+[[create-a-storage-class-for-your-glusterfs-dynamic-provisioner]]
+== Create a Storage Class for Your GlusterFS Dynamic Provisioner

-
-
-== Create a StorageClass for our GlusterFS Dynamic Provisioner
-
-xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Storage Classes]
-are used to manage and enable Persistent Storage in {product-title}. Below is an example of a _Storage Class_ that will request
-5GB of on-demand storage to be used with our _HelloWorld_ application.
-
+xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Storage
+classes] manage and enable persistent storage in {product-title}.
+Below is an example of a _Storage class_ requesting 5GB of on-demand
+storage to be used with your _HelloWorld_ application.

 ====
 ----
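
(Context: the unchanged body of the *_gluster-storage-class.yaml_* example sits in the lines the diff omits between this hunk and the next. A minimal sketch of the full file, based on the callouts and values visible in the surrounding hunks; the `apiVersion` line is an assumption, as it never appears in the diff:)

----
apiVersion: storage.k8s.io/v1beta1   # assumption; not shown in the diff
kind: StorageClass
metadata:
  name: gluster-heketi <1>
provisioner: kubernetes.io/glusterfs <2>
parameters:
  endpoint: "heketi-storage-endpoints" <3>
  resturl: "http://10.42.0.0:8080" <4>
  restuser: "joe" <5>
  restuserkey: "My Secret Life" <6>
----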
@@ -108,32 +120,37 @@ provisioner: kubernetes.io/glusterfs <2>
 restuser: "joe" <5>
 restuserkey: "My Secret Life" <6>
 ----
-<1> Name of the Storage Class.
-<2> Provisioner.
-<3> GlusterFS defined EndPoint (oc get endpoints).
-<4> Heketi REST Url, taken from Step 1 above (echo $HEKETI_CLI_SERVER).
-<5> Restuser, can be anything since authorization is turned off.
-<6> Restuserkey, like Restuser, can be anything.
-
+<1> Name of the storage class.
+<2> The provisioner.
+<3> The GlusterFS-defined endpoint (`oc get endpoints`).
+<4> Heketi REST URL, taken from Step 1 above (`echo $HEKETI_CLI_SERVER`).
+<5> Rest username. This can be any value since authorization is turned off.
+<6> Rest user key. This can be any value.
+====

-Create the Storage Class YAML file. Save it. Then submit it to {product-title}.
+. Create the Storage Class YAML file, save it, then submit it to {product-title}:
++
 ----
-oc create -f gluster-storage-class.yaml
+$ oc create -f gluster-storage-class.yaml
 storageclass "gluster-heketi" created
 ----

-View the Storage Class.
+. View the storage class:
++
 ----
-oc get storageclass
+$ oc get storageclass
 NAME             TYPE
 gluster-heketi   kubernetes.io/glusterfs
 ----
-====

-== Create a PersistentVolumeClaim (PVC) to request storage for our HelloWorld application.
+[[create-a-pvc-ro-request-storage-for-your-application]]
+== Create a PVC to Request Storage for Your Application

-Next, we will create a PVC that will request 5GB of storage, at which time, the Dynamic Provisioning Framework and Heketi
-will automatically provision a new GlusterFS volume and generate the PersistentVolume (PV) object.
+. Create a persistent volume claim (PVC) requesting 5GB of storage.
++
+During that time, the Dynamic Provisioning Framework and Heketi will
+automatically provision a new GlusterFS volume and generate the persistent volume
+(PV) object:

 ====
 ----
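
(Context: the diff likewise omits the unchanged body of *_gluster-pvc.yaml_* between this hunk and the next. A sketch of what it plausibly contains, assuming the beta storage-class annotation key of that Kubernetes era and a `ReadWriteOnce` access mode, which matches the `RWO` shown in the `oc get pvc` output:)

----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi <1>  # assumed annotation key
spec:
  accessModes:
  - ReadWriteOnce   # assumption, inferred from the RWO access mode shown later
  resources:
    requests:
      storage: 5Gi <2>
----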
@@ -150,35 +167,39 @@ spec:
   requests:
     storage: 5Gi <2>
 ----
-<1> The Kubernetes Storage Class annotation and the name of the Storage Class.
+<1> The Kubernetes storage class annotation and the name of the storage class.
 <2> The amount of storage requested.
+====

-
-Create the PVC YAML file. Save it. Then submit it to {product-title}.
+. Create the PVC YAML file, save it, then submit it to {product-title}:
++
 ----
-oc create -f gluster-pvc.yaml
+$ oc create -f gluster-pvc.yaml
 persistentvolumeclaim "gluster1" created
 ----

-View the PVC, and notice that it is bound to a dynamically created volume.
+. View the PVC:
++
 ----
-oc get pvc
+$ oc get pvc
 NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
 gluster1   Bound     pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           14h
 ----
++
+Notice that the PVC is bound to a dynamically created volume.

-Also view the Persistent Volume (PV).
+. View the persistent volume (PV):
++
 ----
-oc get pv
+$ oc get pv
 NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM              REASON    AGE
 pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           Delete          Bound     default/gluster1             14h
 ----
-====

-== Create a NGINX pod that uses the PVC
+== Create a NGINX Pod That Uses the PVC

-At this point we have a dynamically created GlusterFS volume, bound to a PersistentVolumeClaim, we can now utilize this claim
-in a pod. We will create a simple NGINX pod.
+At this point, you have a dynamically created GlusterFS volume, bound to a PVC.
+Now, you can use this claim in a pod. Create a simple NGINX pod:

 ====
 ----
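
(Context: the *_nginx-pod.yaml_* definition is also elided except for its tail, which appears at the top of the next hunk. A minimal sketch; only `claimName: gluster1` and the `/usr/share/nginx/html` mount path are confirmed by the diff, while the image and volume name are illustrative:)

----
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
spec:
  containers:
  - name: gluster-pod1
    image: nginx                         # assumption; image not shown in the diff
    volumeMounts:
    - name: gluster-vol1                 # illustrative volume name
      mountPath: /usr/share/nginx/html   # matches the path used in the exec step
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 <1>
----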
@@ -205,45 +226,57 @@ spec:
       persistentVolumeClaim:
         claimName: gluster1 <1>
 ----
-<1> The name of the PVC created in previous step.
-
+<1> The name of the PVC created in the previous step.
+====

-Create the Pod YAML file. Save it. Then submit it to {product-title}.
+. Create the Pod YAML file, save it, then submit it to {product-title}:
++
 ----
-oc create -f nginx-pod.yaml
+$ oc create -f nginx-pod.yaml
 pod "gluster-pod1" created
 ----

-View the Pod (Give it a few minutes, it might need to download the image if it doesn't already exist).
+. View the pod:
++
 ----
-oc get pods -o wide
+$ oc get pods -o wide
 NAME                               READY     STATUS    RESTARTS   AGE   IP               NODE
-nginx-pod                          1/1       Running   0          9m    10.38.0.0        node1
+nginx-pod                          1/1       Running   0          9m    10.38.0.0        node1
 glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d    192.168.10.100   node0
 glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d    192.168.10.101   node1
 glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d    192.168.10.102   node2
 heketi-3017632314-yyngh            1/1       Running   0          1d    10.42.0.0        node0
 ----
++
+[NOTE]
+====
+This may take a few minutes, as the pod may need to download the image if it does not already exist.
+====

-Now we will exec into the container and create an index.html file in the mountPath definition of the Pod.
+. `oc exec` into the container and create an *_index.html_* file in the
+`mountPath` definition of the pod:
++
 ----
-oc exec -ti nginx-pod /bin/sh
+$ oc exec -ti nginx-pod /bin/sh
 $ cd /usr/share/nginx/html
 $ echo 'Hello World from GlusterFS!!!' > index.html
 $ ls
 index.html
 $ exit
 ----

-Using the _curl_ command from the master node, curl the URL of the pod.
+. Using the `curl` command from the master node, `curl` the URL of the pod:
++
 ----
-curl http://10.38.0.0
+$ curl http://10.38.0.0
 Hello World from GlusterFS!!!
 ----

-Lastly, check our gluster pod, to see the index.html file that was written. Choose any of the gluster pods.
+. Check your Gluster pod to ensure that the *_index.html_* file was written.
+Choose any of the Gluster pods:
++
 ----
-oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
+$ oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
 $ mount | grep heketi
 /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
 /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
@@ -252,9 +285,6 @@ $ mount | grep heketi
 $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
 $ ls
 index.html
-$ cat index.html
+$ cat index.html
 Hello World from GlusterFS!!!
 ----
-====
-
-