# Profiling with Pprof

!!! warning
    The pprof bind address flag is available as an alpha release and is subject to change in future versions.

[Pprof][pprof] is a useful tool for analyzing memory and CPU usage profiles. However, it is not
recommended to enable it by default in production environments. While it is great for troubleshooting,
keeping it enabled can introduce performance concerns and potential information leaks.

Both components allow you to enable pprof by specifying the port it should bind to using the
`pprof-bind-address` flag. However, you must ensure that each component uses a unique port; using
the same port for multiple components is not allowed. Additionally, you need to expose the corresponding
port in the Service configuration for each component.

The following steps demonstrate the changes required to enable pprof for Operator-Controller and CatalogD.

## Enabling pprof for gathering the data

### For Operator-Controller

1. Run the following command to patch the Deployment and add the `--pprof-bind-address=:8082` flag:

```shell
kubectl patch deployment $(kubectl get deployments -n olmv1-system -l control-plane=operator-controller-controller-manager -o jsonpath='{.items[0].metadata.name}') \
-n olmv1-system --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--pprof-bind-address=:8082"
  }
]'
```
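
Optionally, wait for the patched Deployment to finish rolling out before continuing. This is a quick sanity check, assuming the label selector above matches exactly one Deployment:

```shell
kubectl rollout status deployment \
  $(kubectl get deployments -n olmv1-system -l control-plane=operator-controller-controller-manager -o jsonpath='{.items[0].metadata.name}') \
  -n olmv1-system
```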

2. Once pprof is enabled, you need to expose port `8082` in the Service to make it accessible:

```shell
kubectl patch service operator-controller-service -n olmv1-system --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/ports/-",
    "value": {
      "name": "pprof",
      "port": 8082,
      "targetPort": 8082,
      "protocol": "TCP"
    }
  }
]'
```
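
To confirm the new port was added, you can list the Service's ports; this is an optional, illustrative check:

```shell
kubectl get service operator-controller-service -n olmv1-system -o jsonpath='{.spec.ports}'
```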

3. Create a Pod with `curl` so you can generate the report:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: curl-oper-con-pprof
  namespace: olmv1-system
spec:
  serviceAccountName: operator-controller-controller-manager
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: curl
    image: curlimages/curl:latest
    command:
      - sh
      - -c
      - sleep 3600
    securityContext:
      runAsNonRoot: true
      readOnlyRootFilesystem: false
      runAsUser: 1000
      runAsGroup: 1000
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
  restartPolicy: Never
  volumes:
    - name: tmp-volume
      emptyDir: {}
EOF
```

4. Run the following command to generate a token for authentication:

```shell
TOKEN=$(kubectl create token operator-controller-controller-manager -n olmv1-system)
echo $TOKEN
```
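
By default, tokens created this way are short-lived. If the capture may take longer, you can request a longer-lived token with the `--duration` flag:

```shell
TOKEN=$(kubectl create token operator-controller-controller-manager -n olmv1-system --duration=1h)
```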

5. Run the following command to generate the report in the Pod:

```shell
kubectl exec -it curl-oper-con-pprof -n olmv1-system -- sh -c \
"curl -s -k -H \"Authorization: Bearer $TOKEN\" \
http://operator-controller-service.olmv1-system.svc.cluster.local:8082/debug/pprof/profile > /tmp/operator-controller-profile.pprof"
```
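
The `/debug/pprof/profile` endpoint samples CPU usage for 30 seconds by default. To capture a longer window, pass the `seconds` query parameter (the URL is quoted so the Pod's shell does not interpret `?`):

```shell
kubectl exec -it curl-oper-con-pprof -n olmv1-system -- sh -c \
"curl -s -k -H \"Authorization: Bearer $TOKEN\" \
'http://operator-controller-service.olmv1-system.svc.cluster.local:8082/debug/pprof/profile?seconds=60' > /tmp/operator-controller-profile.pprof"
```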

6. Verify that the report was successfully created:

```shell
kubectl exec -it curl-oper-con-pprof -n olmv1-system -- ls -lh /tmp/
```

7. Copy the result to your local environment:

```shell
kubectl cp olmv1-system/curl-oper-con-pprof:/tmp/operator-controller-profile.pprof ./operator-controller-profile.pprof
```

You may see the warning `tar: removing leading '/' from member names`; it is harmless and can be ignored.

8. Finally, use pprof to analyze the result:

```shell
go tool pprof -http=:8080 ./operator-controller-profile.pprof
```
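
If you prefer a quick text summary instead of the web UI, pprof can print the top consumers directly:

```shell
go tool pprof -top ./operator-controller-profile.pprof
```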

### For CatalogD

1. Run the following command to patch the Deployment and add the `--pprof-bind-address=:8083` flag:

```shell
kubectl patch deployment $(kubectl get deployments -n olmv1-system -l control-plane=catalogd-controller-manager -o jsonpath='{.items[0].metadata.name}') \
-n olmv1-system --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--pprof-bind-address=:8083"
  }
]'
```
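
You can confirm the flag was added by inspecting the container's arguments; this is an optional, illustrative check:

```shell
kubectl get deployment -n olmv1-system -l control-plane=catalogd-controller-manager \
  -o jsonpath='{.items[0].spec.template.spec.containers[0].args}'
```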

2. Once pprof is enabled, you need to expose port `8083` in the `Service` to make it accessible:

```shell
kubectl patch service $(kubectl get service -n olmv1-system -l app.kubernetes.io/part-of=olm,app.kubernetes.io/name=catalogd -o jsonpath='{.items[0].metadata.name}') \
-n olmv1-system --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/ports/-",
    "value": {
      "name": "pprof",
      "port": 8083,
      "targetPort": 8083,
      "protocol": "TCP"
    }
  }
]'
```

3. Create a Pod with `curl` so you can generate the report:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: curl-catalogd-pprof
  namespace: olmv1-system
spec:
  serviceAccountName: catalogd-controller-manager
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: curl
    image: curlimages/curl:latest
    command:
      - sh
      - -c
      - sleep 3600
    securityContext:
      runAsNonRoot: true
      readOnlyRootFilesystem: false
      runAsUser: 1000
      runAsGroup: 1000
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
  restartPolicy: Never
  volumes:
    - name: tmp-volume
      emptyDir: {}
EOF
```

4. Run the following command to generate a token for authentication:

```shell
TOKEN=$(kubectl create token catalogd-controller-manager -n olmv1-system)
echo $TOKEN
```

5. Run the following command to generate the report in the Pod:

```shell
kubectl exec -it curl-catalogd-pprof -n olmv1-system -- sh -c \
"curl -s -k -H \"Authorization: Bearer $TOKEN\" \
http://catalogd-service.olmv1-system.svc.cluster.local:8083/debug/pprof/profile > /tmp/catalogd-profile.pprof"
```
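
The same endpoint family exposes other profile types as well. For example, to capture a heap (memory) profile instead of a CPU profile:

```shell
kubectl exec -it curl-catalogd-pprof -n olmv1-system -- sh -c \
"curl -s -k -H \"Authorization: Bearer $TOKEN\" \
http://catalogd-service.olmv1-system.svc.cluster.local:8083/debug/pprof/heap > /tmp/catalogd-heap.pprof"
```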

6. Verify that the report was successfully created:

```shell
kubectl exec -it curl-catalogd-pprof -n olmv1-system -- ls -lh /tmp/
```

7. Copy the result to your local environment:

```shell
kubectl cp olmv1-system/curl-catalogd-pprof:/tmp/catalogd-profile.pprof ./catalogd-profile.pprof
```

8. Finally, use pprof to analyze the result:

```shell
go tool pprof -http=:8080 ./catalogd-profile.pprof
```
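
To compare two captures (for example, before and after a change), pprof supports a diff base. This is a sketch that assumes you saved an earlier profile as `./catalogd-profile-before.pprof`:

```shell
go tool pprof -top -diff_base=./catalogd-profile-before.pprof ./catalogd-profile.pprof
```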

## Disabling pprof after gathering the data

### For Operator-Controller

1. Run the following command to set `--pprof-bind-address` to `0`, which disables the endpoint:

```shell
kubectl patch deployment $(kubectl get deployments -n olmv1-system -l control-plane=operator-controller-controller-manager -o jsonpath='{.items[0].metadata.name}') \
-n olmv1-system --type='json' -p='[
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/args",
    "value": ["--pprof-bind-address=0"]
  }
]'
```
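
Note that this `replace` operation overwrites the container's entire `args` list; if your Deployment passes other flags, include them in the `value` array as well. You can confirm the resulting arguments with an illustrative check:

```shell
kubectl get deployment -n olmv1-system -l control-plane=operator-controller-controller-manager \
  -o jsonpath='{.items[0].spec.template.spec.containers[0].args}'
```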

2. Try to generate the report as done previously. The connection should now be refused:

```shell
kubectl exec -it curl-oper-con-pprof -n olmv1-system -- sh -c \
"curl -s -k -H \"Authorization: Bearer $TOKEN\" \
http://operator-controller-service.olmv1-system.svc.cluster.local:8082/debug/pprof/profile > /tmp/operator-controller-profile.pprof"
```

**NOTE:** If you wish, you can remove the Service port added for pprof and restart the Deployment:
`kubectl rollout restart deployment -n olmv1-system operator-controller-controller-manager`

3. Remove the Pod created to generate the report:

```shell
kubectl delete pod curl-oper-con-pprof -n olmv1-system
```

### For CatalogD

1. Run the following command to set `--pprof-bind-address` to `0`, which disables the endpoint:

```shell
kubectl patch deployment $(kubectl get deployments -n olmv1-system -l control-plane=catalogd-controller-manager -o jsonpath='{.items[0].metadata.name}') \
-n olmv1-system --type='json' -p='[
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/args",
    "value": ["--pprof-bind-address=0"]
  }
]'
```

2. Try to generate the report as done previously. The connection should now be refused:

```shell
kubectl exec -it curl-catalogd-pprof -n olmv1-system -- sh -c \
"curl -s -k -H \"Authorization: Bearer $TOKEN\" \
http://catalogd-service.olmv1-system.svc.cluster.local:8083/debug/pprof/profile > /tmp/catalogd-profile.pprof"
```

**NOTE:** If you wish, you can remove the Service port added for pprof and restart the Deployment:
`kubectl rollout restart deployment -n olmv1-system catalogd-controller-manager`

3. Remove the Pod created to generate the report:

```shell
kubectl delete pod curl-catalogd-pprof -n olmv1-system
```

[pprof]: https://github.com/google/pprof/blob/main/doc/README.md