You can adjust the CPU, memory, or storage resource requests and limits for pods and volumes. The default-resource-limits.yaml file below provides an example of setting resource requests and limits for each component:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi
      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
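After saving the configuration, apply it and confirm that the new values are reflected on the running pods. A minimal sketch, assuming the file is saved as default-resource-limits.yaml and metering is installed in the default openshift-metering namespace (pod names vary by cluster):

$ oc apply -f default-resource-limits.yaml
$ oc --namespace openshift-metering describe pod presto-coordinator-0 | grep -A 2 -E 'Limits|Requests'

The Metering Operator reconciles the change and redeploys the affected pods, so the requests and limits can take a short time to appear.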
You can run the metering components on specific sets of nodes. Set the nodeSelector on a metering component to control where the component is scheduled. The node-selectors.yaml file below provides an example of setting node selectors for each component:
Note

Add the openshift.io/node-selector: "" namespace annotation to the metering namespace before configuring specific node selectors for the operand pods. Apply "" as the annotation value.
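For example, one way to add this annotation with the CLI, assuming metering is installed in the default openshift-metering namespace:

$ oc annotate namespace openshift-metering openshift.io/node-selector=""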
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": "" (1)
  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
(1) Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use key-value pairs, based on the value specified for the node.
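For example, instead of the role-based selector shown above, a key-value pair can pin a component to one specific node. A hypothetical snippet for the reporting-operator, using the well-known kubernetes.io/hostname node label and a node name taken from the example cluster shown later in this section:

  reporting-operator:
    spec:
      nodeSelector:
        kubernetes.io/hostname: ip-10-0-150-175.us-east-2.compute.internal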
You can verify the metering node selectors by performing any of the following checks:
- Verify that all pods for metering are correctly scheduled on the IP of the node that is configured in the MeteringConfig custom resource:

  - Check all pods in the openshift-metering namespace:

    $ oc --namespace openshift-metering get pods -o wide

    The output shows the NODE and corresponding IP for each pod running in the openshift-metering namespace.

    Example output

    NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
    hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
    hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
    metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
    nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
    presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
    reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
  - Compare the nodes in the openshift-metering namespace to each node NAME in your cluster:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES    AGE   VERSION
    ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.20.0+6025c28
    ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.20.0+6025c28
    ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.20.0+6025c28
    ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.20.0+6025c28
    ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.20.0+6025c28
    ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.20.0+6025c28
- Verify that the node selector configuration in the MeteringConfig custom resource does not interfere with the cluster-wide node selector configuration such that no metering operand pods are scheduled:

  - Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where pods are scheduled by default:

    $ oc get schedulers.config.openshift.io cluster -o yaml
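    If a cluster-wide default node selector is set, it appears as a label-selector string in the spec of the Scheduler object. A hypothetical sketch of what the relevant fields might look like in the command output:

    apiVersion: config.openshift.io/v1
    kind: Scheduler
    metadata:
      name: cluster
    spec:
      defaultNodeSelector: node-role.kubernetes.io/infra=
      mastersSchedulable: false

    If the defaultNodeSelector value conflicts with the nodeSelector values in the MeteringConfig custom resource, no node satisfies both selectors and the metering operand pods cannot be scheduled.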