
Authentication with default prometheus in 3.11 #92


Closed
rvansa opened this issue Oct 26, 2018 · 10 comments


rvansa commented Oct 26, 2018

I have a fresh install of OpenShift Enterprise 3.11, which secures access to a Prometheus instance with oauth2_proxy (in prometheus-k8s-X/prometheus-proxy). I am trying to log in with credentials for admin (which has the cluster-admin role), but upon sending them I just get an HTTP 200 response and the same login screen. The log shows me that I wasn't allowed:

2018/10/26 15:00:47 provider.go:382: authorizer reason: no RBAC policy matched

But after enabling the audit log on the master API, it seems that I should be allowed:

{"kind":"Event",
 "apiVersion":"audit.k8s.io/v1beta1",
 "metadata":{"creationTimestamp":"2018-10-26T15:00:47Z"},
 "level":"Metadata",
 "timestamp":"2018-10-26T15:00:47Z",
 "auditID":"eab256ee-7acc-4e67-af49-98f9db052f55",
 "stage":"RequestReceived",
 "requestURI":"/apis/authorization.k8s.io/v1beta1/subjectaccessreviews",
 "verb":"create",
 "user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"50806ab7-d62f-11e8-b3d3-ecb1d78a7a18","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},
 "sourceIPs":["10.128.0.82"],
 "objectRef":{"resource":"subjectaccessreviews","apiGroup":"authorization.k8s.io","apiVersion":"v1beta1"},
 "requestReceivedTimestamp":"2018-10-26T15:00:47.897449Z",
 "stageTimestamp":"2018-10-26T15:00:47.897449Z"
}
{"kind":"Event",
 "apiVersion":"audit.k8s.io/v1beta1",
 "metadata":{"creationTimestamp":"2018-10-26T15:00:47Z"},
 "level":"Metadata",
 "timestamp":"2018-10-26T15:00:47Z",
 "auditID":"eab256ee-7acc-4e67-af49-98f9db052f55",
 "stage":"ResponseComplete",
 "requestURI":"/apis/authorization.k8s.io/v1beta1/subjectaccessreviews",
 "verb":"create",
 "user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"50806ab7-d62f-11e8-b3d3-ecb1d78a7a18","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},
 "sourceIPs":["10.128.0.82"],
 "objectRef":{"resource":"subjectaccessreviews","apiGroup":"authorization.k8s.io","apiVersion":"v1beta1"},
 "responseStatus":{"metadata":{},"code":201},
 "requestReceivedTimestamp":"2018-10-26T15:00:47.897449Z",
 "stageTimestamp":"2018-10-26T15:00:47.897991Z",
 "annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"prometheus-k8s\" of ClusterRole \"prometheus-k8s\" to ServiceAccount \"prometheus-k8s/openshift-monitoring\""}
}

Where is the problem?

rvansa commented Oct 29, 2018

So I've figured out that:

  1. The login/password provided directly to oauth_proxy won't get me in; I should click the 'Log in with OpenShift' button (which was spitting an error message about missing params).
  2. While I was tunneling the connection to the lab through ssh (-L9091:prometheus-k8s.openshift-monitoring.svc.cluster.local:9091), it gives me a not-approved callback upon oauth/start. I need to add <ip> prometheus-k8s-openshift-monitoring.router.default.svc.cluster.local to my /etc/hosts and access that address instead.

Now, getting in 'as a service' works when I do curl ... -H "Authorization: Bearer $(oc whoami -t)", i.e. as a cluster admin. However, obtaining a service token, e.g. with oc sa get-token grafana, gives me a much longer token than oc whoami -t, and I am not let in. The Grafana UI works (that's why I tried to use its token).

And one more question: where do I set the list of approved callbacks? I don't see anything explicit in the oauth_proxy arguments shown by oc describe po prometheus-k8s-0.

@mrogers950

@rvansa can you post your oauth-proxy pod configuration and some more details on the exact steps you are taking to log in?
-redirect-url is how you can change the callback; however, if you are correctly using a proxy SA that has an associated OAuth client, you should not normally need to set it.
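For reference, a minimal sketch of the service-account-as-OAuth-client convention, where approved callbacks come from redirect-URI annotations on the proxy's SA. Nothing here touches a cluster; the script only builds and prints the oc command. The route hostname and the annotation key suffix ("custom") are placeholders, not values from this cluster:

```shell
#!/bin/sh
# Sketch only: with the SA-as-OAuth-client pattern, allowed OAuth callbacks
# are read from serviceaccounts.openshift.io/oauth-redirecturi.* annotations
# on the service account. The URI below is a placeholder.
NS=openshift-monitoring
SA=prometheus-k8s
URI="https://prometheus-k8s-openshift-monitoring.router.default.svc.cluster.local/oauth/callback"
CMD="oc -n $NS annotate sa $SA serviceaccounts.openshift.io/oauth-redirecturi.custom=$URI"
# Print the command instead of running it, so this stays cluster-independent.
echo "$CMD"
```

Running the printed command (on a cluster where that route exists) would register the extra callback without needing -redirect-url on the proxy.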


rvansa commented Oct 30, 2018

@mrogers950 This is the proxy container configuration; Prometheus is running in the same pod, with 9090 open:

- args:
        - -provider=openshift
        - -https-address=:9091
        - -http-address=
        - -email-domain=*
        - -upstream=http://localhost:9090
        - -htpasswd-file=/etc/proxy/htpasswd/auth
        - -openshift-service-account=prometheus-k8s
        - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
        - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
        - -tls-cert=/etc/tls/private/tls.crt
        - -tls-key=/etc/tls/private/tls.key
        - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
        - -cookie-secret-file=/etc/proxy/secrets/session_secret
        - -openshift-ca=/etc/pki/tls/cert.pem
        - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - -skip-auth-regex=^/metrics
        image: registry.redhat.io/openshift3/oauth-proxy:v3.11
        imagePullPolicy: IfNotPresent
        name: prometheus-proxy
        ports:
        - containerPort: 9091
          name: web
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: secret-prometheus-k8s-tls
        - mountPath: /etc/proxy/secrets
          name: secret-prometheus-k8s-proxy
        - mountPath: /etc/proxy/htpasswd
          name: secret-prometheus-k8s-htpasswd

I was tunneling the connection through ssh myserver -L9091:prometheus-k8s.openshift-monitoring.svc.cluster.local:9091 (tunneling to the service address), and when I access https://localhost:9091/ in the browser I can see the login page with 'Log in with OpenShift' and a credentials form. Sending the credentials form gives me the same screen again, though, without any errors. When I press 'Log in with OpenShift' I am redirected to

https://myserver.foo.bar.com:8443/oauth/authorize?approval_prompt=force&client_id=system%3Aserviceaccount%3Aopenshift-monitoring%3Aprometheus-k8s&redirect_uri=https%3A%2F%2Flocalhost%3A9091%2Foauth%2Fcallback&response_type=code&scope=user%3Ainfo+user%3Acheck-access&state=95ec5aa924e0d6c1a773d63d633bfe32%3A%2F

(note that the redirect URI is localhost:9091), which gives me this:

{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed.","state":"4435a90ee62810bff60c24c9c4a567df:/"}

After adding the server's IP to my /etc/hosts and accessing https://prometheus-k8s-openshift-monitoring.router.default.svc.cluster.local (this is the router address), I can open the login page with 'Log in with OpenShift' and a credentials form (see https://imgur.com/a/SyCvPmz ). Sending the credentials form does not work either and gives me the same screen again. 'Log in with OpenShift' does work on this screen.

From within the cluster (no ssh tunneling), I've tried the following, chasing the redirects:

> curl -k -v https://prometheus-k8s.openshift-monitoring.svc.cluster.local:9091/oauth/start?rd=/
> curl -k -v "https://myserver.foo.bar.com:8443/oauth/authorize?approval_prompt=force&client_id=system%3Aserviceaccount%3Aopenshift-monitoring%3Aprometheus-k8s&redirect_uri=https%3A%2F%2Fprometheus-k8s.openshift-monitoring.svc.cluster.local%3A9091%2Foauth%2Fcallback&response_type=code&scope=user%3Ainfo+user%3Acheck-access&state=df83fd4eac4cc71a878daeedce33449f%3A%2F"

{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed.","state":"df83fd4eac4cc71a878daeedce33449f:/"}

The second problem I have is when I try to connect as a service account. Since the pre-configured Grafana is getting its data from Prometheus, I tried to use its SA for the test:

> oc sa get-token grafana
eyJhbGci...(long hash redacted)...7MErz7KtiA
> curl -k -v https://prometheus-k8s-openshift-monitoring.router.default.svc.cluster.local -H 'Authorization: Bearer eyJhbGci...(long hash redacted)...7MErz7KtiA'
<gives me the oauth_proxy login page>
> oc whoami -t
<pretty short hash>
> curl -k -v https://prometheus-k8s-openshift-monitoring.router.default.svc.cluster.local -H 'Authorization: Bearer <pretty short hash>'
works!

So now I don't understand what keys Grafana could be using:

> oc describe sa grafana
... (redacted)
Tokens:              grafana-token-fjrvf
                     grafana-token-sph8s
> oc get secret grafana-token-fjrvf -o custom-columns=:data.token

ZXlKaGJHY2lPaUpTV...(long hash redacted)...jFxdGw3TUVyejdLdGlB
> oc get secret grafana-token-sph8s -o custom-columns=:data.token

ZXlKaGJHY2lPaUpTV...(long hash redacted)...iUE8zM0xjck5FajV3Sk5FN2d3

Note that the hashes above are different from what oc sa get-token grafana gave me, and neither of them works when I pass it as Authorization: Bearer <token>.


Reamer commented Oct 31, 2018

I have nearly the same problem; I cannot get it to work with oauth-proxy. I get the same error message:

https://s-openshift.mycompany.com/oauth/authorize?approval_prompt=force&client_id=system%3Aserviceaccount%3Aopenshift-monitoring%3Aprometheus-k8s&redirect_uri=https%3A%2F%2Fprometheus-xxx.s-apps.cloud.mycompany.com%2Foauth%2Fcallback&response_type=code&scope=user%3Ainfo+user%3Acheck-access&state=4288120771568032b9a4a1fe69160695%3A%2F

{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed.","state":"df83fd4eac4cc71a878daeedce33449f:/"}


rvansa commented Nov 7, 2018

My assumption that Grafana has the needed RBAC was incorrect. In fact it has no rights of its own; when I accessed it (as admin), it impersonated me when accessing Prometheus. That's why the tokens did not work.

The tokens that I get using oc get secret ..., like ZXlKaGJHY2lPaUpTV..., are base64-encoded; if I want to use them I have to decode them first.
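The decoding step can be sketched like this (the token value here is a harmless made-up placeholder, not a real credential):

```shell
#!/bin/sh
# A secret's .data fields are base64-encoded; decode before using the value
# as a bearer token. "eyJhbGciOiJSUzI1NiJ9" is a placeholder standing in for
# the raw token the proxy expects.
RAW='eyJhbGciOiJSUzI1NiJ9'
ENCODED=$(printf '%s' "$RAW" | base64)        # what `oc get secret` prints
DECODED=$(printf '%s' "$ENCODED" | base64 -d) # what goes in the Bearer header
printf '%s\n' "$DECODED"
```

With a real secret, the same idea would be piping the `oc get secret ... -o custom-columns=:data.token` output through `base64 -d` before using it in Authorization: Bearer.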

@rvansa rvansa closed this as completed Nov 7, 2018

rvansa commented Nov 7, 2018

@Reamer There's a good chance that this is caused by the redirect_uri as well. Try accessing it via the URL that you get from oc get route prometheus-k8s -n openshift-monitoring. I'm not sure how to change it if you want to use a different domain for access.

@maxberndt1234

> My assumption that Grafana has the needed RBAC was incorrect. In fact it has no rights and when I've accessed it (as admin) it impersonated me accessing Prometheus. That's why the tokens did not work.

@rvansa Did you get it to work? Did you give Grafana any RBAC? If yes, which roles and how?
I have a similar problem, described here: #106


aliouba commented Jul 26, 2019

Using a reverse proxy for OpenShift authentication is very useful, but I think integrations with standard tools like Grafana or Prometheus federation (a Prometheus server target) are missing or not supported yet.

I managed to connect from Grafana to the default OpenShift Prometheus deployed by the Operator:

  1. Generate a new user entry with htpasswd:
 htpasswd -s -c <file> <username>
  2. Encode the new entry into base64:
echo -n "<new user entry>" | base64
  3. Edit the secret, pasting in the base64-encoded entry:
oc -n openshift-monitoring edit secret prometheus-k8s-htpasswd
  4. Make sure that the Prometheus pods use the new secret.
  5. Now you can log in from your new Grafana to the default OpenShift Prometheus.
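Steps 1-2 above can be sketched without touching a cluster. This assumes the {SHA} scheme that htpasswd -s produces (openssl stands in for htpasswd so nothing extra is required); the username and password are placeholders:

```shell
#!/bin/sh
# Step 1: build an htpasswd entry in the {SHA} format produced by `htpasswd -s`
# (username:{SHA}base64(sha1(password))).
# Step 2: base64-encode the entry for pasting into the
# prometheus-k8s-htpasswd secret. "internal"/"s3cret" are placeholders.
USER=internal
PASS=s3cret
HASH=$(printf '%s' "$PASS" | openssl dgst -sha1 -binary | base64)
ENTRY="$USER:{SHA}$HASH"
printf '%s' "$ENTRY" | base64
```

The printed base64 blob is what would replace the auth key's value when editing the secret in step 3.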


ludovicbonivert commented Aug 1, 2019

Hi,

I'm encountering this issue after redeploying all certificates inside my cluster; afterwards the Grafana/Prometheus certs were no longer working. I solved that part with the doc below:
openshift/openshift-ansible@e24cee2

Now I can access the Grafana dashboard and it pulls data from Prometheus; however, the Prometheus dashboard is still not accessible:


 {"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed.","

@aliouba What about the value that's already in the secret? Shouldn't we use the htpasswd file that's being used by the cluster?
I have been trying your technique, and afterwards my Grafana can't pull data from Prometheus anymore.

@karthik101

@aliouba I stumbled upon this issue now. Where do I store the htpasswd file? I already have an htpasswd file for OpenShift and configured my authentication with it in master-config.yml.
I still get the error when visiting the Grafana route; should I base64-encode the whole user content of the htpasswd file?
