Authentication with default prometheus in 3.11 #92

I have a fresh install of OpenShift Enterprise 3.11 which secures access to a Prometheus instance with oauth-proxy, in prometheus-k8s-X/prometheus-proxy. I am trying to log in with credentials for admin (who has the cluster-admin role), but upon submitting them I just get an HTTP 200 response and the same login screen. The proxy log shows me that I wasn't allowed, yet when I enabled the audit log on the master API it seems that I should be. Where is the problem?
So I've figured out that …

Now, trying to get in 'as service' works when I do …

And one more question: where do I set the list of approved callbacks? I don't see anything explicit in the oauth-proxy arguments.
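For context on the approved-callbacks question: when oauth-proxy runs with `-openshift-service-account`, the service account itself acts as the OAuth client, and its approved redirect URIs come from OAuth redirect annotations on that service account. A minimal sketch, assuming a route named `prometheus-k8s` in the `openshift-monitoring` namespace (both names are assumptions, not taken from this thread):

```sh
# Approve the route's host as an OAuth callback for the prometheus-k8s SA.
# Route name and namespace here are assumptions for illustration.
oc annotate serviceaccount prometheus-k8s -n openshift-monitoring \
  serviceaccounts.openshift.io/oauth-redirectreference.primary='{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"prometheus-k8s"}}'
```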
@rvansa can you post your oauth-proxy pod configuration and some more details on the exact steps you are taking to log in?
@mrogers950 This is the proxy container configuration; Prometheus is running in the same pod with port 9090 open:

```yaml
- args:
  - -provider=openshift
  - -https-address=:9091
  - -http-address=
  - -email-domain=*
  - -upstream=http://localhost:9090
  - -htpasswd-file=/etc/proxy/htpasswd/auth
  - -openshift-service-account=prometheus-k8s
  - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
  - '-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
  - -tls-cert=/etc/tls/private/tls.crt
  - -tls-key=/etc/tls/private/tls.key
  - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
  - -cookie-secret-file=/etc/proxy/secrets/session_secret
  - -openshift-ca=/etc/pki/tls/cert.pem
  - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  - -skip-auth-regex=^/metrics
  image: registry.redhat.io/openshift3/oauth-proxy:v3.11
  imagePullPolicy: IfNotPresent
  name: prometheus-proxy
  ports:
  - containerPort: 9091
    name: web
    protocol: TCP
  resources: {}
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /etc/tls/private
    name: secret-prometheus-k8s-tls
  - mountPath: /etc/proxy/secrets
    name: secret-prometheus-k8s-proxy
  - mountPath: /etc/proxy/htpasswd
    name: secret-prometheus-k8s-htpasswd
```

I was tunneling the connection through … (note that the redirect URI is localhost:9091), which gives me this:

```json
{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed.","state":"4435a90ee62810bff60c24c9c4a567df:/"}
```

After adding the server's IP to my /etc/hosts and accessing … From within the cluster (no SSH tunneling), I've tried to do …, following the redirects: …
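A note on testing this kind of setup: because the config above sets `-openshift-delegate-urls`, the proxy can also accept a bearer token directly (it performs a token review plus the configured SAR check), which avoids the browser login flow entirely. A minimal sketch; the service hostname below is an assumption:

```sh
# Hypothetical smoke test from inside the cluster; the service DNS name is
# assumed. -openshift-delegate-urls lets the proxy authenticate the request
# from the Authorization header instead of the OAuth redirect dance.
TOKEN=$(oc whoami -t)
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=up"
```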
The second problem I have is when I try to connect as a service account: …

Since the pre-configured Grafana is getting its data from Prometheus, I've tried to use its SA for the test: …

So now I don't understand what keys Grafana could be using: …

Note that the above hashes are different from what …
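For comparison, the usual way to obtain a service account's token in 3.11 is `oc sa get-token`, or decoding the SA's token secret by hand; the namespace and SA name below are assumptions:

```sh
# Print the grafana service account's bearer token (names assumed).
oc sa get-token grafana -n openshift-monitoring

# Equivalent, assuming the SA's first listed secret is its token secret:
oc get secret -n openshift-monitoring \
  $(oc get sa grafana -n openshift-monitoring -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d
```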
I have nearly the same problem. I can't get it to work with oauth-proxy, and I get the same error message: …
My assumption that Grafana has the needed RBAC was incorrect. In fact it has no rights of its own; when I accessed it (as admin), it impersonated me when accessing Prometheus. That's why the tokens did not work. The tokens that I get using …
@Reamer There's a good chance that this is caused by the …
@rvansa Did you get it to work? Did you give Grafana any RBAC? If yes, which and how?
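Since the proxy's SAR check in the config above is `{"resource": "namespaces", "verb": "get"}`, any client service account would need exactly that permission to pass. A minimal sketch of granting it; the role name is made up for illustration:

```sh
# Create a cluster role matching the proxy's -openshift-sar check and bind it
# to the grafana service account ("prometheus-access" is a hypothetical name).
oc create clusterrole prometheus-access --verb=get --resource=namespaces
oc adm policy add-cluster-role-to-user prometheus-access -z grafana -n openshift-monitoring

# Verify what the SAR will see:
oc auth can-i get namespaces --as=system:serviceaccount:openshift-monitoring:grafana
```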
Using a reverse proxy for OpenShift AuthN is very useful, but I think integration with standard tools like Grafana or Prometheus federation (a Prometheus server as scrape target) is missing or not supported yet. I managed to connect from Grafana to the default OpenShift Prometheus deployed by the Operator: …
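The commenter's actual configuration was not captured above, but a Grafana datasource that passes a service-account bearer token through the proxy typically looks like the following provisioning file; every name, URL, and token placeholder here is an assumption:

```sh
# Hypothetical Grafana datasource provisioning file: authenticate to the
# oauth-proxy with a static Authorization header carrying an SA token.
cat > /etc/grafana/provisioning/datasources/prometheus.yaml <<'EOF'
apiVersion: 1
datasources:
- name: prometheus-k8s
  type: prometheus
  access: proxy
  url: https://prometheus-k8s.openshift-monitoring.svc:9091
  jsonData:
    tlsSkipVerify: true
    httpHeaderName1: Authorization
  secureJsonData:
    httpHeaderValue1: "Bearer <service-account-token>"
EOF
```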
Hi, I'm encountering this issue after redeploying all certificates inside my cluster; the certs of Grafana/Prometheus were not working anymore. I solved that part with the doc below: … Now I can access the Grafana dashboard and it pulls data from Prometheus, but the Prometheus dashboard is still not accessible.
@aliouba What about the value that's in the secret? Shouldn't we use the htpasswd file that's being used by the OpenShift cluster?
@aliouba I stumbled upon this issue just now. Where do I store the htpasswd file? I already have an htpasswd file for OpenShift and configured my authentication with it in master-config.yml.
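On where the file lives in this deployment: the pod configuration earlier in the thread mounts an htpasswd secret at `/etc/proxy/htpasswd` and points `-htpasswd-file` at the `auth` key inside it, so the file is stored as a Kubernetes secret, not reused from the masters' master-config. A sketch of (re)creating it; the secret name `prometheus-k8s-htpasswd` is an assumption based on the volume name above:

```sh
# oauth-proxy validates SHA-encrypted htpasswd entries, hence -s.
htpasswd -c -s ./auth someuser

# Recreate the secret that the pod mounts at /etc/proxy/htpasswd
# (secret name assumed from the volume name in the config above).
oc create secret generic prometheus-k8s-htpasswd --from-file=auth=./auth \
  -n openshift-monitoring --dry-run -o yaml | oc replace -f -
```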