error: unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request #2001
Update: the issue is gone after I upgraded my cluster from v1.18.12 to v1.19.7. I don't know whether the cause was the Kubernetes version or hyperkube.
Feel free to close this issue; if you need more information, I will be very glad to help. Thanks!
Thanks @judexzhu, I will close this since it looks like your issue has been resolved. Packageserver, like any APIService, can have intermittent connectivity issues with the api-server (or the api-server is not configured to handle APIServices correctly). It should recover, but in some cases it may not. It may be hyperkube related.
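A quick way to confirm that diagnosis is to inspect the APIService's availability conditions; this is generic kubectl, not something reported in this thread:
# Check whether the packageserver APIService is reporting Available
kubectl get apiservice v1.packages.operators.coreos.com
# Show the detailed failure condition (reason and message)
kubectl get apiservice v1.packages.operators.coreos.com -o jsonpath='{.status.conditions}'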
I am experiencing the same issue on a k3s cluster. It's blocking me from deleting namespaces.
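For anyone else stuck here: when an aggregated API is unavailable, namespace deletion hangs because API discovery fails, and the namespace's status conditions usually name the offending group. A minimal sketch, with <stuck-namespace> as a placeholder; the APIService deletion is the workaround referenced later in this thread:
# See why the namespace is stuck Terminating; look for a discovery-failure
# condition naming packages.operators.coreos.com/v1
kubectl get namespace <stuck-namespace> -o jsonpath='{.status.conditions}'
# Workaround discussed in this thread: delete the stale APIService so
# discovery succeeds again (OLM recreates it once packageserver is healthy)
kubectl delete apiservice v1.packages.operators.coreos.com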
I have the same issue as the person above me. This is really annoying. Can this be fixed?
Thanks, guys!
Thank you @SoMuchForSubtlety, you saved my day!
@SoMuchForSubtlety @gisyrus I had a quite similar issue; the problem was that I had not opened the ports for the kube-apiserver to reach the OLM packageserver on port 5443. Maybe a less harmful solution than deleting the apiservice.
This seems to be the only correct solution.
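A quick way to check that reachability is to confirm the packageserver Service has healthy endpoints; the service name and olm namespace below are assumptions about a default upstream install, so read the real ones from the APIService spec first:
# Find the Service the APIService routes to (namespace, name, port)
kubectl get apiservice v1.packages.operators.coreos.com -o jsonpath='{.spec.service}'
# Then confirm that Service has endpoints backing port 5443
kubectl get endpoints packageserver-service -n olm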
I didn't have any issues with ports like @romulus-ai. Instead, this error occurred because everything in the
I believe the permanent fix for this is to separate the
For us, the certificate for the package server had expired. The solution was to delete the secret, delete the relevant pods, and wait until the secret and pods are recreated. Source: https://access.redhat.com/solutions/6999798
Commands:
# OpenShift environment (the equivalent kubectl / Kubernetes commands are compatible)
# Delete the certs
oc delete secret catalog-operator-serving-cert olm-operator-serving-cert packageserver-service-cert -n openshift-operator-lifecycle-manager
# Delete the relevant pods
oc delete pod -l 'app in (catalog-operator, olm-operator, packageserver, package-server-manager)' -n openshift-operator-lifecycle-manager
# Wait and keep checking
oc get pods -n openshift-operator-lifecycle-manager
# Repeat 🔁 until all pods are `Running` or `Completed`
# Delete the old API once all pods are healthy
oc delete apiservice v1.packages.operators.coreos.com
# Verify certificate
oc get apiservice v1.packages.operators.coreos.com -o jsonpath='{.spec.caBundle}' | base64 -d | openssl x509 -noout -text
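For upstream (non-OpenShift) OLM installs, a rough kubectl equivalent of the steps above; the olm namespace and the exact secret and label names are assumptions about a default install, so verify them first:
# Verify the actual secret name in your install
kubectl get secrets -n olm | grep cert
# Delete the (assumed) expired serving cert and restart packageserver
kubectl delete secret packageserver-service-cert -n olm
kubectl delete pod -l app=packageserver -n olm
# Wait for the pods to come back, then recheck the APIService
kubectl get pods -n olm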
Just a friendly reminder: make sure that your packageserver deployment is running.
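For example (the deployment name and olm namespace assume a default upstream install):
# Check the deployment and wait for it to become ready
kubectl get deployment packageserver -n olm
kubectl rollout status deployment/packageserver -n olm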
We are unable to delete
Bug Report
What did you do?
What did you expect to see?
clusterserviceversion.operators.coreos.com/packageserver successfully installed and packages.operators.coreos.com/v1 working
What did you see instead? Under which circumstances?
From the API server logs (repeating):
Environment
operator-lifecycle-manager version: 0.17.0
Kubernetes cluster kind: vanilla Kubernetes deployed on bare metal via Ansible, using hyperkube on Flatcar Linux
Possible Solution
N/A; tried the 0.16.1 install.sh, same result.
Additional context