[Failing Test] ci-kubernetes-e2e-autoscaling-vpa-full UnexpectedServerResponse 404 page not found #7946

Closed
pacoxu opened this issue Mar 19, 2025 · 12 comments

Comments

@pacoxu (Member) commented Mar 19, 2025

Failing job: ci-kubernetes-e2e-autoscaling-vpa-full

Testgrid: https://testgrid.k8s.io/sig-autoscaling-vpa#autoscaling-vpa-full

Failing started: 03-18 / 03-19

Failing test: Pods under VPA with default recommender explicitly configured

Failing message:

[sig-autoscaling] [VPA] [full-vpa] [v1] Pods under VPA with default recommender explicitly configured [BeforeEach] have cpu requests growing with usage
  [BeforeEach] /home/prow/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1/full_vpa.go:148
  [It] /home/prow/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1/full_vpa.go:187
  [FAILED] unexpected error creating VPA
  Unexpected error:
      <*errors.StatusError | 0xc0009df040>: 
      the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
      {
          ErrStatus: 
              code: 404
              details:
                causes:
                - message: 404 page not found
                  reason: UnexpectedServerResponse
                group: autoscaling.k8s.io
                kind: verticalpodautoscalers
              message: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
              metadata: {}
              reason: NotFound
              status: Failure,
      }
  occurred
  In [BeforeEach] at: /home/prow/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1/common.go:309 @ 03/19/25 07:15:31.173
------------------------------
@voelzmo (Contributor) commented Mar 19, 2025

Hey @pacoxu, thanks for the ping! Looking at the last successful jobs, I'm a bit confused about what might have broken this test.
I see successful runs for the same commit (9937f8f30896ce838d78b24ab0614c9b0152b113), e.g. https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-autoscaling-vpa-full/1902070177697107968 and https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-autoscaling-vpa-full/1902160775254904832

Did something else change or is this a test setup issue?

@pacoxu (Member, Author) commented Mar 19, 2025

The failure happens while the test creates the VPA object:

// e2e/v1/common.go: create the VPA object in the test namespace; the spec fails if the API call errors
_, err := vpaClientSet.AutoscalingV1().VerticalPodAutoscalers(f.Namespace.Name).Create(context.TODO(), vpa, metav1.CreateOptions{})
gomega.Expect(err).NotTo(gomega.HaveOccurred(), "unexpected error creating VPA")

The create request got a 404 response, which is strange.
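For what it's worth, a 404 on a create call like this usually means the autoscaling.k8s.io API group isn't being served at all, i.e. the VPA CRDs never made it onto the test cluster. A quick way to confirm that on the cluster (just a generic sketch, not part of the e2e harness) would be:

# Is the VPA CRD installed at all?
kubectl get crd verticalpodautoscalers.autoscaling.k8s.io

# Is the autoscaling.k8s.io API group actually being served?
kubectl api-resources --api-group=autoscaling.k8s.io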

@pacoxu (Member, Author) commented Mar 19, 2025

Just found that https://testgrid.k8s.io/sig-autoscaling-vpa#autoscaling-vpa-actuation failed for the same reason.

@voelzmo (Contributor) commented Mar 19, 2025

I'm just wondering why this suddenly started failing so regularly when the codebase and tests haven't changed. Do we have an idea why a modification to the tests would be necessary? What else changed that made this fail?

@adrianmoisey (Member)

/area vertical-pod-autoscaler

@raywainman (Contributor) commented Mar 19, 2025

I think this is the issue:

failed to mount blob k8s-infra-e2e-boskos-154/vpa-recommender-amd64@sha256:d858cbc252ade14879807ff8dbc3043a26bbdb92087da98cda831ee040b172b3 to gcr.io/k8s-infra-e2e-boskos-154/vpa-recommender:latest: Head "https://gcr.io/v2/k8s-infra-e2e-boskos-154/vpa-recommender/blobs/sha256:d858cbc252ade14879807ff8dbc3043a26bbdb92087da98cda831ee040b172b3": unknown: Container Registry is deprecated and shutting down, please use the auto migration tool to migrate to Artifact Registry (gcloud artifacts docker upgrade migrate --projects='k8s-infra-e2e-boskos-154'). For more details see: https://cloud.google.com/artifact-registry/docs/transition/auto-migrate-gcr-ar

GCR stopped accepting writes yesterday, I believe. I thought we had migrated all of this to Artifact Registry, but perhaps something in our tests was left behind?

@raywainman (Contributor)

I think it is coming from here:

export REGISTRY=gcr.io/`gcloud config get-value core/project`
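If that's the culprit, the fix would presumably be to point REGISTRY at an Artifact Registry host instead of gcr.io. Roughly something like the following, where the region and repository name are placeholders rather than the actual job config:

# Hypothetical: push the e2e images to Artifact Registry instead of the deprecated GCR
# (region and repository name below are assumptions, not the real job setup)
export REGISTRY=us-central1-docker.pkg.dev/`gcloud config get-value core/project`/images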

@raywainman (Contributor)

Posted a question in #sig-k8s-infra: https://kubernetes.slack.com/archives/CCK68P2Q2/p1742396477837669

@ameukam (Member) commented Mar 19, 2025

xRef: kubernetes/k8s.io#1343

@raywainman (Contributor)

Hmm, I don't understand why this is blocking our PR submits; the tests are listed as optional: https://github.com/uroy-personal/test-infra/blob/master/config/jobs/kubernetes/sig-autoscaling/sig-autoscaling-presubmits.yaml#L165.

@upodroid (Member)

This has been fixed now. Please re-enable this job.

@raywainman (Contributor)

The job has been reverted; tests are running now.
