Sort actions to ensure proper call order #95

Merged

prydie merged 4 commits into master from jah/fixes-93 on Nov 2, 2017

Conversation

jhorwit2 (Member) commented Nov 1, 2017

Fixes #92
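
The idea is to sort the pending load balancer actions before applying them so the calls against the OCI load balancer API happen in a predictable order. A minimal sketch of what that could look like, assuming illustrative Action, ActionType, and sortActions names (not the exact types in pkg/oci) and an assumed delete-before-create-before-update priority:

package main

import (
	"fmt"
	"sort"
)

// ActionType is an illustrative stand-in for the kinds of actions the
// controller applies to backend sets and listeners.
type ActionType int

const (
	Delete ActionType = iota // remove stale backend sets/listeners first
	Create                   // then create new ones
	Update                   // then apply updates
)

// Action pairs an action kind with the backend set or listener it targets,
// e.g. "TCP-80".
type Action struct {
	Type ActionType
	Name string
}

// sortActions orders actions deterministically: by action kind first, then by
// target name, so the load balancer API is always called in a fixed order.
func sortActions(actions []Action) {
	sort.SliceStable(actions, func(i, j int) bool {
		if actions[i].Type != actions[j].Type {
			return actions[i].Type < actions[j].Type
		}
		return actions[i].Name < actions[j].Name
	})
}

func main() {
	actions := []Action{
		{Type: Create, Name: "TCP-81"},
		{Type: Delete, Name: "TCP-80"},
	}
	sortActions(actions)
	fmt.Println(actions) // the delete of TCP-80 is ordered before the create of TCP-81
}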

prydie (Contributor) left a comment

Panics when changing the frontend port (e.g. examples/echo-svc.yaml, changing port: 80 -> port: 81):

I1101 17:31:23.029132       1 service_controller.go:286] Ensuring LB for service default/echoheaders
I1101 17:31:23.029377       1 load_balancer.go:167] Ensure load balancer '7af17ecf-bf2a-11e7-a411-00001701ce75' called for 'echoheaders' with 2 nodes.
I1101 17:31:23.070765       1 load_balancer.go:173] Attempting to create a load balancer with name '7af17ecf-bf2a-11e7-a411-00001701ce75'
I1101 17:31:23.263523       1 client.go:367] Polling WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma"...
I1101 17:31:23.542196       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'ACCEPTED'
I1101 17:31:24.001978       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:25.874523       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'ACCEPTED'
I1101 17:31:26.005894       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:28.009857       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:28.785026       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'IN_PROGRESS'
I1101 17:31:30.013554       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:32.017013       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:32.286854       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'IN_PROGRESS'
I1101 17:31:33.956120       1 reflector.go:286] github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/client-go/informers/factory.go:73: forcing resync
I1101 17:31:34.020831       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:36.024408       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:36.558754       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'IN_PROGRESS'
I1101 17:31:38.028393       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:40.032136       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:41.890964       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'IN_PROGRESS'
I1101 17:31:42.035654       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:44.039389       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:46.043304       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:48.047338       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:48.592343       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'IN_PROGRESS'
I1101 17:31:50.050706       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:52.054320       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:54.059306       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:56.063279       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:57.162603       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaauqsooqt32nviwlxagmjsiphdmkwpjz4yr3waud4c2pvugcbgy3ma" state: 'SUCCEEDED'
I1101 17:31:57.497157       1 load_balancer.go:180] Created load balancer '7af17ecf-bf2a-11e7-a411-00001701ce75' with OCID 'ocid1.loadbalancer.oc1.phx.aaaaaaaa6a75fykg2jw5xdfvvm4rnwfjxh2rh6zntvncenobsbjgowstlega'
I1101 17:31:57.497187       1 load_balancer_spec.go:156] No SSL enabled ports found for service "echoheaders"
I1101 17:31:57.616618       1 load_balancer.go:274] Applying `%!s(func() oci.ActionType=0x10a5ea0)` action on backend set `TCP-80` for lb `ocid1.loadbalancer.oc1.phx.aaaaaaaa6a75fykg2jw5xdfvvm4rnwfjxh2rh6zntvncenobsbjgowstlega`
I1101 17:31:57.795533       1 load_balancer_security_lists.go:137] No changes for lb subnet security list `ocid1.securitylist.oc1.phx.aaaaaaaak44nwlvzh3yon7j3pyoxjd45pef7jfcmv24djubpj3n6p4x7pywa`
I1101 17:31:58.067719       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:31:58.281759       1 client.go:367] Polling WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaafws227j3h67lihipxqssvcilotqu3rhnou6kww6yrbjoryms5c7a"...
I1101 17:31:58.856012       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaafws227j3h67lihipxqssvcilotqu3rhnou6kww6yrbjoryms5c7a" state: 'ACCEPTED'
I1101 17:32:00.072831       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:01.439276       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaafws227j3h67lihipxqssvcilotqu3rhnou6kww6yrbjoryms5c7a" state: 'ACCEPTED'
I1101 17:32:02.076432       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:03.956316       1 reflector.go:286] github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/client-go/informers/factory.go:73: forcing resync
I1101 17:32:04.079870       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:04.482778       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaafws227j3h67lihipxqssvcilotqu3rhnou6kww6yrbjoryms5c7a" state: 'ACCEPTED'
I1101 17:32:06.083558       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:08.086901       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:08.384209       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaafws227j3h67lihipxqssvcilotqu3rhnou6kww6yrbjoryms5c7a" state: 'IN_PROGRESS'
I1101 17:32:10.090559       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:12.095907       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:12.936091       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaafws227j3h67lihipxqssvcilotqu3rhnou6kww6yrbjoryms5c7a" state: 'IN_PROGRESS'
I1101 17:32:14.099596       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:16.103669       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:18.107164       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:18.730115       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaafws227j3h67lihipxqssvcilotqu3rhnou6kww6yrbjoryms5c7a" state: 'SUCCEEDED'
I1101 17:32:18.730158       1 load_balancer.go:343] Applying `%!s(func() oci.ActionType=0x10a5ec0)` action on listener `TCP-80` for lb `ocid1.loadbalancer.oc1.phx.aaaaaaaa6a75fykg2jw5xdfvvm4rnwfjxh2rh6zntvncenobsbjgowstlega`
I1101 17:32:18.730453       1 load_balancer_security_lists.go:137] No changes for lb subnet security list `ocid1.securitylist.oc1.phx.aaaaaaaak44nwlvzh3yon7j3pyoxjd45pef7jfcmv24djubpj3n6p4x7pywa`
I1101 17:32:18.730720       1 load_balancer_security_lists.go:137] No changes for lb subnet security list `ocid1.securitylist.oc1.phx.aaaaaaaak44nwlvzh3yon7j3pyoxjd45pef7jfcmv24djubpj3n6p4x7pywa`
I1101 17:32:18.730997       1 load_balancer_security_lists.go:107] No changes for node subnet security list `ocid1.securitylist.oc1.phx.aaaaaaaak44nwlvzh3yon7j3pyoxjd45pef7jfcmv24djubpj3n6p4x7pywa`
I1101 17:32:19.105654       1 client.go:367] Polling WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaamqktn5ezgdjbsrmtr2i4i4r4liblxtcdpuamx42dgzcygmk6s3cq"...
I1101 17:32:19.492291       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaamqktn5ezgdjbsrmtr2i4i4r4liblxtcdpuamx42dgzcygmk6s3cq" state: 'ACCEPTED'
I1101 17:32:20.110950       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:22.058073       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaamqktn5ezgdjbsrmtr2i4i4r4liblxtcdpuamx42dgzcygmk6s3cq" state: 'ACCEPTED'
I1101 17:32:22.115377       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:24.118825       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:25.124074       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaamqktn5ezgdjbsrmtr2i4i4r4liblxtcdpuamx42dgzcygmk6s3cq" state: 'IN_PROGRESS'
I1101 17:32:26.122956       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:28.126418       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:28.704666       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaamqktn5ezgdjbsrmtr2i4i4r4liblxtcdpuamx42dgzcygmk6s3cq" state: 'IN_PROGRESS'
I1101 17:32:30.130178       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:32.134280       1 leaderelection.go:199] successfully renewed lease kube-system/cloud-controller-manager
I1101 17:32:33.071922       1 client.go:377] WorkRequest "ocid1.loadbalancerworkrequest.oc1.phx.aaaaaaaamqktn5ezgdjbsrmtr2i4i4r4liblxtcdpuamx42dgzcygmk6s3cq" state: 'SUCCEEDED'
I1101 17:32:33.071957       1 load_balancer.go:217] Successfully ensured load balancer "7af17ecf-bf2a-11e7-a411-00001701ce75"
I1101 17:32:33.075815       1 service_controller.go:723] Finished syncing service "default/echoheaders" (1m10.046698754s)
<snip>
I1101 17:35:23.326726       1 service_controller.go:286] Ensuring LB for service default/echoheaders
I1101 17:35:23.327281       1 load_balancer.go:167] Ensure load balancer '7af17ecf-bf2a-11e7-a411-00001701ce75' called for 'echoheaders' with 2 nodes.
I1101 17:35:23.353133       1 load_balancer_spec.go:156] No SSL enabled ports found for service "echoheaders"
I1101 17:35:23.353196       1 service_controller.go:723] Finished syncing service "default/echoheaders" (26.504199ms)
E1101 17:35:23.353359       1 runtime.go:66] Observed a panic: "index out of range" (runtime error: index out of range)
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:509
/usr/local/go/src/runtime/panic.go:491
/usr/local/go/src/runtime/panic.go:28
/go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer_security_lists.go:205
/go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer.go:341
/go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer.go:257
/go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer.go:207
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:354
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:291
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:236
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:744
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:200
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:204
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:182
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:2337
panic: runtime error: index out of range [recovered]
        panic: runtime error: index out of range
goroutine 97 [running]:
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x111
panic(0x1201ca0, 0x1d29360)
        /usr/local/go/src/runtime/panic.go:491 +0x283
github.com/oracle/oci-cloud-controller-manager/pkg/oci.getBackendPort(...)
        /go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer_security_lists.go:205
github.com/oracle/oci-cloud-controller-manager/pkg/oci.(*CloudProvider).updateListener(0xc420140140, 0xc420b40000, 0x57, 0xc420afb440, 0x24, 0xc4205f04f0, 0x7, 0xc42074c1e0, 0xc4202b6fc0, 0x2, ...)
        /go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer.go:341 +0x875
github.com/oracle/oci-cloud-controller-manager/pkg/oci.(*CloudProvider).updateLoadBalancer(0xc420140140, 0xc420b0a8c0, 0xc420afb440, 0x24, 0xc4205f04f0, 0x7, 0xc42074c1e0, 0xc4202b6fc0, 0x2, 0x2, ...)
        /go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer.go:257 +0x602
github.com/oracle/oci-cloud-controller-manager/pkg/oci.(*CloudProvider).EnsureLoadBalancer(0xc420140140, 0x138b330, 0xa, 0xc42074c1e0, 0xc4202b6fc0, 0x2, 0x2, 0x14, 0x13969f4, 0x16)
        /go/src/github.com/oracle/oci-cloud-controller-manager/pkg/oci/load_balancer.go:207 +0x77d
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).ensureLoadBalancer(0xc42017e380, 0xc42074c1e0, 0xc42074c1e0, 0x13866f6, 0x6)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:354 +0xcc
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).createLoadBalancerIfNeeded(0xc42017e380, 0xc4203a4f20, 0x13, 0xc42074c1e0, 0xc4209bdc01, 0x103a662, 0xc420245710)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:291 +0x20e
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).processServiceUpdate(0xc42017e380, 0xc42020c540, 0xc42074c1e0, 0xc4203a4f20, 0x13, 0x0, 0x0, 0x0)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:236 +0xe5
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).syncService(0xc42017e380, 0xc4203a4f20, 0x13, 0x0, 0x0)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:744 +0x3aa
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).worker.func1(0xc42017e380)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:200 +0xd9
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).worker(0xc42017e380)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:204 +0x2b
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).(github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.worker)-fm()
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:182 +0x2a
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4201ae520)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x5e
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4201ae520, 0x3b9aca00, 0x0, 0x1, 0xc42028a0c0)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4201ae520, 0x3b9aca00, 0xc42028a0c0)
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service.(*ServiceController).Run
        /go/src/github.com/oracle/oci-cloud-controller-manager/vendor/k8s.io/kubernetes/pkg/controller/service/service_controller.go:182 +0x208
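
The stack trace points at getBackendPort (load_balancer_security_lists.go:205), reached from updateListener, so the index-out-of-range most likely comes from indexing a backend slice that is empty when the listener is updated before its backend set has been (re)created. The `%!s(func() oci.ActionType=...)` fragments earlier in the log are a separate, minor issue: a func value (likely an uncalled method) was passed to the %s verb. A sketch of the kind of bounds check that avoids the panic, assuming a simplified Backend type and signature rather than the actual code:

package oci

import "fmt"

// Backend is a simplified stand-in for a load balancer backend entry.
type Backend struct {
	Port int
}

// getBackendPort returns the port of the first backend, returning an error
// instead of panicking when the slice is empty (for example when a listener
// update runs before its backend set has any backends).
func getBackendPort(backends []Backend) (int, error) {
	if len(backends) == 0 {
		return 0, fmt.Errorf("no backends: cannot determine backend port")
	}
	return backends[0].Port, nil
}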

 if err != nil {
-	return err
+	return fmt.Errorf("error updating BackendSet: %v", err)
prydie (Contributor) commented on the diff above:

s/BackendSet/Listener/

prydie added this to the 0.1.1 milestone on Nov 1, 2017
jhorwit2 (Member, Author) commented Nov 2, 2017

@prydie I pushed a fix for the panic / delete issue and added changing the service port and node port to the integration test (which really needs to be improved soon). Both can now be changed without issues.
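
A rough sketch of what exercising the port change in an integration test could look like with client-go (pre-context API, matching the client-go vendored at the time); the helper name and flow are assumptions, not the repository's actual test harness:

package integration

import (
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateServicePort changes the first port of a Service so the test can then
// assert that the cloud controller reconciles the load balancer listener
// (e.g. TCP-80 -> TCP-81) without panicking.
func updateServicePort(t *testing.T, client kubernetes.Interface, namespace, name string, newPort int32) {
	svc, err := client.CoreV1().Services(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		t.Fatalf("get service %s/%s: %v", namespace, name, err)
	}
	svc.Spec.Ports[0].Port = newPort
	if _, err := client.CoreV1().Services(namespace).Update(svc); err != nil {
		t.Fatalf("update service %s/%s: %v", namespace, name, err)
	}
}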

prydie changed the title from "[WIP] Sort actions to ensure proper call order" to "Sort actions to ensure proper call order" on Nov 2, 2017
prydie merged commit b9ba323 into master on Nov 2, 2017
prydie deleted the jah/fixes-93 branch on Nov 2, 2017 at 10:48