This repository was archived by the owner on Apr 22, 2020. It is now read-only.

vSphere Cloud Provider: Destroying cluster is flaky #285

Closed
abrarshivani opened this issue Dec 2, 2016 · 3 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@abrarshivani
Contributor

make destroy doesn't always succeed for a Kubernetes cluster launched with the vSphere Cloud Provider. This is due to a Terraform issue: the vSphere provider panics while refreshing virtual machine state during the destroy.
Output after running make destroy:

panic: runtime error: index out of range
2016/11/22 00:15:00 [DEBUG] plugin: terraform:
2016/11/22 00:15:00 [DEBUG] plugin: terraform: goroutine 8 [running]:
2016/11/22 00:15:00 [DEBUG] plugin: terraform: panic(0x27f4e60, 0xc42000e0f0)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/go/src/runtime/panic.go:500 +0x1a1
2016/11/22 00:15:00 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/builtin/providers/vsphere.resourceVSphereVirtualMachineRead(0xc42033fe00, 0x2d2bb40, 0xc4203e3030, 0x1, 0x17)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/vsphere/resource_vsphere_virtual_machine.go:1060 +0x3458
2016/11/22 00:15:00 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/helper/schema.(*Resource).Refresh(0xc4203de0c0, 0xc42021b300, 0x2d2bb40, 0xc4203e3030, 0xc42030b280, 0x1, 0x18)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/gopath/src/github.com/hashicorp/terraform/helper/schema/resource.go:259 +0x131
2016/11/22 00:15:00 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/helper/schema.(*Provider).Refresh(0xc4203da570, 0xc42021b2c0, 0xc42021b300, 0x0, 0x18, 0x18)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/gopath/src/github.com/hashicorp/terraform/helper/schema/provider.go:201 +0x91
2016/11/22 00:15:00 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/plugin.(*ResourceProviderServer).Refresh(0xc42030fbc0, 0xc4203d9100, 0xc4203d9730, 0x0, 0x0)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/gopath/src/github.com/hashicorp/terraform/plugin/resource_provider.go:482 +0x4e
2016/11/22 00:15:00 [DEBUG] plugin: terraform: reflect.Value.call(0xc42033f440, 0xc4203c6150, 0x13, 0x2da4800, 0x4, 0xc42054deb0, 0x3, 0x3, 0xc42034e6b0, 0xc4201bd920, ...)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/go/src/reflect/value.go:434 +0x5c8
2016/11/22 00:15:00 [DEBUG] plugin: terraform: reflect.Value.Call(0xc42033f440, 0xc4203c6150, 0x13, 0xc42054deb0, 0x3, 0x3, 0xc42034e6a8, 0xc4201bd800, 0xda2681)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/go/src/reflect/value.go:302 +0xa4
2016/11/22 00:15:00 [DEBUG] plugin: terraform: net/rpc.(*service).call(0xc4203ad800, 0xc4203ad7c0, 0xc4203e2120, 0xc4203c8b00, 0xc4203ea3e0, 0x255f980, 0xc4203d9100, 0x16, 0x255f9c0, 0xc4203d9730, ...)
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/go/src/net/rpc/server.go:383 +0x148
2016/11/22 00:15:00 [DEBUG] plugin: terraform: created by net/rpc.(*Server).ServeCodec
2016/11/22 00:15:00 [DEBUG] plugin: terraform: 	/opt/go/src/net/rpc/server.go:477 +0x421
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalRefresh, err: vsphere_virtual_machine.kubevm2: unexpected EOF
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalSequence, err: vsphere_virtual_machine.kubevm2: unexpected EOF
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalOpFilter, err: vsphere_virtual_machine.kubevm2: unexpected EOF
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalSequence, err: vsphere_virtual_machine.kubevm2: unexpected EOF
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalRefresh, err: vsphere_virtual_machine.kubevm1: unexpected EOF
2016/11/22 00:15:00 [TRACE] [walkRefresh] Exiting eval tree: vsphere_virtual_machine.kubevm2
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalSequence, err: vsphere_virtual_machine.kubevm1: unexpected EOF
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalOpFilter, err: vsphere_virtual_machine.kubevm1: unexpected EOF
2016/11/22 00:15:00 [ERROR] root: eval: *terraform.EvalSequence, err: vsphere_virtual_machine.kubevm1: unexpected EOF
2016/11/22 00:15:00 [TRACE] [walkRefresh] Exiting eval tree: vsphere_virtual_machine.kubevm1
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_node, got dep: vsphere_virtual_machine.kubevm2
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_node, got dep: tls_private_key.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_node, got dep: provider.template
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_node, got dep: tls_self_signed_cert.k8s-test5-root
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_node, got dep: vsphere_virtual_machine.kubevm1
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: vsphere_virtual_machine.kubevm1
2016/11/22 00:15:00 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.kubevm1
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_master, got dep: vsphere_virtual_machine.kubevm1
2016/11/22 00:15:00 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_virtual_machine.kubevm2
2016/11/22 00:15:00 [DEBUG] vertex provider.vsphere (close), got dep: vsphere_folder.cluster_folder
2016/11/22 00:15:00 [DEBUG] vertex data.tls_cert_request.k8s-test5-master, got dep: vsphere_virtual_machine.kubevm1
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: vsphere_virtual_machine.kubevm2
2016/11/22 00:15:00 [DEBUG] vertex data.tls_cert_request.k8s-test5-master, got dep: tls_private_key.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex null_resource.node2, got dep: vsphere_virtual_machine.kubevm1
2016/11/22 00:15:00 [DEBUG] vertex null_resource.node2, got dep: vsphere_virtual_machine.kubevm2
2016/11/22 00:15:00 [DEBUG] vertex tls_locally_signed_cert.k8s-test5-master, got dep: data.tls_cert_request.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex tls_locally_signed_cert.k8s-test5-master, got dep: tls_self_signed_cert.k8s-test5-root
2016/11/22 00:15:00 [DEBUG] vertex tls_locally_signed_cert.k8s-test5-master, got dep: provider.tls
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: tls_locally_signed_cert.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: data.tls_cert_request.k8s-test5-node
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: tls_locally_signed_cert.k8s-test5-admin
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: tls_private_key.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: data.tls_cert_request.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: tls_self_signed_cert.k8s-test5-root
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: tls_locally_signed_cert.k8s-test5-node
2016/11/22 00:15:00 [DEBUG] vertex provider.tls (close), got dep: tls_private_key.k8s-test5-node
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_master, got dep: tls_locally_signed_cert.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_master, got dep: tls_locally_signed_cert.k8s-test5-node
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_master, got dep: vsphere_virtual_machine.kubevm2
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_master, got dep: provider.template
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_master, got dep: tls_self_signed_cert.k8s-test5-root
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: data.template_file.configure_master
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: provider.null
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: provisioner.remote-exec
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: provisioner.local-exec
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: data.template_file.cloudprovider
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: tls_locally_signed_cert.k8s-test5-admin
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: tls_private_key.k8s-test5-admin
2016/11/22 00:15:00 [DEBUG] vertex null_resource.master, got dep: tls_self_signed_cert.k8s-test5-root
2016/11/22 00:15:00 [DEBUG] vertex provisioner.remote-exec (close), got dep: null_resource.master
2016/11/22 00:15:00 [DEBUG] vertex data.template_file.configure_node, got dep: tls_locally_signed_cert.k8s-test5-master
2016/11/22 00:15:00 [DEBUG] vertex provider.template (close), got dep: data.template_file.configure_node
2016/11/22 00:15:00 [DEBUG] vertex provider.template (close), got dep: provider.template
2016/11/22 00:15:00 [DEBUG] vertex provider.template (close), got dep: data.template_file.cloudprovider
2016/11/22 00:15:00 [DEBUG] vertex provider.template (close), got dep: data.template_file.configure_master
2016/11/22 00:15:00 [DEBUG] vertex provider.null (close), got dep: null_resource.master
2016/11/22 00:15:00 [DEBUG] vertex provisioner.local-exec (close), got dep: null_resource.master
2016/11/22 00:15:00 [DEBUG] vertex null_resource.node2, got dep: data.template_file.configure_node
2016/11/22 00:15:00 [DEBUG] vertex null_resource.node2, got dep: data.template_file.cloudprovider
2016/11/22 00:15:00 [DEBUG] vertex null_resource.node2, got dep: provider.null
2016/11/22 00:15:00 [DEBUG] vertex provisioner.remote-exec (close), got dep: null_resource.node2
2016/11/22 00:15:00 [DEBUG] vertex provider.null (close), got dep: null_resource.node2
2016/11/22 00:15:00 [DEBUG] vertex root, got dep: provider.null (close)
2016/11/22 00:15:00 [DEBUG] vertex root, got dep: provider.template (close)
2016/11/22 00:15:00 [DEBUG] vertex root, got dep: provider.vsphere (close)
2016/11/22 00:15:00 [DEBUG] vertex root, got dep: provider.tls (close)
2016/11/22 00:15:00 [DEBUG] vertex root, got dep: provisioner.local-exec (close)
2016/11/22 00:15:00 [DEBUG] vertex root, got dep: provisioner.remote-exec (close)
2016/11/22 00:15:00 [DEBUG] plugin: waiting for all plugin processes to complete...
2016/11/22 00:15:00 [DEBUG] plugin: /bin/terraform: plugin process exited
2016/11/22 00:15:00 [DEBUG] plugin: /bin/terraform: plugin process exited
2016/11/22 00:15:00 [DEBUG] plugin: /bin/terraform: plugin process exited
2016/11/22 00:15:00 [DEBUG] plugin: /bin/terraform: plugin process exited
2016/11/22 00:15:00 [DEBUG] plugin: /bin/terraform: plugin process exited
2016/11/22 00:15:00 [DEBUG] plugin: /bin/terraform: plugin process exited



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Makefile:53: recipe for target 'do' failed
make[1]: *** [do] Error 1
make[1]: Leaving directory '/opt/kubernetes-anywhere'
Makefile:41: recipe for target 'destroy-cluster' failed
make: *** [destroy-cluster] Error 2

@kerneltime

Workaround

Terraform fails to destroy the VMs and to remove the state for the existing cluster.
* Workaround:
  In the vSphere Client:
  1. Stop all VMs that were set up by kubernetes-anywhere.
  2. Right-click each VM and select `Delete from Disk`.
  3. Run `make clean`.
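The manual steps above could also be scripted with `govc` (the vSphere CLI from the govmomi project). This is a hypothetical sketch, not part of the original workaround: it assumes `GOVC_URL`, `GOVC_USERNAME`, and `GOVC_PASSWORD` are already exported, and the VM names `kubevm1`/`kubevm2` are placeholders taken from the log above — adjust them to your cluster. It defaults to printing the commands rather than running them.

```shell
#!/bin/sh
# Hypothetical sketch of the manual workaround using govc.
# DRY_RUN=1 (the default) prints each command instead of executing it;
# set DRY_RUN=0 to actually power off and delete the VMs.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# VM names are placeholders; list the VMs kubernetes-anywhere created.
for vm in kubevm1 kubevm2; do
    run govc vm.power -off -force "$vm"   # 1. stop the VM
    run govc vm.destroy "$vm"             # 2. equivalent of "Delete from Disk"
done
run make clean                            # 3. clear the local Terraform state
```

Running it with `DRY_RUN=0` performs the deletion; either way, `make clean` is still what removes the stale local state so a later `make` starts fresh.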

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 19, 2017
@errordeveloper
Contributor

/close
