docker driver: add support for btrfs #7923

Closed
@solarnz

Description

Steps to reproduce the issue:

  1. Install minikube and kubeadm
  2. Run `minikube start --driver=docker --v=5 --alsologtostderr`

I'm at a loss as to how to proceed further; I'm not sure whether this is caused by my system configuration or a bug in minikube.
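For anyone trying to reproduce this, the relevant host detail (visible further down in the log, where minikube runs `df --output=fstype /` and gets `btrfs`) is that the root filesystem is btrfs rather than the overlay2-backed setup the preloaded tarball name suggests. A quick way to check whether your host is in the same situation (this is just a diagnostic sketch, not part of minikube itself):

```shell
# Print the filesystem type backing / — on the affected machine this
# reports "btrfs"; on most default installs it would be ext4 or xfs.
df --output=fstype / | tail -n 1

# Also check which storage driver the host Docker daemon is using,
# since the docker driver inherits it inside the kic container.
docker info --format '{{.Driver}}'
```

If the first command prints `btrfs`, you should hit the same code path as this report.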

Full output of failed command:

% minikube start --driver=docker  --v=5 --alsologtostderr
I0428 17:10:22.855774   33446 start.go:100] hostinfo: {"hostname":"chris-trotman-laptop","uptime":13754,"bootTime":1588044068,"procs":307,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"","kernelVersion":"5.6.7-arch1-1","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"a4ab05cf-cb22-4c07-88c2-9bf79092f646"}
I0428 17:10:22.856368   33446 start.go:110] virtualization: kvm host
😄  minikube v1.10.0-beta.1 on Arch 
I0428 17:10:22.856514   33446 driver.go:255] Setting default libvirt URI to qemu:///system
I0428 17:10:22.856593   33446 notify.go:125] Checking for updates...
✨  Using the docker driver based on user configuration
I0428 17:10:22.909172   33446 start.go:207] selected driver: docker
I0428 17:10:22.909191   33446 start.go:580] validating driver "docker" against <nil>
I0428 17:10:22.909211   33446 start.go:586] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0428 17:10:22.909233   33446 start.go:899] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0428 17:10:22.909287   33446 start_flags.go:215] no existing cluster config was found, will generate one from the flags 
I0428 17:10:22.995065   33446 start_flags.go:229] Using suggested 3900MB memory alloc based on sys=15898MB, container=15898MB
I0428 17:10:22.995202   33446 start_flags.go:551] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node minikube in cluster minikube
I0428 17:10:22.995308   33446 cache.go:103] Beginning downloading kic artifacts
I0428 17:10:23.122024   33446 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.9@sha256:82a826cc03c3e59ead5969b8020ca138de98f366c1907293df91fc57205dbb53 in local docker daemon, skipping pull
I0428 17:10:23.122081   33446 preload.go:82] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0428 17:10:23.122130   33446 preload.go:97] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0428 17:10:23.122140   33446 cache.go:47] Caching tarball of preloaded images
I0428 17:10:23.122163   33446 preload.go:123] Found /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0428 17:10:23.122173   33446 cache.go:50] Finished verifying existence of preloaded tar for  v1.18.0 on docker
I0428 17:10:23.122395   33446 profile.go:149] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0428 17:10:23.122477   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/config.json: {Name:mk450fd4eda337c7ddd64ef0cf55f5d70f3fb5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:23.122754   33446 cache.go:120] Successfully downloaded all kic artifacts
I0428 17:10:23.122795   33446 start.go:221] acquiring machines lock for minikube: {Name:mkec809913d626154fe8c3badcd878ae0c8a6125 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0428 17:10:23.122847   33446 start.go:225] acquired machines lock for "minikube" in 38.051µs
I0428 17:10:23.122882   33446 start.go:81] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0428 17:10:23.122920   33446 start.go:102] createHost starting for "" (driver="docker")
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
I0428 17:10:23.123177   33446 start.go:138] libmachine.API.Create for "minikube" (driver="docker")
I0428 17:10:23.123201   33446 client.go:161] LocalClient.Create starting
I0428 17:10:23.123254   33446 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/ca.pem
I0428 17:10:23.123287   33446 main.go:110] libmachine: Decoding PEM data...
I0428 17:10:23.123312   33446 main.go:110] libmachine: Parsing certificate...
I0428 17:10:23.123460   33446 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/cert.pem
I0428 17:10:23.123493   33446 main.go:110] libmachine: Decoding PEM data...
I0428 17:10:23.123547   33446 main.go:110] libmachine: Parsing certificate...
I0428 17:10:23.123842   33446 oci.go:268] executing with [docker ps -a --format {{.Names}}] timeout: 30s
I0428 17:10:23.180487   33446 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0428 17:10:23.266914   33446 oci.go:103] Successfully created a docker volume minikube
I0428 17:10:23.267183   33446 preload.go:82] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0428 17:10:23.267306   33446 preload.go:97] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0428 17:10:23.267338   33446 kic.go:133] Starting extracting preloaded images to volume ...
I0428 17:10:23.267464   33446 volumes.go:85] executing: [docker run --rm --entrypoint /usr/bin/tar -v /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.9@sha256:82a826cc03c3e59ead5969b8020ca138de98f366c1907293df91fc57205dbb53 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0428 17:10:27.651134   33446 oci.go:268] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 19s
I0428 17:10:27.741315   33446 oci.go:178] the created container "minikube" has a running status.
I0428 17:10:27.741379   33446 kic.go:157] Creating ssh key for kic: /home/chris/.minikube/machines/minikube/id_rsa...
I0428 17:10:27.994559   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0428 17:10:27.994623   33446 kic_runner.go:174] docker (temp): /home/chris/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0428 17:10:28.236449   33446 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0428 17:10:32.508265   33446 kic.go:138] duration metric: took 9.240928 seconds to extract preloaded images to volume
I0428 17:10:32.508326   33446 oci.go:268] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0428 17:10:32.558839   33446 machine.go:86] provisioning docker machine ...
I0428 17:10:32.558887   33446 ubuntu.go:166] provisioning hostname "minikube"
I0428 17:10:32.609446   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:32.609830   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:32.609857   33446 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0428 17:10:32.762640   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0428 17:10:32.816327   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:32.816516   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:32.816551   33446 main.go:110] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0428 17:10:32.948688   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0428 17:10:32.948810   33446 ubuntu.go:172] set auth options {CertDir:/home/chris/.minikube CaCertPath:/home/chris/.minikube/certs/ca.pem CaPrivateKeyPath:/home/chris/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/chris/.minikube/machines/server.pem ServerKeyPath:/home/chris/.minikube/machines/server-key.pem ClientKeyPath:/home/chris/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/chris/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/chris/.minikube}
I0428 17:10:32.948948   33446 ubuntu.go:174] setting up certificates
I0428 17:10:32.948995   33446 provision.go:82] configureAuth start
I0428 17:10:33.014245   33446 provision.go:131] copyHostCerts
I0428 17:10:33.014282   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /home/chris/.minikube/ca.pem
I0428 17:10:33.014312   33446 exec_runner.go:91] found /home/chris/.minikube/ca.pem, removing ...
I0428 17:10:33.014435   33446 exec_runner.go:98] cp: /home/chris/.minikube/certs/ca.pem --> /home/chris/.minikube/ca.pem (1034 bytes)
I0428 17:10:33.014515   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/cert.pem -> /home/chris/.minikube/cert.pem
I0428 17:10:33.014542   33446 exec_runner.go:91] found /home/chris/.minikube/cert.pem, removing ...
I0428 17:10:33.014590   33446 exec_runner.go:98] cp: /home/chris/.minikube/certs/cert.pem --> /home/chris/.minikube/cert.pem (1074 bytes)
I0428 17:10:33.014653   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/key.pem -> /home/chris/.minikube/key.pem
I0428 17:10:33.014678   33446 exec_runner.go:91] found /home/chris/.minikube/key.pem, removing ...
I0428 17:10:33.014722   33446 exec_runner.go:98] cp: /home/chris/.minikube/certs/key.pem --> /home/chris/.minikube/key.pem (1679 bytes)
I0428 17:10:33.014801   33446 provision.go:105] generating server cert: /home/chris/.minikube/machines/server.pem ca-key=/home/chris/.minikube/certs/ca.pem private-key=/home/chris/.minikube/certs/ca-key.pem org=chris.minikube san=[10.255.0.3 localhost 127.0.0.1]
I0428 17:10:33.084003   33446 provision.go:159] copyRemoteCerts
I0428 17:10:33.084085   33446 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0428 17:10:33.130999   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:33.244967   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server.pem -> /etc/docker/server.pem
I0428 17:10:33.245140   33446 ssh_runner.go:215] scp /home/chris/.minikube/machines/server.pem --> /etc/docker/server.pem (1115 bytes)
I0428 17:10:33.276875   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0428 17:10:33.276931   33446 ssh_runner.go:215] scp /home/chris/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0428 17:10:33.298085   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0428 17:10:33.298161   33446 ssh_runner.go:215] scp /home/chris/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1034 bytes)
I0428 17:10:33.320892   33446 provision.go:85] duration metric: configureAuth took 371.858734ms
I0428 17:10:33.320922   33446 ubuntu.go:190] setting minikube options for container-runtime
I0428 17:10:33.369324   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:33.369497   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:33.369518   33446 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0428 17:10:33.527605   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: btrfs

I0428 17:10:33.527695   33446 ubuntu.go:71] root file system type: btrfs
I0428 17:10:33.528096   33446 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0428 17:10:33.596778   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:33.596949   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:33.597039   33446 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0428 17:10:33.750658   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0428 17:10:33.810214   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:33.810394   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:33.810430   33446 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0428 17:10:36.519422   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2020-04-28 07:10:33.742393098 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0428 17:10:36.519585   33446 machine.go:89] provisioned docker machine in 3.96071047s
I0428 17:10:36.519601   33446 client.go:164] LocalClient.Create took 13.396378414s
I0428 17:10:36.519619   33446 start.go:143] duration metric: libmachine.API.Create for "minikube" took 13.396440623s
I0428 17:10:36.519630   33446 start.go:184] post-start starting for "minikube" (driver="docker")
I0428 17:10:36.519641   33446 start.go:194] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0428 17:10:36.519702   33446 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0428 17:10:36.581121   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:36.672172   33446 ssh_runner.go:148] Run: cat /etc/os-release
I0428 17:10:36.674892   33446 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0428 17:10:36.674932   33446 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0428 17:10:36.674953   33446 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0428 17:10:36.674964   33446 info.go:96] Remote host: Ubuntu 19.10
I0428 17:10:36.674978   33446 filesync.go:118] Scanning /home/chris/.minikube/addons for local assets ...
I0428 17:10:36.675034   33446 filesync.go:118] Scanning /home/chris/.minikube/files for local assets ...
I0428 17:10:36.675068   33446 start.go:187] post-start completed in 155.42656ms
I0428 17:10:36.675444   33446 start.go:105] duration metric: createHost completed in 13.552513924s
I0428 17:10:36.675462   33446 start.go:72] releasing machines lock for "minikube", held for 13.552587491s
I0428 17:10:36.731777   33446 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0428 17:10:36.731776   33446 profile.go:149] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0428 17:10:36.732291   33446 ssh_runner.go:148] Run: systemctl --version
I0428 17:10:36.797057   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:36.800151   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:36.882685   33446 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0428 17:10:36.896218   33446 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0428 17:10:36.896303   33446 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0428 17:10:36.918579   33446 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0428 17:10:37.018551   33446 ssh_runner.go:148] Run: sudo systemctl start docker
I0428 17:10:37.029555   33446 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0428 17:10:37.370768   33446 preload.go:82] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0428 17:10:37.370902   33446 preload.go:97] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0428 17:10:37.371054   33446 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0428 17:10:37.490675   33446 docker.go:356] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0428 17:10:37.490775   33446 docker.go:294] Images already preloaded, skipping extraction
I0428 17:10:37.490879   33446 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0428 17:10:37.592905   33446 docker.go:356] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0428 17:10:37.592944   33446 cache_images.go:69] Images are preloaded, skipping loading
I0428 17:10:37.592991   33446 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.255.0.3 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.255.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:10.255.0.3 ControlPlaneAddress:10.255.0.3 KubeProxyOptions:map[]}
I0428 17:10:37.593145   33446 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.255.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 10.255.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.255.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 10.255.0.3:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 10.255.0.3:10249

I0428 17:10:37.593581   33446 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0428 17:10:37.682632   33446 kubeadm.go:723] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.255.0.3 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0428 17:10:37.682719   33446 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0428 17:10:37.709125   33446 binaries.go:43] Found k8s binaries, skipping transfer
I0428 17:10:37.709270   33446 ssh_runner.go:148] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0428 17:10:37.731348   33446 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1437 bytes)
I0428 17:10:37.764160   33446 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new (532 bytes)
I0428 17:10:37.803926   33446 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service.new (349 bytes)
I0428 17:10:37.822539   33446 ssh_runner.go:148] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0428 17:10:37.837202   33446 ssh_runner.go:148] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
I0428 17:10:37.870770   33446 ssh_runner.go:148] Run: sudo systemctl enable kubelet
I0428 17:10:37.963437   33446 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0428 17:10:38.055208   33446 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0428 17:10:38.075272   33446 kubeadm.go:784] reloadKubelet took 252.74769ms
I0428 17:10:38.075301   33446 kubeadm.go:705] reloadKubelet took 482.342893ms
I0428 17:10:38.075318   33446 certs.go:51] Setting up /home/chris/.minikube/profiles/minikube for IP: 10.255.0.3
I0428 17:10:38.075368   33446 certs.go:168] skipping minikubeCA CA generation: /home/chris/.minikube/ca.key
I0428 17:10:38.075392   33446 certs.go:168] skipping proxyClientCA CA generation: /home/chris/.minikube/proxy-client-ca.key
I0428 17:10:38.075447   33446 certs.go:266] generating minikube-user signed cert: /home/chris/.minikube/profiles/minikube/client.key
I0428 17:10:38.075457   33446 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/client.crt with IP's: []
I0428 17:10:38.315996   33446 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/client.crt ...
I0428 17:10:38.316087   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.crt: {Name:mka07a58dd5663c2670aeceac28b6f674efc8b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.316292   33446 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/client.key ...
I0428 17:10:38.316326   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.key: {Name:mkf7666bb385a6e9ae21189ba35d84d3b807484f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.316449   33446 certs.go:266] generating minikube signed cert: /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e
I0428 17:10:38.316479   33446 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e with IP's: [10.255.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0428 17:10:38.402676   33446 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e ...
I0428 17:10:38.402795   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e: {Name:mk5b4e9f64c589982974f227ddd5bafae57aa503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.403475   33446 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e ...
I0428 17:10:38.403552   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e: {Name:mk93e08310566686118ac5f0cc01b60808c6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.403931   33446 certs.go:277] copying /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e -> /home/chris/.minikube/profiles/minikube/apiserver.crt
I0428 17:10:38.404265   33446 certs.go:281] copying /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e -> /home/chris/.minikube/profiles/minikube/apiserver.key
I0428 17:10:38.404499   33446 certs.go:266] generating aggregator signed cert: /home/chris/.minikube/profiles/minikube/proxy-client.key
I0428 17:10:38.404531   33446 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0428 17:10:38.618956   33446 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/proxy-client.crt ...
I0428 17:10:38.619015   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.crt: {Name:mkdc81e217efe8a180042cd6a4ac0d23a55e96c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.619279   33446 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/proxy-client.key ...
I0428 17:10:38.619292   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.key: {Name:mk49b6ed585271800d057c65a7f077c5e7fbddc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.619435   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0428 17:10:38.619497   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0428 17:10:38.619524   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0428 17:10:38.619543   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0428 17:10:38.619563   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0428 17:10:38.619581   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0428 17:10:38.619598   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0428 17:10:38.619634   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0428 17:10:38.619724   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca-key.pem (1679 bytes)
I0428 17:10:38.619774   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca.pem (1034 bytes)
I0428 17:10:38.619832   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/cert.pem (1074 bytes)
I0428 17:10:38.619870   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/key.pem (1679 bytes)
I0428 17:10:38.619907   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.621050   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1306 bytes)
I0428 17:10:38.640526   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0428 17:10:38.683988   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0428 17:10:38.702470   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0428 17:10:38.748673   33446 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0428 17:10:38.767540   33446 ssh_runner.go:215] scp /home/chris/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0428 17:10:38.806133   33446 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0428 17:10:38.843861   33446 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0428 17:10:38.866603   33446 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0428 17:10:38.903598   33446 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0428 17:10:38.931250   33446 ssh_runner.go:148] Run: openssl version
I0428 17:10:38.962925   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0428 17:10:38.977444   33446 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.981780   33446 certs.go:382] hashing: -rw-r--r-- 1 root root 1066 Apr 27 00:27 /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.981888   33446 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.990760   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0428 17:10:39.013916   33446 kubeadm.go:279] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.255.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0428 17:10:39.014229   33446 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0428 17:10:39.121422   33446 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0428 17:10:39.136956   33446 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0428 17:10:39.148834   33446 kubeadm.go:197] ignoring SystemVerification for kubeadm because of docker driver
I0428 17:10:39.148891   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0428 17:10:39.168402   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0428 17:10:39.194156   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0428 17:10:39.222745   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0428 17:10:39.249783   33446 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0428 17:12:37.756946   33446 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1m58.507101959s)
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0428 07:10:39.344716     733 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
W0428 07:10:42.737921     733 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0428 07:10:42.739363     733 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

...
💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/4172
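The kubeadm stderr above flags a cgroup-driver mismatch (`detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd"`), and the suggestion line mentions `--extra-config=kubelet.cgroup-driver=...`. Before retrying, a minimal sketch (assuming `docker` is on PATH; prints `unknown` when the daemon is unreachable) to read the driver Docker actually reports, so kubelet can be configured to match:

```shell
# Read the cgroup driver the Docker daemon reports; kubelet must be
# configured with the same value (cgroupfs vs systemd) or it fails checks.
driver=$(docker info --format '{{.CgroupDriver}}' 2>/dev/null || echo unknown)
echo "docker cgroup driver: ${driver}"
```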

Optional: Full output of minikube logs command:

chris@chris-trotman-laptop ~ % minikube logs

==> Docker <==
-- Logs begin at Tue 2020-04-28 07:10:28 UTC, end at Tue 2020-04-28 07:25:06 UTC. --
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955447170Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955483783Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955512743Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955526815Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955627504Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006210a0, CONNECTING" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.956005527Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006210a0, READY" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957012697Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957042027Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957060042Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957070956Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957123360Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006e44b0, CONNECTING" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957411136Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006e44b0, READY" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.960835439Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011890230Z" level=warning msg="Your kernel does not support cgroup rt period"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011918626Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011930897Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011939679Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.012247268Z" level=info msg="Loading containers: start."
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.144167586Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.225809008Z" level=info msg="Loading containers: done."
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.487891961Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.488161860Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.488236914Z" level=info msg="Daemon has completed initialization"
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.529699427Z" level=info msg="API listen on /run/docker.sock"
Apr 28 07:10:30 minikube systemd[1]: Started Docker Application Container Engine.
Apr 28 07:10:34 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Apr 28 07:10:34 minikube systemd[1]: Stopping Docker Application Container Engine...
Apr 28 07:10:34 minikube dockerd[117]: time="2020-04-28T07:10:34.129070096Z" level=info msg="Processing signal 'terminated'"
Apr 28 07:10:34 minikube dockerd[117]: time="2020-04-28T07:10:34.129996176Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Apr 28 07:10:34 minikube dockerd[117]: time="2020-04-28T07:10:34.130636295Z" level=info msg="Daemon shutdown complete"
Apr 28 07:10:34 minikube systemd[1]: docker.service: Succeeded.
Apr 28 07:10:34 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 28 07:10:34 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.186156320Z" level=info msg="Starting up"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.187977039Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188003557Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188043755Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188062194Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188193523Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00074fe80, CONNECTING" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188555491Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00074fe80, READY" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189221615Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189240699Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189256546Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189269570Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189312280Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007e65b0, CONNECTING" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189526352Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007e65b0, READY" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.191769901Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236664613Z" level=warning msg="Your kernel does not support cgroup rt period"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236702122Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236714677Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236725144Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236916885Z" level=info msg="Loading containers: start."
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.381287962Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.442888341Z" level=info msg="Loading containers: done."
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.486587080Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.486828678Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.486872058Z" level=info msg="Daemon has completed initialization"
Apr 28 07:10:36 minikube systemd[1]: Started Docker Application Container Engine.
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.515089778Z" level=info msg="API listen on /var/run/docker.sock"
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.515328023Z" level=info msg="API listen on [::]:2376"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

==> describe nodes <==
E0428 17:25:06.962887 60236 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
...

==> kernel <==
07:25:06 up 4:03, 0 users, load average: 1.57, 1.39, 1.16
Linux minikube 5.6.7-arch1-1 #1 SMP PREEMPT Thu, 23 Apr 2020 09:13:56 +0000 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kubelet <==
-- Logs begin at Tue 2020-04-28 07:10:28 UTC, end at Tue 2020-04-28 07:25:07 UTC. --
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360443 25347 state_mem.go:88] [cpumanager] updated default cpuset: ""
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360460 25347 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360477 25347 policy_none.go:43] [cpumanager] none policy: Start
Apr 28 07:25:03 minikube kubelet[25347]: W0428 07:25:03.360515 25347 fs.go:540] stat failed on /dev/mapper/cryptroot with error: no such file or directory
Apr 28 07:25:03 minikube kubelet[25347]: F0428 07:25:03.360543 25347 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 28 in cached partitions map
Apr 28 07:25:03 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Apr 28 07:25:03 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
...

Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.312374 25556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 28 07:25:04 minikube kubelet[25556]: W0428 07:25:04.344205 25556 fs.go:206] stat failed on /dev/mapper/cryptroot with error: no such file or directory
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355387 25556 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355796 25556 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355815 25556 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355895 25556 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355903 25556 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355908 25556 container_manager_linux.go:306] Creating device plugin manager: true
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355995 25556 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.356007 25556 client.go:92] Start docker client with request timeout=2m0s
Apr 28 07:25:04 minikube kubelet[25556]: W0428 07:25:04.360846 25556 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.360869 25556 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.365791 25556 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371571 25556 docker_service.go:258] Docker Info: &{ID:JJU7:OSC4:67QH:5P6G:ZRID:BJZK:5B3A:SRU5:K4BX:YQBV:2H22:MGXF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem btrfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2020-04-28T07:25:04.366616394Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.6.7-arch1-1 OperatingSystem:Ubuntu 19.10 (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0001b2fc0 NCPU:4 MemTotal:16670576640 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:449e926990f8539fd00844b26c07e2f1e306c760 Expected:449e926990f8539fd00844b26c07e2f1e306c760} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371647 25556 docker_service.go:271] Setting cgroupDriver to cgroupfs
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378056 25556 remote_runtime.go:59] parsed scheme: ""
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378073 25556 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378104 25556 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378112 25556 clientconn.go:933] ClientConn switching balancer to "pick_first"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378163 25556 remote_image.go:50] parsed scheme: ""
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378173 25556 remote_image.go:50] scheme "" not registered, fallback to default scheme
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378184 25556 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378192 25556 clientconn.go:933] ClientConn switching balancer to "pick_first"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378223 25556 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378276 25556 kubelet.go:317] Watching apiserver
...
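The `Docker Info` line in the log above is the relevant clue: the storage driver is `overlay2` with `Backing Filesystem btrfs` and `Native Overlay Diff false`. A quick way to confirm what filesystem backs Docker's data root on an affected host (a minimal sketch, assuming GNU coreutils; point it at `/var/lib/docker`, Docker's default data root):

```shell
# Print the filesystem type backing a given path.
# On the reporter's machine, pointing this at /var/lib/docker
# (Docker's default data root) would print "btrfs".
stat -f -c %T /
```

The same information is available from Docker itself via `docker info` under "Storage Driver" and "Backing Filesystem".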

    Labels

    co/docker-driver: Issues related to kubernetes in container
    help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
    kind/documentation: Categorizes issue or PR as related to documentation.
    kind/feature: Categorizes issue or PR as related to a new feature.
    lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
    needs-faq-entry: Things that could use documentation in a FAQ
    needs-problem-regexp
    priority/backlog: Higher priority than priority/awaiting-more-evidence.