Running Suite: Kubernetes e2e suite =================================== Random Seed: 1677716098 - Will randomize all specs Will run 7052 specs Running in parallel across 47 nodes Mar 2 00:15:01.487: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 Mar 2 00:15:01.488: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 2 00:15:01.514: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 2 00:15:01.532: INFO: 3 / 3 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 2 00:15:01.532: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready. Mar 2 00:15:01.532: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 2 00:15:01.535: INFO: e2e test version: v1.23.17 Mar 2 00:15:01.536: INFO: kube-apiserver version: v1.23.16+k3s1 Mar 2 00:15:01.536: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 Mar 2 00:15:01.540: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.563: INFO: >>> kubeConfig: /tmp/kubeconfig-446130155 Mar 2 00:15:01.570: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.597: INFO: >>> kubeConfig: /tmp/kubeconfig-3761205576 Mar 2 00:15:01.605: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.597: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 Mar 2 00:15:01.605: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.611: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 Mar 2 00:15:01.624: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.624: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 Mar 2 00:15:01.630: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.626: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 Mar 2 00:15:01.632: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.627: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 Mar 2 00:15:01.637: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.651: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 Mar 2 00:15:01.659: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.655: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 Mar 2 00:15:01.667: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.653: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 Mar 2 00:15:01.668: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.650: INFO: >>> kubeConfig: /tmp/kubeconfig-3654524011 Mar 2 00:15:01.658: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.673: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 Mar 2 00:15:01.682: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.675: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 Mar 2 00:15:01.693: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.676: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 Mar 2 00:15:01.698: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.700: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 Mar 2 00:15:01.714: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.708: INFO: >>> kubeConfig: /tmp/kubeconfig-2645107774 Mar 2 00:15:01.730: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.707: INFO: >>> kubeConfig: /tmp/kubeconfig-2095741199 Mar 2 00:15:01.733: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.713: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 Mar 2 00:15:01.734: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.717: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 Mar 2 00:15:01.738: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.722: INFO: >>> kubeConfig: /tmp/kubeconfig-3213862209 Mar 2 00:15:01.742: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.731: INFO: >>> kubeConfig: /tmp/kubeconfig-2274336506 Mar 2 00:15:01.745: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.728: INFO: >>> kubeConfig: 
/tmp/kubeconfig-3747375592 Mar 2 00:15:01.746: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.740: INFO: >>> kubeConfig: /tmp/kubeconfig-3970272666 Mar 2 00:15:01.750: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.740: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 Mar 2 00:15:01.755: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.737: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 Mar 2 00:15:01.754: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.754: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 Mar 2 00:15:01.768: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.755: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 Mar 2 00:15:01.774: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.762: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 Mar 2 00:15:01.782: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.764: INFO: >>> kubeConfig: /tmp/kubeconfig-2153458730 Mar 2 00:15:01.785: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.772: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 Mar 2 00:15:01.789: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.771: INFO: >>> kubeConfig: /tmp/kubeconfig-178066528 Mar 2 00:15:01.791: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 Mar 2 00:15:01.791: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.780: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 Mar 2 00:15:01.795: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.792: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 Mar 2 00:15:01.811: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.806: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 Mar 2 00:15:01.826: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.834: INFO: >>> kubeConfig: /tmp/kubeconfig-2820862176 Mar 2 00:15:01.846: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.834: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:15:01.849: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.849: INFO: >>> kubeConfig: /tmp/kubeconfig-200441725 Mar 2 00:15:01.865: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.853: INFO: >>> kubeConfig: /tmp/kubeconfig-3780238515 Mar 2 00:15:01.874: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.854: INFO: >>> kubeConfig: /tmp/kubeconfig-70536112 Mar 2 00:15:01.879: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.860: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 Mar 2 00:15:01.883: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.883: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327 Mar 2 00:15:01.903: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.887: INFO: >>> kubeConfig: /tmp/kubeconfig-2889160273 Mar 2 00:15:01.910: INFO: Cluster IP family: ipv4 Mar 2 00:15:01.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1867053252 Mar 2 00:15:01.916: INFO: Cluster IP family: ipv4 
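(Reference sketch: each of the parallel workers above only loads its per-worker kubeconfig and reports the cluster IP family. A minimal client-go equivalent is shown below; the kubeconfig path and the probe via the ClusterIP of the "kubernetes" Service are illustrative assumptions, not the e2e framework's actual detection code.)

package main

import (
	"context"
	"fmt"
	"net"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the suite above reads per-worker files under /tmp.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Infer the IP family from the ClusterIP of the default "kubernetes" Service.
	svc, err := client.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	family := "ipv4"
	if ip := net.ParseIP(svc.Spec.ClusterIP); ip != nil && ip.To4() == nil {
		family = "ipv6"
	}
	fmt.Println("Cluster IP family:", family)
}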
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Mar 2 00:15:01.952: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 Mar 2 00:15:01.971: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSS ------------------------------ Mar 2 00:15:01.955: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 Mar 2 00:15:01.983: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.750: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 STEP: Building a namespace api object, basename configmap W0302 00:15:02.029649 223 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.029: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:02.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5447" for this suite. 
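(Reference sketch: the ConfigMap lifecycle steps above (create, fetch, patch, list with a label selector across namespaces, delete by collection) correspond roughly to the client-go calls below. Names, labels and the namespace argument are illustrative, not the test's actual values.)

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// configMapLifecycle mirrors the logged steps: create, get, patch, list, delete by collection.
func configMapLifecycle(ctx context.Context, c kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-example", Labels: map[string]string{"test": "lifecycle"}},
		Data:       map[string]string{"key": "value"},
	}
	if _, err := c.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	if _, err := c.CoreV1().ConfigMaps(ns).Get(ctx, cm.Name, metav1.GetOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"data":{"key":"patched"}}`)
	if _, err := c.CoreV1().ConfigMaps(ns).Patch(ctx, cm.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// List in all namespaces with the label selector, as in the logged step.
	if _, err := c.CoreV1().ConfigMaps(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: "test=lifecycle"}); err != nil {
		return err
	}
	// Delete by collection using the same selector.
	return c.CoreV1().ConfigMaps(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "test=lifecycle"})
}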
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.791: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename services W0302 00:15:02.210728 207 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.210: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should delete a collection of services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a collection of services Mar 2 00:15:02.214: INFO: Creating e2e-svc-a-zxg5v Mar 2 00:15:02.301: INFO: Creating e2e-svc-b-9bg2g Mar 2 00:15:02.335: INFO: Creating e2e-svc-c-cqpt6 STEP: deleting service collection Mar 2 00:15:02.576: INFO: Collection of services has been deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:02.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6683" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.914: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename events W0302 00:15:02.554093 210 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.554: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create set of events Mar 2 00:15:02.576: INFO: created test-event-1 Mar 2 00:15:02.592: INFO: created test-event-2 Mar 2 00:15:02.612: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Mar 2 00:15:02.626: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Mar 2 00:15:02.773: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:02.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4370" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":1,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:02.013: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 STEP: Building a namespace api object, basename events W0302 00:15:02.697759 339 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.697: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:02.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1215" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.692: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 STEP: Building a namespace api object, basename gc W0302 00:15:01.820230 333 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.820: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0302 00:15:03.026451 333 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. Mar 2 00:15:03.026: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:03.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8708" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.799: INFO: >>> kubeConfig: /tmp/kubeconfig-3213862209 STEP: Building a namespace api object, basename watch W0302 00:15:02.340454 328 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.340: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting a starting resourceVersion
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:15:05.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2033" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:15:01.671: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609
STEP: Building a namespace api object, basename crd-publish-openapi
W0302 00:15:01.750137 316 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Mar 2 00:15:01.750: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Mar 2 00:15:01.758: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609
Mar 2 00:15:04.170: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:15:12.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4643" for this suite.
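(Reference sketch: the test above registers two CRDs that share a group and version but differ in kind, then checks that both show up in the published OpenAPI document. A minimal way to create such a pair with the apiextensions client is sketched below; the group, kinds and schema are illustrative, not the test's actual fixtures.)

package e2esketch

import (
	"context"
	"fmt"
	"strings"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// createTwoKinds creates two CRDs in the same group/version with different kinds.
func createTwoKinds(ctx context.Context, cfg *rest.Config) error {
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for _, kind := range []string{"Foo", "Bar"} {
		plural := strings.ToLower(kind) + "s"
		crd := &apiextensionsv1.CustomResourceDefinition{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("%s.example.com", plural)},
			Spec: apiextensionsv1.CustomResourceDefinitionSpec{
				Group: "example.com",
				Scope: apiextensionsv1.NamespaceScoped,
				Names: apiextensionsv1.CustomResourceDefinitionNames{Plural: plural, Singular: strings.ToLower(kind), Kind: kind},
				Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
					Name:    "v1",
					Served:  true,
					Storage: true,
					Schema: &apiextensionsv1.CustomResourceValidation{
						OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
					},
				}},
			},
		}
		if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}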
• [SLOW TEST:11.020 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:12.701: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:12.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3477" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:02.833: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 2 00:15:02.851: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9621 1ba7dd69-77c5-4a89-a3fb-233230f8ac13 1236 0 2023-03-02 00:15:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:15:02.852: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9621 1ba7dd69-77c5-4a89-a3fb-233230f8ac13 1238 0 2023-03-02 00:15:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:15:02.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9621 1ba7dd69-77c5-4a89-a3fb-233230f8ac13 1240 0 2023-03-02 00:15:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 2 00:15:12.868: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9621 1ba7dd69-77c5-4a89-a3fb-233230f8ac13 2220 0 2023-03-02 00:15:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:15:12.868: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9621 1ba7dd69-77c5-4a89-a3fb-233230f8ac13 2222 0 2023-03-02 00:15:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:15:12.868: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9621 1ba7dd69-77c5-4a89-a3fb-233230f8ac13 2223 0 2023-03-02 00:15:02 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:12.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9621" for this suite. 
• [SLOW TEST:10.041 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":2,"skipped":96,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:15:01.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327
STEP: Building a namespace api object, basename crd-publish-openapi
W0302 00:15:02.601646 201 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Mar 2 00:15:02.601: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: set up a multi version CRD
Mar 2 00:15:02.605: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:15:16.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4221" for this suite.
• [SLOW TEST:14.594 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:15:01.802: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706
STEP: Building a namespace api object, basename resourcequota
W0302 00:15:02.354012 259 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Mar 2 00:15:02.354: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:18.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8364" for this suite. • [SLOW TEST:16.619 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":1,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.791: INFO: >>> kubeConfig: /tmp/kubeconfig-178066528 STEP: Building a namespace api object, basename gc W0302 00:15:02.266466 212 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.266: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Mar 2 00:15:22.497: INFO: 69 pods remaining Mar 2 00:15:22.497: INFO: 69 pods has nil DeletionTimestamp Mar 2 00:15:22.497: INFO: STEP: Gathering metrics W0302 00:15:27.499650 212 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
Mar 2 00:15:27.499: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Mar 2 00:15:27.499: INFO: Deleting pod "simpletest-rc-to-be-deleted-ddjmx" in namespace "gc-5746" Mar 2 00:15:27.505: INFO: Deleting pod "simpletest-rc-to-be-deleted-2lhg4" in namespace "gc-5746" Mar 2 00:15:27.512: INFO: Deleting pod "simpletest-rc-to-be-deleted-q84v9" in namespace "gc-5746" Mar 2 00:15:27.517: INFO: Deleting pod "simpletest-rc-to-be-deleted-z62x2" in namespace "gc-5746" Mar 2 00:15:27.523: INFO: Deleting pod "simpletest-rc-to-be-deleted-tm5f5" in namespace "gc-5746" Mar 2 00:15:27.528: INFO: Deleting pod "simpletest-rc-to-be-deleted-mpxqn" in namespace "gc-5746" Mar 2 00:15:27.535: INFO: Deleting pod "simpletest-rc-to-be-deleted-f6m2w" in namespace "gc-5746" Mar 2 00:15:27.541: INFO: Deleting pod "simpletest-rc-to-be-deleted-lw8kn" in namespace "gc-5746" Mar 2 00:15:27.546: INFO: Deleting pod "simpletest-rc-to-be-deleted-j646g" in namespace "gc-5746" Mar 2 00:15:27.551: INFO: Deleting pod "simpletest-rc-to-be-deleted-4kscn" in namespace "gc-5746" Mar 2 00:15:27.556: INFO: Deleting pod "simpletest-rc-to-be-deleted-fqxkn" in namespace "gc-5746" Mar 2 00:15:27.561: INFO: Deleting pod "simpletest-rc-to-be-deleted-cbkr7" in namespace "gc-5746" Mar 2 00:15:27.566: INFO: Deleting pod "simpletest-rc-to-be-deleted-6fcfv" in namespace "gc-5746" Mar 2 00:15:27.571: INFO: Deleting pod "simpletest-rc-to-be-deleted-8lg56" in namespace "gc-5746" Mar 2 00:15:27.577: INFO: Deleting pod "simpletest-rc-to-be-deleted-75tvv" in namespace "gc-5746" Mar 2 00:15:27.583: INFO: Deleting pod "simpletest-rc-to-be-deleted-2gckv" in namespace "gc-5746" Mar 2 00:15:27.588: INFO: Deleting pod "simpletest-rc-to-be-deleted-lxpfm" in namespace "gc-5746" Mar 2 00:15:27.593: INFO: Deleting pod "simpletest-rc-to-be-deleted-6hp2z" in namespace "gc-5746" Mar 2 00:15:27.598: INFO: Deleting pod "simpletest-rc-to-be-deleted-vcdll" in namespace "gc-5746" Mar 2 00:15:27.604: INFO: Deleting pod "simpletest-rc-to-be-deleted-cnsw4" in namespace "gc-5746" Mar 2 00:15:27.611: INFO: Deleting pod "simpletest-rc-to-be-deleted-lhpws" in namespace "gc-5746" Mar 2 00:15:27.617: INFO: Deleting pod "simpletest-rc-to-be-deleted-7j5hg" in namespace "gc-5746" Mar 2 00:15:27.624: INFO: Deleting pod "simpletest-rc-to-be-deleted-h6nf7" in namespace "gc-5746" Mar 2 00:15:27.629: INFO: Deleting pod "simpletest-rc-to-be-deleted-crfpj" in namespace "gc-5746" Mar 2 00:15:27.634: INFO: Deleting pod "simpletest-rc-to-be-deleted-dxvbk" in namespace "gc-5746" Mar 2 00:15:27.639: INFO: Deleting pod "simpletest-rc-to-be-deleted-jwk5m" in namespace "gc-5746" Mar 2 00:15:27.644: INFO: Deleting pod "simpletest-rc-to-be-deleted-2wdrr" in 
namespace "gc-5746" Mar 2 00:15:27.649: INFO: Deleting pod "simpletest-rc-to-be-deleted-vlcpw" in namespace "gc-5746" Mar 2 00:15:27.656: INFO: Deleting pod "simpletest-rc-to-be-deleted-vrhqz" in namespace "gc-5746" Mar 2 00:15:27.662: INFO: Deleting pod "simpletest-rc-to-be-deleted-lv9fj" in namespace "gc-5746" Mar 2 00:15:27.668: INFO: Deleting pod "simpletest-rc-to-be-deleted-92jcd" in namespace "gc-5746" Mar 2 00:15:27.673: INFO: Deleting pod "simpletest-rc-to-be-deleted-7v7df" in namespace "gc-5746" Mar 2 00:15:27.680: INFO: Deleting pod "simpletest-rc-to-be-deleted-td7mh" in namespace "gc-5746" Mar 2 00:15:27.685: INFO: Deleting pod "simpletest-rc-to-be-deleted-x8cll" in namespace "gc-5746" Mar 2 00:15:27.691: INFO: Deleting pod "simpletest-rc-to-be-deleted-tkp5f" in namespace "gc-5746" Mar 2 00:15:27.697: INFO: Deleting pod "simpletest-rc-to-be-deleted-qjb82" in namespace "gc-5746" Mar 2 00:15:27.702: INFO: Deleting pod "simpletest-rc-to-be-deleted-7vlwj" in namespace "gc-5746" Mar 2 00:15:27.707: INFO: Deleting pod "simpletest-rc-to-be-deleted-hnxnb" in namespace "gc-5746" Mar 2 00:15:27.712: INFO: Deleting pod "simpletest-rc-to-be-deleted-kcm96" in namespace "gc-5746" Mar 2 00:15:27.718: INFO: Deleting pod "simpletest-rc-to-be-deleted-z8nwv" in namespace "gc-5746" Mar 2 00:15:27.723: INFO: Deleting pod "simpletest-rc-to-be-deleted-b5t2f" in namespace "gc-5746" Mar 2 00:15:27.730: INFO: Deleting pod "simpletest-rc-to-be-deleted-sjmp5" in namespace "gc-5746" Mar 2 00:15:27.736: INFO: Deleting pod "simpletest-rc-to-be-deleted-qjqzn" in namespace "gc-5746" Mar 2 00:15:27.743: INFO: Deleting pod "simpletest-rc-to-be-deleted-rdwbr" in namespace "gc-5746" Mar 2 00:15:27.750: INFO: Deleting pod "simpletest-rc-to-be-deleted-68gss" in namespace "gc-5746" Mar 2 00:15:27.756: INFO: Deleting pod "simpletest-rc-to-be-deleted-9kn44" in namespace "gc-5746" Mar 2 00:15:27.761: INFO: Deleting pod "simpletest-rc-to-be-deleted-9qkm4" in namespace "gc-5746" Mar 2 00:15:27.766: INFO: Deleting pod "simpletest-rc-to-be-deleted-hgs94" in namespace "gc-5746" Mar 2 00:15:27.772: INFO: Deleting pod "simpletest-rc-to-be-deleted-5nphv" in namespace "gc-5746" Mar 2 00:15:27.782: INFO: Deleting pod "simpletest-rc-to-be-deleted-ztlt8" in namespace "gc-5746" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:27.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5746" for this suite. 
• [SLOW TEST:26.002 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.765: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 STEP: Building a namespace api object, basename resourcequota W0302 00:15:02.096840 246 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.096: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:30.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7802" for this suite. • [SLOW TEST:28.368 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":1,"skipped":7,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:30.140: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: validating api versions Mar 2 00:15:30.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3471101731 --namespace=kubectl-3215 api-versions' Mar 2 00:15:30.192: INFO: stderr: "" Mar 2 00:15:30.192: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nhelm.cattle.io/v1\nk3s.cattle.io/v1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:30.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3215" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.697: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename gc W0302 00:15:01.909308 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.909: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0302 00:15:46.982765 26 metrics_grabber.go:151] Can't find kube-controller-manager pod. 
Grabbing metrics from kube-controller-manager is disabled. Mar 2 00:15:46.982: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Mar 2 00:15:46.982: INFO: Deleting pod "simpletest.rc-pww4j" in namespace "gc-4285" Mar 2 00:15:46.988: INFO: Deleting pod "simpletest.rc-fg6vx" in namespace "gc-4285" Mar 2 00:15:46.993: INFO: Deleting pod "simpletest.rc-dktqb" in namespace "gc-4285" Mar 2 00:15:46.999: INFO: Deleting pod "simpletest.rc-tjlfc" in namespace "gc-4285" Mar 2 00:15:47.012: INFO: Deleting pod "simpletest.rc-7j7wn" in namespace "gc-4285" Mar 2 00:15:47.018: INFO: Deleting pod "simpletest.rc-f9p46" in namespace "gc-4285" Mar 2 00:15:47.025: INFO: Deleting pod "simpletest.rc-x2cr2" in namespace "gc-4285" Mar 2 00:15:47.030: INFO: Deleting pod "simpletest.rc-gthx6" in namespace "gc-4285" Mar 2 00:15:47.035: INFO: Deleting pod "simpletest.rc-j845g" in namespace "gc-4285" Mar 2 00:15:47.040: INFO: Deleting pod "simpletest.rc-g75hf" in namespace "gc-4285" Mar 2 00:15:47.046: INFO: Deleting pod "simpletest.rc-gc62j" in namespace "gc-4285" Mar 2 00:15:47.051: INFO: Deleting pod "simpletest.rc-z2lc8" in namespace "gc-4285" Mar 2 00:15:47.057: INFO: Deleting pod "simpletest.rc-j58sk" in namespace "gc-4285" Mar 2 00:15:47.065: INFO: Deleting pod "simpletest.rc-shddr" in namespace "gc-4285" Mar 2 00:15:47.070: INFO: Deleting pod "simpletest.rc-ljp8t" in namespace "gc-4285" Mar 2 00:15:47.076: INFO: Deleting pod "simpletest.rc-97wsm" in namespace "gc-4285" Mar 2 00:15:47.082: INFO: Deleting pod "simpletest.rc-vscp5" in namespace "gc-4285" Mar 2 00:15:47.089: INFO: Deleting pod "simpletest.rc-blknj" in namespace "gc-4285" Mar 2 00:15:47.095: INFO: Deleting pod "simpletest.rc-lxw5q" in namespace "gc-4285" Mar 2 00:15:47.101: INFO: Deleting pod "simpletest.rc-8865x" in namespace "gc-4285" Mar 2 00:15:47.107: INFO: Deleting pod "simpletest.rc-w6zr5" in namespace "gc-4285" Mar 2 00:15:47.113: INFO: Deleting pod "simpletest.rc-mkg4r" in namespace "gc-4285" Mar 2 00:15:47.119: INFO: Deleting pod "simpletest.rc-jqxff" in namespace "gc-4285" Mar 2 00:15:47.127: INFO: Deleting pod "simpletest.rc-q82b6" in namespace "gc-4285" Mar 2 00:15:47.133: INFO: Deleting pod "simpletest.rc-6gl6l" in namespace "gc-4285" Mar 2 00:15:47.139: INFO: Deleting pod "simpletest.rc-n659p" in namespace "gc-4285" Mar 2 00:15:47.145: INFO: Deleting pod "simpletest.rc-7g68m" in namespace "gc-4285" Mar 2 00:15:47.150: INFO: Deleting pod "simpletest.rc-wrchd" in namespace "gc-4285" Mar 2 00:15:47.156: INFO: Deleting pod "simpletest.rc-zjzr6" in namespace "gc-4285" Mar 2 00:15:47.162: INFO: Deleting pod "simpletest.rc-kl4s8" in namespace "gc-4285" Mar 2 00:15:47.167: INFO: Deleting pod 
"simpletest.rc-jlppp" in namespace "gc-4285" Mar 2 00:15:47.173: INFO: Deleting pod "simpletest.rc-g2v5b" in namespace "gc-4285" Mar 2 00:15:47.178: INFO: Deleting pod "simpletest.rc-lnhd7" in namespace "gc-4285" Mar 2 00:15:47.185: INFO: Deleting pod "simpletest.rc-l79lc" in namespace "gc-4285" Mar 2 00:15:47.191: INFO: Deleting pod "simpletest.rc-gt99s" in namespace "gc-4285" Mar 2 00:15:47.196: INFO: Deleting pod "simpletest.rc-n4rkh" in namespace "gc-4285" Mar 2 00:15:47.202: INFO: Deleting pod "simpletest.rc-8267l" in namespace "gc-4285" Mar 2 00:15:47.207: INFO: Deleting pod "simpletest.rc-ff8ft" in namespace "gc-4285" Mar 2 00:15:47.213: INFO: Deleting pod "simpletest.rc-wb2l4" in namespace "gc-4285" Mar 2 00:15:47.219: INFO: Deleting pod "simpletest.rc-4qmfm" in namespace "gc-4285" Mar 2 00:15:47.225: INFO: Deleting pod "simpletest.rc-wrmjd" in namespace "gc-4285" Mar 2 00:15:47.230: INFO: Deleting pod "simpletest.rc-v68fx" in namespace "gc-4285" Mar 2 00:15:47.237: INFO: Deleting pod "simpletest.rc-2gjxx" in namespace "gc-4285" Mar 2 00:15:47.243: INFO: Deleting pod "simpletest.rc-c4xkf" in namespace "gc-4285" Mar 2 00:15:47.248: INFO: Deleting pod "simpletest.rc-t644n" in namespace "gc-4285" Mar 2 00:15:47.253: INFO: Deleting pod "simpletest.rc-x6nfq" in namespace "gc-4285" Mar 2 00:15:47.259: INFO: Deleting pod "simpletest.rc-d2spg" in namespace "gc-4285" Mar 2 00:15:47.264: INFO: Deleting pod "simpletest.rc-lb42j" in namespace "gc-4285" Mar 2 00:15:47.271: INFO: Deleting pod "simpletest.rc-55fmh" in namespace "gc-4285" Mar 2 00:15:47.279: INFO: Deleting pod "simpletest.rc-mkv9d" in namespace "gc-4285" Mar 2 00:15:47.298: INFO: Deleting pod "simpletest.rc-lh9ld" in namespace "gc-4285" Mar 2 00:15:47.309: INFO: Deleting pod "simpletest.rc-ck5zd" in namespace "gc-4285" Mar 2 00:15:47.317: INFO: Deleting pod "simpletest.rc-7dbhb" in namespace "gc-4285" Mar 2 00:15:47.326: INFO: Deleting pod "simpletest.rc-gt2ld" in namespace "gc-4285" Mar 2 00:15:47.336: INFO: Deleting pod "simpletest.rc-qzx2h" in namespace "gc-4285" Mar 2 00:15:47.344: INFO: Deleting pod "simpletest.rc-59qc8" in namespace "gc-4285" Mar 2 00:15:47.381: INFO: Deleting pod "simpletest.rc-6sh5v" in namespace "gc-4285" Mar 2 00:15:47.431: INFO: Deleting pod "simpletest.rc-z5xxd" in namespace "gc-4285" Mar 2 00:15:47.481: INFO: Deleting pod "simpletest.rc-vmstb" in namespace "gc-4285" Mar 2 00:15:47.532: INFO: Deleting pod "simpletest.rc-vfsqz" in namespace "gc-4285" Mar 2 00:15:47.582: INFO: Deleting pod "simpletest.rc-wtljr" in namespace "gc-4285" Mar 2 00:15:47.630: INFO: Deleting pod "simpletest.rc-rddxq" in namespace "gc-4285" Mar 2 00:15:47.680: INFO: Deleting pod "simpletest.rc-7xwhx" in namespace "gc-4285" Mar 2 00:15:47.730: INFO: Deleting pod "simpletest.rc-mzr9x" in namespace "gc-4285" Mar 2 00:15:47.782: INFO: Deleting pod "simpletest.rc-4mnn6" in namespace "gc-4285" Mar 2 00:15:47.830: INFO: Deleting pod "simpletest.rc-lhwjr" in namespace "gc-4285" Mar 2 00:15:47.883: INFO: Deleting pod "simpletest.rc-q6hwd" in namespace "gc-4285" Mar 2 00:15:47.931: INFO: Deleting pod "simpletest.rc-c7jf7" in namespace "gc-4285" Mar 2 00:15:47.983: INFO: Deleting pod "simpletest.rc-ghsxz" in namespace "gc-4285" Mar 2 00:15:48.031: INFO: Deleting pod "simpletest.rc-7g9b2" in namespace "gc-4285" Mar 2 00:15:48.082: INFO: Deleting pod "simpletest.rc-zg4bj" in namespace "gc-4285" Mar 2 00:15:48.131: INFO: Deleting pod "simpletest.rc-l7jcd" in namespace "gc-4285" Mar 2 00:15:48.179: INFO: Deleting pod "simpletest.rc-xvb8r" in 
namespace "gc-4285" Mar 2 00:15:48.231: INFO: Deleting pod "simpletest.rc-n2lnk" in namespace "gc-4285" Mar 2 00:15:48.280: INFO: Deleting pod "simpletest.rc-s7l9j" in namespace "gc-4285" Mar 2 00:15:48.330: INFO: Deleting pod "simpletest.rc-tv6p8" in namespace "gc-4285" Mar 2 00:15:48.382: INFO: Deleting pod "simpletest.rc-xbbrf" in namespace "gc-4285" Mar 2 00:15:48.433: INFO: Deleting pod "simpletest.rc-5xhrm" in namespace "gc-4285" Mar 2 00:15:48.481: INFO: Deleting pod "simpletest.rc-bl6gk" in namespace "gc-4285" Mar 2 00:15:48.531: INFO: Deleting pod "simpletest.rc-w85fl" in namespace "gc-4285" Mar 2 00:15:48.581: INFO: Deleting pod "simpletest.rc-hczn5" in namespace "gc-4285" Mar 2 00:15:48.633: INFO: Deleting pod "simpletest.rc-rhfkv" in namespace "gc-4285" Mar 2 00:15:48.681: INFO: Deleting pod "simpletest.rc-t4h5m" in namespace "gc-4285" Mar 2 00:15:48.731: INFO: Deleting pod "simpletest.rc-qdch2" in namespace "gc-4285" Mar 2 00:15:48.780: INFO: Deleting pod "simpletest.rc-ckr9r" in namespace "gc-4285" Mar 2 00:15:48.830: INFO: Deleting pod "simpletest.rc-6wmxt" in namespace "gc-4285" Mar 2 00:15:48.879: INFO: Deleting pod "simpletest.rc-n6wxn" in namespace "gc-4285" Mar 2 00:15:48.930: INFO: Deleting pod "simpletest.rc-xfnf7" in namespace "gc-4285" Mar 2 00:15:48.981: INFO: Deleting pod "simpletest.rc-z7v6h" in namespace "gc-4285" Mar 2 00:15:49.031: INFO: Deleting pod "simpletest.rc-gk246" in namespace "gc-4285" Mar 2 00:15:49.080: INFO: Deleting pod "simpletest.rc-kqmjj" in namespace "gc-4285" Mar 2 00:15:49.135: INFO: Deleting pod "simpletest.rc-dzw8v" in namespace "gc-4285" Mar 2 00:15:49.180: INFO: Deleting pod "simpletest.rc-b7gjc" in namespace "gc-4285" Mar 2 00:15:49.230: INFO: Deleting pod "simpletest.rc-zrb4l" in namespace "gc-4285" Mar 2 00:15:49.281: INFO: Deleting pod "simpletest.rc-b584g" in namespace "gc-4285" Mar 2 00:15:49.332: INFO: Deleting pod "simpletest.rc-9rwzd" in namespace "gc-4285" Mar 2 00:15:49.380: INFO: Deleting pod "simpletest.rc-kx9sk" in namespace "gc-4285" Mar 2 00:15:49.431: INFO: Deleting pod "simpletest.rc-p4kpk" in namespace "gc-4285" Mar 2 00:15:49.480: INFO: Deleting pod "simpletest.rc-sr7g2" in namespace "gc-4285" Mar 2 00:15:49.530: INFO: Deleting pod "simpletest.rc-8zjhw" in namespace "gc-4285" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:15:49.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4285" for this suite. 
• [SLOW TEST:47.981 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.684: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename configmap W0302 00:15:01.777157 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.777: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-upd-f803dc51-de3f-4384-9831-caf38355a002 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:06.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3397" for this suite. • [SLOW TEST:64.607 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.825: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 STEP: Building a namespace api object, basename emptydir W0302 00:15:02.384722 325 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.384: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 2 00:15:02.422: INFO: Waiting up to 5m0s for pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b" in namespace "emptydir-14" to be "Succeeded or Failed" Mar 2 00:15:02.434: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.705755ms Mar 2 00:15:04.436: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013877565s Mar 2 00:15:06.439: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016680865s Mar 2 00:15:08.441: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019330044s Mar 2 00:15:10.444: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022095285s Mar 2 00:15:12.447: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024888032s Mar 2 00:15:14.449: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026746306s Mar 2 00:15:16.451: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.028867299s Mar 2 00:15:18.454: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.031749329s Mar 2 00:15:20.456: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.033913569s Mar 2 00:15:22.459: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.036816932s Mar 2 00:15:24.461: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.038965224s Mar 2 00:15:26.463: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.041197055s Mar 2 00:15:28.466: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.043759997s Mar 2 00:15:30.468: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.045924724s Mar 2 00:15:32.471: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.049260866s Mar 2 00:15:34.474: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.052074768s Mar 2 00:15:36.476: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.053812228s Mar 2 00:15:38.478: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.056333275s Mar 2 00:15:40.481: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.058711213s Mar 2 00:15:42.483: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 40.061230601s Mar 2 00:15:44.485: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.063251912s Mar 2 00:15:46.488: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.065870361s Mar 2 00:15:48.491: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 46.06881083s Mar 2 00:15:50.494: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 48.071490205s Mar 2 00:15:52.496: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 50.074288756s Mar 2 00:15:54.499: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.076883301s Mar 2 00:15:56.502: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.079581143s Mar 2 00:15:58.504: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.08213883s Mar 2 00:16:00.507: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 58.084889964s Mar 2 00:16:02.510: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.087684842s Mar 2 00:16:04.513: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.090611461s Mar 2 00:16:06.516: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m4.093573798s STEP: Saw pod success Mar 2 00:16:06.516: INFO: Pod "pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b" satisfied condition "Succeeded or Failed" Mar 2 00:16:06.517: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b container test-container: STEP: delete the pod Mar 2 00:16:06.527: INFO: Waiting for pod pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b to disappear Mar 2 00:16:06.529: INFO: Pod pod-91ecf9c8-ac7f-4317-bcc8-78300511c33b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:06.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-14" for this suite. 
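The EmptyDir spec above creates a one-shot pod whose emptyDir volume is backed by tmpfs and checks that a non-root container sees the expected 0777 permissions before the pod reaches Succeeded. A rough equivalent of such a pod spec, built with the Kubernetes Go API types, is sketched below; the busybox image, the uid, and the stat command are illustrative stand-ins for the suite's own mounttest container, not what the framework actually ran.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: a tmpfs-backed emptyDir mounted at /mnt/volume, checked
	// by a one-shot container running as a non-root user.
	uid := int64(1000) // stand-in uid for the "(non-root, ...)" variant
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox", // stand-in image
				Command:         []string{"sh", "-c", "stat -c '%a' /mnt/volume"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(out != nil && len(out) > 0, string(out))
}

The long run of "Pending" poll lines above is just the framework waiting for this kind of pod to be scheduled, pull its image, run the check, and exit.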
• [SLOW TEST:64.710 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":41,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:16.504: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-d62c76f1-ba2c-4d2c-9438-54593fe7e534 STEP: Creating a pod to test consume secrets Mar 2 00:15:16.521: INFO: Waiting up to 5m0s for pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7" in namespace "secrets-6723" to be "Succeeded or Failed" Mar 2 00:15:16.523: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.716648ms Mar 2 00:15:18.528: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006497602s Mar 2 00:15:20.530: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008345402s Mar 2 00:15:22.532: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010836093s Mar 2 00:15:24.534: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012888673s Mar 2 00:15:26.537: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015288935s Mar 2 00:15:28.539: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.018049542s Mar 2 00:15:30.542: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021015119s Mar 2 00:15:32.545: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.024147134s Mar 2 00:15:34.549: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.028069309s Mar 2 00:15:36.552: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.030576812s Mar 2 00:15:38.553: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032246293s Mar 2 00:15:40.556: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.035134119s Mar 2 00:15:42.558: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.03707555s Mar 2 00:15:44.560: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.039203745s Mar 2 00:15:46.563: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.04182512s Mar 2 00:15:48.566: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.044384476s Mar 2 00:15:50.568: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.046718746s Mar 2 00:15:52.571: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.049430253s Mar 2 00:15:54.573: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.052202916s Mar 2 00:15:56.576: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.055091221s Mar 2 00:15:58.578: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.056906027s Mar 2 00:16:00.581: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.059518759s Mar 2 00:16:02.583: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.062164174s Mar 2 00:16:04.585: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.064159356s Mar 2 00:16:06.588: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.066784373s STEP: Saw pod success Mar 2 00:16:06.588: INFO: Pod "pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7" satisfied condition "Succeeded or Failed" Mar 2 00:16:06.590: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7 container secret-volume-test: STEP: delete the pod Mar 2 00:16:06.599: INFO: Waiting for pod pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7 to disappear Mar 2 00:16:06.601: INFO: Pod pod-secrets-4fa41347-fd02-41c8-8405-567bf0b8e7f7 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:06.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6723" for this suite. 
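The Secrets spec above mounts a secret as a volume with an explicit defaultMode and then verifies, from inside the pod, that the projected file carries that mode. The log does not show which mode the test requested, so the 0400 in the sketch below is purely illustrative; the point is only where defaultMode sits in the pod spec.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative values only; the secret name, mode and image are not from this run.
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-demo",
						DefaultMode: &mode, // applied to every file projected from the secret
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "stat -c '%a' /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Without a defaultMode the projected files fall back to the kubelet's default permissions, so setting it is what this spec is exercising.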
• [SLOW TEST:50.102 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:12.882: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should validate Deployment Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Deployment Mar 2 00:15:12.892: INFO: Creating simple deployment test-deployment-g6cgd Mar 2 00:15:12.901: INFO: deployment "test-deployment-g6cgd" doesn't have the required revision set Mar 2 00:15:14.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:16.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:18.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:20.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:22.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:24.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:26.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:28.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:30.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:32.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:34.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:36.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:38.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:40.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:42.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:44.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:46.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:48.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:50.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:52.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:54.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:56.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:58.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:00.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:02.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:04.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Getting /status Mar 2 00:16:06.914: INFO: Deployment test-deployment-g6cgd has Conditions: [{Available True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:16:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-g6cgd-764bc7c4b7" has successfully progressed.}] STEP: updating Deployment Status Mar 2 00:16:06.918: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 5, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 12, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-g6cgd-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Deployment status to be updated Mar 2 00:16:06.919: INFO: Observed &Deployment event: ADDED Mar 2 00:16:06.919: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-g6cgd-764bc7c4b7"} Mar 2 00:16:06.919: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.919: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-g6cgd-764bc7c4b7"} Mar 2 00:16:06.919: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Mar 2 00:16:06.920: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.920: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Mar 2 00:16:06.920: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-g6cgd-764bc7c4b7" is progressing.} Mar 2 00:16:06.920: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.920: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:16:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Mar 2 00:16:06.920: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-g6cgd-764bc7c4b7" has successfully progressed.} Mar 2 00:16:06.920: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.920: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:16:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} 
Mar 2 00:16:06.920: INFO: Observed Deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-g6cgd-764bc7c4b7" has successfully progressed.} Mar 2 00:16:06.920: INFO: Found Deployment test-deployment-g6cgd in namespace deployment-787 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Mar 2 00:16:06.920: INFO: Deployment test-deployment-g6cgd has an updated status STEP: patching the Statefulset Status Mar 2 00:16:06.920: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Mar 2 00:16:06.923: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} STEP: watching for the Deployment status to be patched Mar 2 00:16:06.924: INFO: Observed &Deployment event: ADDED Mar 2 00:16:06.924: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-g6cgd-764bc7c4b7"} Mar 2 00:16:06.924: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.924: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-g6cgd-764bc7c4b7"} Mar 2 00:16:06.924: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Mar 2 00:16:06.925: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.925: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Mar 2 00:16:06.925: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:15:12 +0000 UTC 2023-03-02 00:15:12 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-g6cgd-764bc7c4b7" is progressing.} Mar 2 00:16:06.925: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.925: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:16:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Mar 2 00:16:06.925: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-g6cgd-764bc7c4b7" has successfully progressed.} Mar 2 00:16:06.925: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.925: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:16:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Mar 2 00:16:06.925: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-03-02 00:16:05 +0000 UTC 2023-03-02 00:15:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-g6cgd-764bc7c4b7" has successfully progressed.} Mar 2 00:16:06.925: INFO: Observed deployment test-deployment-g6cgd in namespace deployment-787 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Mar 2 00:16:06.925: INFO: Observed &Deployment event: MODIFIED Mar 2 00:16:06.925: INFO: Found deployment test-deployment-g6cgd in namespace deployment-787 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } Mar 2 00:16:06.925: INFO: Deployment test-deployment-g6cgd has a patched status [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Mar 2 00:16:06.927: INFO: Deployment "test-deployment-g6cgd": &Deployment{ObjectMeta:{test-deployment-g6cgd deployment-787 873ce286-b6ef-477e-ab5f-e7b54ba76701 4105 1 2023-03-02 00:15:12 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-03-02 00:15:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-03-02 00:16:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} 
{[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049fa208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 2 00:16:06.930: INFO: New ReplicaSet "test-deployment-g6cgd-764bc7c4b7" of Deployment "test-deployment-g6cgd": &ReplicaSet{ObjectMeta:{test-deployment-g6cgd-764bc7c4b7 deployment-787 2a915abe-e4af-477d-9382-53bf51e78c30 4057 1 2023-03-02 00:15:12 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-g6cgd 873ce286-b6ef-477e-ab5f-e7b54ba76701 0xc0047c2947 0xc0047c2948}] [] [{k3s Update apps/v1 2023-03-02 00:15:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873ce286-b6ef-477e-ab5f-e7b54ba76701\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:05 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0047c2a08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:16:06.933: INFO: Pod "test-deployment-g6cgd-764bc7c4b7-x8l4k" is available: &Pod{ObjectMeta:{test-deployment-g6cgd-764bc7c4b7-x8l4k test-deployment-g6cgd-764bc7c4b7- deployment-787 574ffdc1-2fb6-4dec-b9b9-f95875ccce93 4056 0 2023-03-02 00:15:12 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-g6cgd-764bc7c4b7 2a915abe-e4af-477d-9382-53bf51e78c30 0xc004a203a7 0xc004a203a8}] [] [{k3s Update v1 2023-03-02 00:15:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a915abe-e4af-477d-9382-53bf51e78c30\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zc79j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zc79j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.103,StartTime:2023-03-02 00:15:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://230ce34b7869cbd17bbde713853456635a387700d036588ec475d57193898501,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:06.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-787" for this suite. • [SLOW TEST:54.064 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should validate Deployment Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":3,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:49.710: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 2 00:15:49.739: INFO: Waiting up to 5m0s for pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390" in namespace "emptydir-806" to be "Succeeded or Failed" Mar 2 00:15:49.741: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113541ms Mar 2 00:15:51.744: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00452292s Mar 2 00:15:53.747: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.008351744s Mar 2 00:15:55.750: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011026278s Mar 2 00:15:57.753: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013624677s Mar 2 00:15:59.755: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01569776s Mar 2 00:16:01.758: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 12.01850108s Mar 2 00:16:03.760: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021005223s Mar 2 00:16:05.763: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023908124s Mar 2 00:16:07.766: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.026870889s STEP: Saw pod success Mar 2 00:16:07.766: INFO: Pod "pod-9ec854c3-ab35-4f86-8d0b-142e348c2390" satisfied condition "Succeeded or Failed" Mar 2 00:16:07.768: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-9ec854c3-ab35-4f86-8d0b-142e348c2390 container test-container: STEP: delete the pod Mar 2 00:16:07.777: INFO: Waiting for pod pod-9ec854c3-ab35-4f86-8d0b-142e348c2390 to disappear Mar 2 00:16:07.780: INFO: Pod pod-9ec854c3-ab35-4f86-8d0b-142e348c2390 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:07.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-806" for this suite. 
• [SLOW TEST:18.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":77,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:18.434: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:15:19.007: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:15:19.016: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:15:21.023: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:23.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:25.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:27.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:29.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:31.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:33.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:35.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:37.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:39.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:41.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:43.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:45.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:47.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:49.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:51.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:53.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:55.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:57.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:59.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:01.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:03.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 19, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:16:06.033: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:06.034: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Registering the mutating webhook for custom resource 
e2e-test-webhook-8524-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:09.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3921" for this suite. STEP: Destroying namespace "webhook-3921-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:50.680 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.774: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename crd-webhook W0302 00:15:02.205373 198 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.205: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 2 00:15:02.391: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 2 00:15:04.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:06.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:08.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:10.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:12.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:14.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:16.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:18.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:20.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:22.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:24.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:26.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:28.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:30.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:32.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:34.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:36.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:38.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:40.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:42.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:44.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:46.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:48.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:50.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:52.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:54.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:56.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:58.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:00.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:02.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:04.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:06.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:16:09.406: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:09.409: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:12.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5331" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:70.750 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":1,"skipped":41,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.813: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 STEP: Building a namespace api object, basename dns W0302 00:15:02.423079 214 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.423: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-650.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-650.svc.cluster.local;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-650.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-650.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-650.svc.cluster.local;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:16:10.517: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local from pod dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.519: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local from pod dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.521: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-650.svc.cluster.local from pod dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.523: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-650.svc.cluster.local from pod dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.525: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local from pod 
dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.526: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local from pod dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.528: INFO: Unable to read jessie_udp@dns-test-service-2.dns-650.svc.cluster.local from pod dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.530: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-650.svc.cluster.local from pod dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888: the server could not find the requested resource (get pods dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888) Mar 2 00:16:10.530: INFO: Lookups using dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local wheezy_udp@dns-test-service-2.dns-650.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-650.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local jessie_udp@dns-test-service-2.dns-650.svc.cluster.local jessie_tcp@dns-test-service-2.dns-650.svc.cluster.local] Mar 2 00:16:15.545: INFO: DNS probes using dns-650/dns-test-78f3df98-7c95-4ce8-9790-bc1597f12888 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:15.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-650" for this suite. • [SLOW TEST:73.756 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":1,"skipped":41,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.695: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 STEP: Building a namespace api object, basename downward-api W0302 00:15:01.800185 280 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.800: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
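The wheezy and jessie dig loops above resolve names like dns-querier-2.dns-test-service-2.dns-650.svc.cluster.local only because the test pairs a headless Service with a Pod whose hostname and subdomain match it; once the pod is running, each A-record query returns an answer and the probe writes its OK result file. The sketch below shows that pairing with the names taken from the log; the selector label, port, and agnhost image tag are illustrative assumptions rather than the exact e2e fixture.

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createSubdomainFixture creates the pieces behind the Subdomain DNS records
// probed above: a headless Service plus a Pod whose hostname/subdomain match
// the Service name, so per-pod A records are published under it.
func createSubdomainFixture(ctx context.Context, cs kubernetes.Interface) error {
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: "dns-650"},
        Spec: corev1.ServiceSpec{
            ClusterIP: corev1.ClusterIPNone, // headless: DNS answers with pod IPs
            Selector:  map[string]string{"dns-test": "true"},
            Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
        },
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "dns-querier-2",
            Namespace: "dns-650",
            Labels:    map[string]string{"dns-test": "true"},
        },
        Spec: corev1.PodSpec{
            Hostname:  "dns-querier-2",
            Subdomain: "dns-test-service-2", // must equal the headless Service name
            Containers: []corev1.Container{{
                Name:    "querier",
                Image:   "registry.k8s.io/e2e-test-images/agnhost:2.39", // placeholder image
                Command: []string{"sleep", "3600"},
            }},
        },
    }
    if _, err := cs.CoreV1().Services("dns-650").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
        return err
    }
    _, err := cs.CoreV1().Pods("dns-650").Create(ctx, pod, metav1.CreateOptions{})
    return err
}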
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:15:01.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842" in namespace "downward-api-8252" to be "Succeeded or Failed" Mar 2 00:15:01.907: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 21.515261ms Mar 2 00:15:03.910: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02474793s Mar 2 00:15:05.912: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027091826s Mar 2 00:15:07.915: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03009289s Mar 2 00:15:09.917: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032056556s Mar 2 00:15:11.920: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034597252s Mar 2 00:15:13.922: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 12.036727077s Mar 2 00:15:15.925: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 14.039404334s Mar 2 00:15:17.928: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 16.042198504s Mar 2 00:15:19.931: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 18.046026708s Mar 2 00:15:21.934: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 20.04835621s Mar 2 00:15:23.936: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 22.050296444s Mar 2 00:15:25.938: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 24.053020767s Mar 2 00:15:27.941: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 26.055873876s Mar 2 00:15:29.944: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 28.058220145s Mar 2 00:15:31.947: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 30.061494326s Mar 2 00:15:33.950: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 32.06490067s Mar 2 00:15:35.953: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.067523697s Mar 2 00:15:37.956: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 36.070360602s Mar 2 00:15:39.958: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 38.072870346s Mar 2 00:15:41.960: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 40.074696575s Mar 2 00:15:43.962: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 42.076987877s Mar 2 00:15:45.965: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 44.079791394s Mar 2 00:15:47.968: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 46.082535778s Mar 2 00:15:49.971: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 48.08536817s Mar 2 00:15:51.974: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 50.088165306s Mar 2 00:15:53.976: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 52.090647485s Mar 2 00:15:55.980: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 54.094409694s Mar 2 00:15:57.983: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 56.097159483s Mar 2 00:15:59.986: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 58.10038816s Mar 2 00:16:01.988: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.102644153s Mar 2 00:16:03.990: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.104671983s Mar 2 00:16:05.992: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.106949538s Mar 2 00:16:07.996: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.110171637s Mar 2 00:16:09.999: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.113435274s Mar 2 00:16:12.007: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.121732416s Mar 2 00:16:14.009: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.123909622s Mar 2 00:16:16.012: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m14.12704626s STEP: Saw pod success Mar 2 00:16:16.012: INFO: Pod "downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842" satisfied condition "Succeeded or Failed" Mar 2 00:16:16.014: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842 container client-container: STEP: delete the pod Mar 2 00:16:16.024: INFO: Waiting for pod downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842 to disappear Mar 2 00:16:16.026: INFO: Pod downwardapi-volume-482d1885-b335-4318-933b-e324ea49e842 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:16.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8252" for this suite. • [SLOW TEST:74.337 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.697: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename var-expansion W0302 00:15:01.980167 425 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.980: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test substitution in container's command Mar 2 00:15:02.029: INFO: Waiting up to 5m0s for pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31" in namespace "var-expansion-9342" to be "Succeeded or Failed" Mar 2 00:15:02.041: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 11.970688ms Mar 2 00:15:04.043: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013952786s Mar 2 00:15:06.045: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016303254s Mar 2 00:15:08.047: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018387839s Mar 2 00:15:10.050: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021704775s Mar 2 00:15:12.052: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.023629693s Mar 2 00:15:14.056: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026795596s Mar 2 00:15:16.058: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02977166s Mar 2 00:15:18.061: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 16.032679596s Mar 2 00:15:20.064: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 18.035767888s Mar 2 00:15:22.067: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 20.038444137s Mar 2 00:15:24.069: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 22.040468572s Mar 2 00:15:26.072: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 24.043172037s Mar 2 00:15:28.075: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 26.046249523s Mar 2 00:15:30.078: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 28.049323363s Mar 2 00:15:32.082: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 30.053262517s Mar 2 00:15:34.084: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 32.055368413s Mar 2 00:15:36.087: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 34.058558047s Mar 2 00:15:38.091: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 36.061819938s Mar 2 00:15:40.094: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 38.064804264s Mar 2 00:15:42.096: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 40.067481399s Mar 2 00:15:44.098: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 42.069620613s Mar 2 00:15:46.101: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 44.072544058s Mar 2 00:15:48.106: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 46.077256438s Mar 2 00:15:50.109: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 48.080447492s Mar 2 00:15:52.112: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 50.083057083s Mar 2 00:15:54.114: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 52.084919285s Mar 2 00:15:56.117: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 54.088008388s Mar 2 00:15:58.119: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.090738049s Mar 2 00:16:00.123: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 58.093800001s Mar 2 00:16:02.125: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.096461684s Mar 2 00:16:04.127: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.098416857s Mar 2 00:16:06.130: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.101782047s Mar 2 00:16:08.134: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.104864071s Mar 2 00:16:10.137: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.108010797s Mar 2 00:16:12.139: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.110645382s Mar 2 00:16:14.142: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.113354549s Mar 2 00:16:16.145: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m14.115869117s STEP: Saw pod success Mar 2 00:16:16.145: INFO: Pod "var-expansion-35206066-209e-4652-86dc-81b663f66e31" satisfied condition "Succeeded or Failed" Mar 2 00:16:16.146: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod var-expansion-35206066-209e-4652-86dc-81b663f66e31 container dapi-container: STEP: delete the pod Mar 2 00:16:16.155: INFO: Waiting for pod var-expansion-35206066-209e-4652-86dc-81b663f66e31 to disappear Mar 2 00:16:16.158: INFO: Pod var-expansion-35206066-209e-4652-86dc-81b663f66e31 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:16.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9342" for this suite. • [SLOW TEST:74.465 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":26,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.860: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename configmap W0302 00:15:02.464207 340 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.464: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
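The var-expansion pod that just succeeded exercises $(VAR) substitution in a container command: the kubelet expands references to the container's own environment variables before the process starts. The sketch below shows the shape of such a pod; the MESSAGE variable, busybox image and echo command are illustrative assumptions, while the container name dapi-container and the namespace come from the log.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expansionPod returns a pod whose command references $(MESSAGE); Kubernetes
// substitutes the MESSAGE env var into the command before starting the
// container, which is the behaviour the Variable Expansion test verifies.
func expansionPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo", Namespace: "var-expansion-9342"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox:1.36", // placeholder image
                Command: []string{"sh", "-c", "echo $(MESSAGE)"},
                Env: []corev1.EnvVar{{
                    Name:  "MESSAGE",
                    Value: "hello from an expanded command",
                }},
            }},
        },
    }
}

The test then reads the container's logs (as in the "Trying to get logs" line above) and asserts that the expanded value appears in the output.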
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap configmap-2465/configmap-test-2a227311-d041-4528-8d4a-4a540ba1313d STEP: Creating a pod to test consume configMaps Mar 2 00:15:02.543: INFO: Waiting up to 5m0s for pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9" in namespace "configmap-2465" to be "Succeeded or Failed" Mar 2 00:15:02.561: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.275372ms Mar 2 00:15:04.563: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020160889s Mar 2 00:15:06.565: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022276485s Mar 2 00:15:08.567: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024285551s Mar 2 00:15:10.570: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027143939s Mar 2 00:15:12.572: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028913755s Mar 2 00:15:14.574: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030668092s Mar 2 00:15:16.576: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033543158s Mar 2 00:15:18.579: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.036117865s Mar 2 00:15:20.582: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.038887474s Mar 2 00:15:22.584: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.041035605s Mar 2 00:15:24.586: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.043382424s Mar 2 00:15:26.588: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.045399659s Mar 2 00:15:28.591: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.048206745s Mar 2 00:15:30.594: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.050914884s Mar 2 00:15:32.597: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.053939195s Mar 2 00:15:34.599: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.056472515s Mar 2 00:15:36.602: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.059054118s Mar 2 00:15:38.604: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.061394453s Mar 2 00:15:40.607: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 38.064133677s Mar 2 00:15:42.611: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 40.067795614s Mar 2 00:15:44.612: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 42.069304344s Mar 2 00:15:46.615: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 44.072015338s Mar 2 00:15:48.618: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 46.074570176s Mar 2 00:15:50.620: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 48.077029644s Mar 2 00:15:52.623: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 50.08034091s Mar 2 00:15:54.626: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 52.082683096s Mar 2 00:15:56.628: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 54.085284546s Mar 2 00:15:58.630: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 56.087444157s Mar 2 00:16:00.633: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.090480844s Mar 2 00:16:02.636: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.093132141s Mar 2 00:16:04.639: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.095744013s Mar 2 00:16:06.641: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.098540016s Mar 2 00:16:08.644: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.101384471s Mar 2 00:16:10.647: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.104216472s Mar 2 00:16:12.650: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.107092919s Mar 2 00:16:14.652: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.1094556s Mar 2 00:16:16.656: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m14.112780409s STEP: Saw pod success Mar 2 00:16:16.656: INFO: Pod "pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9" satisfied condition "Succeeded or Failed" Mar 2 00:16:16.658: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9 container env-test: STEP: delete the pod Mar 2 00:16:17.185: INFO: Waiting for pod pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9 to disappear Mar 2 00:16:17.188: INFO: Pod pod-configmaps-c87f1183-df18-4013-bb71-0bd5c707f5b9 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:17.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2465" for this suite. • [SLOW TEST:75.337 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.799: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename projected W0302 00:15:02.335947 366 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.336: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:15:02.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc" in namespace "projected-2142" to be "Succeeded or Failed" Mar 2 00:15:02.377: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.829087ms Mar 2 00:15:04.379: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014545583s Mar 2 00:15:06.383: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01828034s Mar 2 00:15:08.386: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021348324s Mar 2 00:15:10.389: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.024280392s Mar 2 00:15:12.392: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02738514s Mar 2 00:15:14.394: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029990581s Mar 2 00:15:16.398: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033294229s Mar 2 00:15:18.401: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.03632981s Mar 2 00:15:20.403: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.038542952s Mar 2 00:15:22.406: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.041922789s Mar 2 00:15:24.409: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.045036212s Mar 2 00:15:26.413: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.048514399s Mar 2 00:15:28.416: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.051711584s Mar 2 00:15:30.419: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.054591057s Mar 2 00:15:32.422: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.058098855s Mar 2 00:15:34.426: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.061302286s Mar 2 00:15:36.428: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.063336879s Mar 2 00:15:38.431: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.066114814s Mar 2 00:15:40.434: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.069250279s Mar 2 00:15:42.437: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.072434087s Mar 2 00:15:44.439: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.074243727s Mar 2 00:15:46.441: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.077096631s Mar 2 00:15:48.445: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.080830765s Mar 2 00:15:50.449: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 48.084423894s Mar 2 00:15:52.451: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.087019481s Mar 2 00:15:54.453: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.088871696s Mar 2 00:15:56.456: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 54.091536616s Mar 2 00:15:58.459: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.094788309s Mar 2 00:16:00.462: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.097668277s Mar 2 00:16:02.464: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.099530024s Mar 2 00:16:04.466: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.101420988s Mar 2 00:16:06.468: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.103798064s Mar 2 00:16:08.471: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.107039321s Mar 2 00:16:10.475: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.110599394s Mar 2 00:16:12.478: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.113616116s Mar 2 00:16:14.481: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.116197051s Mar 2 00:16:16.484: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.119449361s Mar 2 00:16:18.487: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m16.122664971s STEP: Saw pod success Mar 2 00:16:18.487: INFO: Pod "downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc" satisfied condition "Succeeded or Failed" Mar 2 00:16:18.489: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc container client-container: STEP: delete the pod Mar 2 00:16:18.498: INFO: Waiting for pod downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc to disappear Mar 2 00:16:18.501: INFO: Pod downwardapi-volume-bb411b4b-346c-4bb8-879a-f34f4c03cadc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:18.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2142" for this suite. 
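The downwardapi-volume pod in projected-2142 reads its own memory limit out of a projected downwardAPI volume, and the test asserts on the file's contents once the pod succeeds. Below is a sketch of that volume shape; the 64Mi limit, mount path and busybox image are illustrative assumptions, while the container name client-container and the namespace come from the log.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitPod builds a pod in the spirit of the Projected downwardAPI test
// above: a projected downwardAPI volume exposes the container's own memory
// limit as a file, and the container simply prints that file.
func memoryLimitPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo", Namespace: "projected-2142"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox:1.36", // placeholder image
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("64Mi"),
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}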
• [SLOW TEST:76.708 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":27,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.695: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename emptydir W0302 00:15:01.850176 308 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.850: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir volume type on node default medium Mar 2 00:15:01.938: INFO: Waiting up to 5m0s for pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca" in namespace "emptydir-1092" to be "Succeeded or Failed" Mar 2 00:15:01.956: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 17.234958ms Mar 2 00:15:03.959: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020159674s Mar 2 00:15:05.961: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023003568s Mar 2 00:15:07.966: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027600521s Mar 2 00:15:09.968: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029862815s Mar 2 00:15:12.023: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.084681554s Mar 2 00:15:14.025: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.086851868s Mar 2 00:15:16.027: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.088947341s Mar 2 00:15:18.030: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.091439268s Mar 2 00:15:20.032: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 18.094094166s Mar 2 00:15:22.035: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 20.096959344s Mar 2 00:15:24.038: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.09964306s Mar 2 00:15:26.040: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 24.102016909s Mar 2 00:15:28.043: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 26.104900497s Mar 2 00:15:30.047: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 28.108809051s Mar 2 00:15:32.050: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 30.112012338s Mar 2 00:15:34.054: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 32.115362886s Mar 2 00:15:36.057: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 34.118223315s Mar 2 00:15:38.060: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 36.121374927s Mar 2 00:15:40.063: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 38.124816861s Mar 2 00:15:42.067: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 40.128527197s Mar 2 00:15:44.069: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 42.130556552s Mar 2 00:15:46.072: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 44.133440572s Mar 2 00:15:48.075: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 46.136796077s Mar 2 00:15:50.078: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 48.139851994s Mar 2 00:15:52.081: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 50.142998814s Mar 2 00:15:54.085: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 52.146607852s Mar 2 00:15:56.088: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 54.149186616s Mar 2 00:15:58.090: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 56.152047005s Mar 2 00:16:00.093: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 58.154116292s Mar 2 00:16:02.095: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.156772766s Mar 2 00:16:04.097: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.158583144s Mar 2 00:16:06.100: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.161579404s Mar 2 00:16:08.103: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.164712013s Mar 2 00:16:10.106: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.167830405s Mar 2 00:16:12.109: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m10.170193736s Mar 2 00:16:14.111: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.172874709s Mar 2 00:16:16.115: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.17643969s Mar 2 00:16:18.118: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.179520983s Mar 2 00:16:20.123: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m18.184175794s STEP: Saw pod success Mar 2 00:16:20.123: INFO: Pod "pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca" satisfied condition "Succeeded or Failed" Mar 2 00:16:20.124: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca container test-container: STEP: delete the pod Mar 2 00:16:20.135: INFO: Waiting for pod pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca to disappear Mar 2 00:16:20.138: INFO: Pod pod-cfbfdcc0-f9fc-49ef-b0c5-0a4c5c8026ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:20.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1092" for this suite. • [SLOW TEST:78.449 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.809: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 STEP: Building a namespace api object, basename downward-api W0302 00:15:02.363363 206 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.363: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:15:02.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33" in namespace "downward-api-5151" to be "Succeeded or Failed" Mar 2 00:15:02.409: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.811876ms Mar 2 00:15:04.423: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025465656s Mar 2 00:15:06.426: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028230242s Mar 2 00:15:08.428: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030395524s Mar 2 00:15:10.430: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032937652s Mar 2 00:15:12.433: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 10.035731821s Mar 2 00:15:14.436: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 12.038195443s Mar 2 00:15:16.439: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 14.041612737s Mar 2 00:15:18.442: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 16.044758086s Mar 2 00:15:20.445: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 18.047715221s Mar 2 00:15:22.447: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 20.049585443s Mar 2 00:15:24.449: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 22.051411684s Mar 2 00:15:26.451: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 24.053917736s Mar 2 00:15:28.454: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 26.056449087s Mar 2 00:15:30.456: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 28.0589594s Mar 2 00:15:32.460: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 30.062591063s Mar 2 00:15:34.463: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 32.065510284s Mar 2 00:15:36.466: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 34.068286996s Mar 2 00:15:38.468: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 36.070966565s Mar 2 00:15:40.471: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 38.073900658s Mar 2 00:15:42.474: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 40.077013963s Mar 2 00:15:44.476: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 42.078726379s Mar 2 00:15:46.479: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.081808066s Mar 2 00:15:48.482: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 46.084688111s Mar 2 00:15:50.485: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 48.087782262s Mar 2 00:15:52.488: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 50.09018872s Mar 2 00:15:54.490: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 52.092123442s Mar 2 00:15:56.492: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 54.094826104s Mar 2 00:15:58.495: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 56.09772635s Mar 2 00:16:00.498: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 58.100741184s Mar 2 00:16:02.501: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.103451232s Mar 2 00:16:04.504: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.106090946s Mar 2 00:16:06.506: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.108824029s Mar 2 00:16:08.508: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.110730973s Mar 2 00:16:10.512: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.114280807s Mar 2 00:16:12.515: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.117831763s Mar 2 00:16:14.518: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.120565258s Mar 2 00:16:16.521: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.124035381s Mar 2 00:16:18.525: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.127528969s Mar 2 00:16:20.528: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m18.130659569s STEP: Saw pod success Mar 2 00:16:20.528: INFO: Pod "downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33" satisfied condition "Succeeded or Failed" Mar 2 00:16:20.530: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33 container client-container: STEP: delete the pod Mar 2 00:16:20.538: INFO: Waiting for pod downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33 to disappear Mar 2 00:16:20.541: INFO: Pod downwardapi-volume-63bda82b-ba21-4437-98a8-b42f3e398a33 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:20.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5151" for this suite. • [SLOW TEST:78.737 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.818: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename emptydir W0302 00:15:02.408030 202 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.408: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 2 00:15:02.473: INFO: Waiting up to 5m0s for pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832" in namespace "emptydir-9071" to be "Succeeded or Failed" Mar 2 00:15:02.491: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 17.613737ms Mar 2 00:15:04.493: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019972081s Mar 2 00:15:06.496: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022888155s Mar 2 00:15:08.499: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025949056s Mar 2 00:15:10.502: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028698849s Mar 2 00:15:12.505: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031362891s Mar 2 00:15:14.507: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.034063233s Mar 2 00:15:16.509: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 14.036112892s Mar 2 00:15:18.513: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 16.039405772s Mar 2 00:15:20.515: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 18.041582976s Mar 2 00:15:22.517: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 20.044176441s Mar 2 00:15:24.520: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 22.047131635s Mar 2 00:15:26.524: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 24.050344779s Mar 2 00:15:28.526: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 26.053170941s Mar 2 00:15:30.530: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 28.056932799s Mar 2 00:15:32.533: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 30.059665767s Mar 2 00:15:34.537: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 32.063505206s Mar 2 00:15:36.539: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 34.065755339s Mar 2 00:15:38.542: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 36.068479302s Mar 2 00:15:40.545: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 38.071490542s Mar 2 00:15:42.548: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 40.074638713s Mar 2 00:15:44.551: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 42.077482286s Mar 2 00:15:46.554: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 44.080590424s Mar 2 00:15:48.558: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 46.084326704s Mar 2 00:15:50.561: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 48.087332969s Mar 2 00:15:52.564: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 50.09084714s Mar 2 00:15:54.567: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 52.093610906s Mar 2 00:15:56.570: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 54.096685584s Mar 2 00:15:58.573: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 56.099725736s Mar 2 00:16:00.575: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 58.102155691s Mar 2 00:16:02.579: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.105496105s Mar 2 00:16:04.581: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m2.107367802s Mar 2 00:16:06.583: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.10991948s Mar 2 00:16:08.586: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.112577291s Mar 2 00:16:10.591: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.117472719s Mar 2 00:16:12.594: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.120618386s Mar 2 00:16:14.596: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.122893041s Mar 2 00:16:16.599: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.126121335s Mar 2 00:16:18.602: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.128900198s Mar 2 00:16:20.606: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m18.132384579s STEP: Saw pod success Mar 2 00:16:20.606: INFO: Pod "pod-c24c1526-82ec-4f64-9633-c2da33b1b832" satisfied condition "Succeeded or Failed" Mar 2 00:16:20.607: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-c24c1526-82ec-4f64-9633-c2da33b1b832 container test-container: STEP: delete the pod Mar 2 00:16:20.622: INFO: Waiting for pod pod-c24c1526-82ec-4f64-9633-c2da33b1b832 to disappear Mar 2 00:16:20.625: INFO: Pod pod-c24c1526-82ec-4f64-9633-c2da33b1b832 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:20.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9071" for this suite. 
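For reference, the emptydir-9071 test above ("should support (root,0644,tmpfs)") boils down to creating a pod that mounts a memory-backed emptyDir volume, writes a 0644 file into it as the image's default (root) user, and exits so the pod reaches the Succeeded phase being polled for. A minimal sketch of that shape using the core/v1 Go types; the image, command and object names are illustrative assumptions, not the exact spec the e2e framework generates:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod returns a pod that mounts a tmpfs-backed emptyDir volume,
// writes a world-readable (0644) file into it, then exits so the pod can
// reach the Succeeded phase that the test polls for.
func emptyDirTmpfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// Medium "Memory" is what makes the emptyDir tmpfs-backed
						// rather than backed by node disk.
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c",
					"echo hi > /mnt/test/file && chmod 0644 /mnt/test/file && ls -l /mnt/test/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
}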
• [SLOW TEST:78.817 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:20.645: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Mar 2 00:16:20.671: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Mar 2 00:16:20.694: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:20.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-8709" for this suite. 
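The runtimeclass-8709 block above exercises the node.k8s.io/v1 RuntimeClass API end to end (create, get, list, watch, patch, update, delete, deleteCollection). A rough client-go sketch of just the create-and-delete portion; the object name and handler are invented for illustration, and in a real cluster the handler would have to match one configured in the node's CRI runtime:

package sketch

import (
	"context"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAndDeleteRuntimeClass creates a cluster-scoped RuntimeClass that maps
// to a hypothetical "runc" handler, then deletes it again, mirroring the
// creating/deleting steps in the log above.
func createAndDeleteRuntimeClass(ctx context.Context, cs kubernetes.Interface) error {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc", // must correspond to a handler known to the container runtime
	}
	if _, err := cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return err
	}
	return cs.NodeV1().RuntimeClasses().Delete(ctx, rc.Name, metav1.DeleteOptions{})
}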
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:20.726: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 2 00:16:20.753: INFO: starting watch STEP: patching STEP: updating Mar 2 00:16:20.759: INFO: waiting for watch events with expected annotations Mar 2 00:16:20.759: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:20.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5505" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":3,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:09.157: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:16:09.505: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:16:11.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:13.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:15.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:16:18.526: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:18.529: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2832-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:21.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8238" for this suite. STEP: Destroying namespace "webhook-8238-markers" for this suite. 
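The webhook-8238 test above registers a mutating admission webhook for a custom resource group and then flips the CRD's storage version from v1 to v2 while patching objects. The registration step corresponds roughly to an admissionregistration.k8s.io/v1 object of the following shape; the namespace, path and CA bundle are placeholders rather than the values the suite generates at runtime, and only the service name and API group are taken from the log:

package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crMutatingWebhookConfig registers a webhook that mutates CREATE and UPDATE
// requests for every version of a custom resource API group, forwarding them
// to the in-cluster webhook service deployed earlier in the test.
func crMutatingWebhookConfig() *admissionregistrationv1.MutatingWebhookConfiguration {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutating-custom-resource" // placeholder path on the webhook server
	return &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "example-cr-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-custom-resource.webhook.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-8238", // placeholder; the test namespace in this run
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<PEM-encoded CA>"), // placeholder
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"*"},
					Resources:   []string{"*"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}

The later webhook-2719 test ("fail closed") differs mainly in registering a webhook whose backend the apiserver cannot reach and setting its failurePolicy to Fail, so creating a configmap in the marked namespace is rejected outright.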
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.498 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":3,"skipped":120,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.841: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename webhook W0302 00:15:02.509247 200 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.509: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:15:02.849: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:15:02.860: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:15:04.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:06.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:08.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:10.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:12.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:14.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:16.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:18.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:20.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:22.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:24.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:26.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:28.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:30.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} 
Mar 2 00:15:32.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:34.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:36.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:38.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:40.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:42.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:44.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:46.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:48.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:50.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:52.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:54.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:56.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:58.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:00.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:02.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:04.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:06.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:08.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:10.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:12.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:14.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:16.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:18.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:16:21.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected 
by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:21.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2719" for this suite. STEP: Destroying namespace "webhook-2719-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:80.099 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:21.958: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:21.970: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:22.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7372" for this suite. 
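The custom-resource-definition-7372 case above only needs a throwaway CRD to be created and deleted through the apiextensions-apiserver client. A compact sketch of such a minimal definition; the group, kind and plural here are invented for illustration and are not the names the framework generates:

package sketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// minimalCRD defines a namespaced custom resource with a single served and
// stored version and a bare "type: object" schema, which is about the least
// a v1 CustomResourceDefinition can carry and still be accepted.
func minimalCRD() *apiextensionsv1.CustomResourceDefinition {
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}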
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":2,"skipped":48,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:03.042: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Given a Pod with a 'name' label pod-adoption-release is created Mar 2 00:15:03.059: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:05.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:07.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:09.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:11.064: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:13.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:15.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:17.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:19.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:21.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:23.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:25.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:27.064: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:29.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:31.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:33.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:35.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:37.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:39.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:41.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be 
Running (with Ready = true) Mar 2 00:15:43.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:45.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:47.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:49.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:51.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:53.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:55.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:57.061: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:59.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:01.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:03.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:05.064: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:07.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:09.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:11.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:13.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:15.062: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:17.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:19.063: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:21.063: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 2 00:16:22.078: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:23.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6016" for this suite. 
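The replicaset-6016 steps above spell out the adoption/release mechanism: a bare pod carrying a 'name' label is created first, a ReplicaSet whose selector matches that label adopts it by setting itself as the pod's controller owner reference, and changing the pod's label afterwards makes the controller release it and create a replacement. A sketch of the two objects involved, reusing the pod name from the log but assuming an illustrative image and label value:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var adoptionLabels = map[string]string{"name": "pod-adoption-release"}

// orphanPod is created before the ReplicaSet; because its labels match the
// selector below, the ReplicaSet controller adopts it instead of creating a
// new replica.
func orphanPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: adoptionLabels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "web",
				Image: "nginx", // illustrative; the suite uses its own test image
			}},
		},
	}
}

// adoptingReplicaSet wants exactly one replica matching the same labels.
// Changing the pod's "name" label afterwards makes the controller drop its
// owner reference (release the pod) and spin up a fresh replacement.
func adoptingReplicaSet() *appsv1.ReplicaSet {
	one := int32(1)
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: adoptionLabels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: adoptionLabels},
				Spec:       orphanPod().Spec,
			},
		},
	}
}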
• [SLOW TEST:80.053 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:12.735: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:15:12.750: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:14.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:16.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:18.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:20.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:22.752: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:24.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:26.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:28.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:30.755: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:32.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:34.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:36.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:38.753: INFO: The status 
of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:40.755: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:42.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:44.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:46.755: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:48.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:50.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:52.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:54.755: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:56.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:58.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:00.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:02.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:04.752: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:06.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:08.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:10.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:12.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:14.752: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:16.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:18.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:20.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:22.753: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Pending, waiting for it to be Running 
(with Ready = true) Mar 2 00:16:24.754: INFO: The status of Pod test-webserver-3a66c094-3306-432c-9543-57dcc4e2e410 is Running (Ready = true) Mar 2 00:16:24.755: INFO: Container started at 2023-03-02 00:15:34 +0000 UTC, pod became ready at 2023-03-02 00:15:51 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:24.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6217" for this suite. • [SLOW TEST:72.025 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:24.763: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:24.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3805" for this suite. 
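The container-probe spec that finished above starts a test-webserver pod with a readiness probe and verifies that the pod does not report Ready before the probe's initial delay and that the container never restarts. A minimal sketch of such a pod, built with the Python Kubernetes client, follows; the image, port, 30-second delay and namespace are illustrative assumptions rather than the values the suite uses.

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/", port=80),
    initial_delay_seconds=30,                  # the pod must not become Ready before this delay
    period_seconds=5)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="readiness-delay-demo"),
    spec=client.V1PodSpec(containers=[client.V1Container(
        name="webserver",
        image="nginx",
        ports=[client.V1ContainerPort(container_port=80)],
        readiness_probe=probe)]))
core.create_namespaced_pod("default", pod)

# Once the pod is Running and Ready, the container must still show zero restarts,
# and the Ready transition should lag the container start by at least the delay.
status = core.read_namespaced_pod("readiness-delay-demo", "default").status
assert status.container_statuses[0].restart_count == 0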
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:02.626: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name projected-configmap-test-volume-1b56cde9-7fce-4691-9137-6404e3cb8a29 STEP: Creating a pod to test consume configMaps Mar 2 00:15:02.794: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af" in namespace "projected-1989" to be "Succeeded or Failed" Mar 2 00:15:02.798: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.941289ms Mar 2 00:15:04.800: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005660843s Mar 2 00:15:06.803: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008265358s Mar 2 00:15:08.804: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010051452s Mar 2 00:15:10.808: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013558473s Mar 2 00:15:12.810: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015761071s Mar 2 00:15:14.813: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 12.018588496s Mar 2 00:15:16.815: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 14.020954748s Mar 2 00:15:18.818: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023518426s Mar 2 00:15:20.857: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 18.063169601s Mar 2 00:15:22.861: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 20.066684537s Mar 2 00:15:24.864: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 22.069650253s Mar 2 00:15:26.867: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 24.072452379s Mar 2 00:15:28.870: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.075865045s Mar 2 00:15:30.874: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 28.079458877s Mar 2 00:15:32.877: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 30.083016702s Mar 2 00:15:34.881: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 32.086287663s Mar 2 00:15:36.884: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 34.089728959s Mar 2 00:15:38.886: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 36.09189764s Mar 2 00:15:40.889: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 38.094493875s Mar 2 00:15:42.891: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 40.096801002s Mar 2 00:15:44.894: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 42.099475596s Mar 2 00:15:46.896: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 44.102159751s Mar 2 00:15:48.900: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 46.105203158s Mar 2 00:15:50.902: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 48.108168177s Mar 2 00:15:52.905: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 50.110995776s Mar 2 00:15:54.909: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 52.11493775s Mar 2 00:15:56.912: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 54.117707401s Mar 2 00:15:58.915: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 56.120504754s Mar 2 00:16:00.918: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 58.123307218s Mar 2 00:16:02.921: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.126322587s Mar 2 00:16:04.924: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.129246072s Mar 2 00:16:06.927: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.132680548s Mar 2 00:16:08.929: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.13491696s Mar 2 00:16:10.931: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m8.136863393s Mar 2 00:16:12.935: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.1402401s Mar 2 00:16:14.938: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.143473376s Mar 2 00:16:16.941: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.146223845s Mar 2 00:16:18.944: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.14931394s Mar 2 00:16:20.946: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.152070261s Mar 2 00:16:22.950: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.155338896s Mar 2 00:16:24.953: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m22.158887381s STEP: Saw pod success Mar 2 00:16:24.953: INFO: Pod "pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af" satisfied condition "Succeeded or Failed" Mar 2 00:16:24.955: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af container agnhost-container: STEP: delete the pod Mar 2 00:16:24.966: INFO: Waiting for pod pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af to disappear Mar 2 00:16:24.968: INFO: Pod pod-projected-configmaps-4b20b29a-32a9-4c44-8de7-aeab71f386af no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:24.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1989" for this suite. • [SLOW TEST:82.348 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.655: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename downward-api W0302 00:15:01.740666 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.740: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
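The projected-ConfigMap spec that completed just above mounts a ConfigMap through a projected volume with defaultMode set and expects the consuming pod to finish in phase Succeeded. The sketch below builds an equivalent ConfigMap and pod with the Python Kubernetes client; the names, the 0400 mode, the busybox image and the shell command are assumptions chosen for illustration, not the suite's agnhost setup.

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"                                 # hypothetical namespace

core.create_namespaced_config_map(ns, client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="projected-cm-demo"),
    data={"data-1": "value-1"}))

volume = client.V1Volume(
    name="projected-configmap-volume",
    projected=client.V1ProjectedVolumeSource(
        default_mode=0o400,                    # the file mode under test
        sources=[client.V1VolumeProjection(
            config_map=client.V1ConfigMapProjection(name="projected-cm-demo"))]))

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="projected-cm-demo-pod"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        volumes=[volume],
        containers=[client.V1Container(
            name="checker",
            image="busybox",
            command=["sh", "-c",
                     "stat -c %a /etc/projected/data-1 && cat /etc/projected/data-1"],
            volume_mounts=[client.V1VolumeMount(
                name="projected-configmap-volume", mount_path="/etc/projected")])]))

core.create_namespaced_pod(ns, pod)            # expected to end in phase Succeeded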
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:15:01.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a" in namespace "downward-api-6149" to be "Succeeded or Failed" Mar 2 00:15:01.839: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.555244ms Mar 2 00:15:03.841: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012260513s Mar 2 00:15:05.844: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014682024s Mar 2 00:15:07.847: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017996102s Mar 2 00:15:09.850: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020945399s Mar 2 00:15:11.853: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024274671s Mar 2 00:15:13.856: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026657907s Mar 2 00:15:15.860: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030658444s Mar 2 00:15:17.864: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035330357s Mar 2 00:15:19.868: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.038850527s Mar 2 00:15:21.871: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.04227196s Mar 2 00:15:23.874: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.045010419s Mar 2 00:15:25.877: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.047687332s Mar 2 00:15:27.880: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.051302206s Mar 2 00:15:29.884: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.054811712s Mar 2 00:15:31.887: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.057994821s Mar 2 00:15:33.890: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.06079882s Mar 2 00:15:35.892: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.063195318s Mar 2 00:15:37.895: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.066187277s Mar 2 00:15:39.898: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.069137196s Mar 2 00:15:41.901: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.072029989s Mar 2 00:15:43.903: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.074293918s Mar 2 00:15:45.906: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.077435075s Mar 2 00:15:47.910: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.080840304s Mar 2 00:15:49.914: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 48.084663205s Mar 2 00:15:51.917: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.087998923s Mar 2 00:15:53.919: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.089813956s Mar 2 00:15:55.922: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 54.09281508s Mar 2 00:15:57.925: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.095897149s Mar 2 00:15:59.928: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 58.098837999s Mar 2 00:16:01.931: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.101714369s Mar 2 00:16:03.933: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.103643222s Mar 2 00:16:05.936: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.10663909s Mar 2 00:16:07.939: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.110071668s Mar 2 00:16:09.942: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.113235315s Mar 2 00:16:11.945: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.116268645s Mar 2 00:16:13.947: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.118367392s Mar 2 00:16:15.951: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.121733125s Mar 2 00:16:17.954: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.125151538s Mar 2 00:16:19.958: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m18.128906319s Mar 2 00:16:21.961: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.131997111s Mar 2 00:16:23.964: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.135096668s Mar 2 00:16:25.968: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m24.138912446s STEP: Saw pod success Mar 2 00:16:25.968: INFO: Pod "downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a" satisfied condition "Succeeded or Failed" Mar 2 00:16:25.972: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a container client-container: STEP: delete the pod Mar 2 00:16:25.991: INFO: Waiting for pod downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a to disappear Mar 2 00:16:25.998: INFO: Pod downwardapi-volume-1d6d8c8f-91f7-489a-b016-0792d1368d1a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:25.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6149" for this suite. • [SLOW TEST:84.357 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":62,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.879: INFO: >>> kubeConfig: /tmp/kubeconfig-3780238515 STEP: Building a namespace api object, basename kubectl W0302 00:15:02.570489 199 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.570: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:15:02.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=kubectl-1674 create -f -' Mar 2 00:15:03.282: INFO: stderr: "" Mar 2 00:15:03.282: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Mar 2 00:15:03.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=kubectl-1674 create -f -' Mar 2 00:15:03.881: INFO: stderr: "" Mar 2 00:15:03.881: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
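The downward-API volume spec that finished above exposes the container's memory limit as a file inside the pod and expects the pod to succeed after reading it back. A minimal equivalent pod built with the Python Kubernetes client follows; the 64Mi limit, image and file path are illustrative assumptions.

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

volume = client.V1Volume(
    name="podinfo",
    downward_api=client.V1DownwardAPIVolumeSource(items=[
        client.V1DownwardAPIVolumeFile(
            path="memory_limit",
            resource_field_ref=client.V1ResourceFieldSelector(
                container_name="client-container",
                resource="limits.memory"))]))

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="downwardapi-volume-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        volumes=[volume],
        containers=[client.V1Container(
            name="client-container",
            image="busybox",
            command=["sh", "-c", "cat /etc/podinfo/memory_limit"],
            resources=client.V1ResourceRequirements(limits={"memory": "64Mi"}),
            volume_mounts=[client.V1VolumeMount(
                name="podinfo", mount_path="/etc/podinfo")])]))

core.create_namespaced_pod("default", pod)     # the file should read 67108864 (64Mi in bytes)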
Mar 2 00:15:04.883: INFO: Selector matched 0 pods for map[app:agnhost] Mar 2 00:15:04.883: INFO: Found 0 / 1 Mar 2 00:15:05.883: INFO: Selector matched 0 pods for map[app:agnhost] Mar 2 00:15:05.884: INFO: Found 0 / 1 Mar 2 00:15:06.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:06.883: INFO: Found 0 / 1 Mar 2 00:15:07.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:07.883: INFO: Found 0 / 1 Mar 2 00:15:08.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:08.884: INFO: Found 0 / 1 Mar 2 00:15:09.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:09.883: INFO: Found 0 / 1 Mar 2 00:15:10.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:10.883: INFO: Found 0 / 1 Mar 2 00:15:11.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:11.884: INFO: Found 0 / 1 Mar 2 00:15:12.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:12.883: INFO: Found 0 / 1 Mar 2 00:15:13.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:13.884: INFO: Found 0 / 1 Mar 2 00:15:14.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:14.885: INFO: Found 0 / 1 Mar 2 00:15:15.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:15.883: INFO: Found 0 / 1 Mar 2 00:15:16.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:16.884: INFO: Found 0 / 1 Mar 2 00:15:17.885: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:17.885: INFO: Found 0 / 1 Mar 2 00:15:18.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:18.883: INFO: Found 0 / 1 Mar 2 00:15:19.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:19.883: INFO: Found 0 / 1 Mar 2 00:15:20.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:20.883: INFO: Found 0 / 1 Mar 2 00:15:21.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:21.884: INFO: Found 0 / 1 Mar 2 00:15:22.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:22.884: INFO: Found 0 / 1 Mar 2 00:15:23.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:23.883: INFO: Found 0 / 1 Mar 2 00:15:24.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:24.884: INFO: Found 0 / 1 Mar 2 00:15:25.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:25.883: INFO: Found 0 / 1 Mar 2 00:15:26.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:26.883: INFO: Found 0 / 1 Mar 2 00:15:27.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:27.883: INFO: Found 0 / 1 Mar 2 00:15:28.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:28.883: INFO: Found 0 / 1 Mar 2 00:15:29.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:29.883: INFO: Found 0 / 1 Mar 2 00:15:30.885: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:30.885: INFO: Found 0 / 1 Mar 2 00:15:31.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:31.883: INFO: Found 0 / 1 Mar 2 00:15:32.885: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:32.885: INFO: Found 0 / 1 Mar 2 00:15:33.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:33.884: INFO: Found 0 / 1 Mar 2 00:15:34.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:34.884: INFO: Found 0 / 1 Mar 2 00:15:35.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:35.884: INFO: Found 0 / 1 Mar 2 00:15:36.884: INFO: Selector matched 1 pods for 
map[app:agnhost] Mar 2 00:15:36.884: INFO: Found 0 / 1 Mar 2 00:15:37.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:37.883: INFO: Found 0 / 1 Mar 2 00:15:38.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:38.883: INFO: Found 0 / 1 Mar 2 00:15:39.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:39.883: INFO: Found 0 / 1 Mar 2 00:15:40.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:40.883: INFO: Found 0 / 1 Mar 2 00:15:41.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:41.883: INFO: Found 0 / 1 Mar 2 00:15:42.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:42.883: INFO: Found 0 / 1 Mar 2 00:15:43.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:43.884: INFO: Found 0 / 1 Mar 2 00:15:44.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:44.884: INFO: Found 0 / 1 Mar 2 00:15:45.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:45.883: INFO: Found 0 / 1 Mar 2 00:15:46.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:46.883: INFO: Found 0 / 1 Mar 2 00:15:47.885: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:47.885: INFO: Found 0 / 1 Mar 2 00:15:48.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:48.884: INFO: Found 0 / 1 Mar 2 00:15:49.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:49.884: INFO: Found 0 / 1 Mar 2 00:15:50.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:50.883: INFO: Found 0 / 1 Mar 2 00:15:51.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:51.884: INFO: Found 0 / 1 Mar 2 00:15:52.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:52.883: INFO: Found 0 / 1 Mar 2 00:15:53.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:53.883: INFO: Found 0 / 1 Mar 2 00:15:54.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:54.884: INFO: Found 0 / 1 Mar 2 00:15:55.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:55.884: INFO: Found 0 / 1 Mar 2 00:15:56.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:56.883: INFO: Found 0 / 1 Mar 2 00:15:57.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:57.883: INFO: Found 0 / 1 Mar 2 00:15:58.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:58.883: INFO: Found 0 / 1 Mar 2 00:15:59.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:15:59.883: INFO: Found 0 / 1 Mar 2 00:16:00.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:00.883: INFO: Found 0 / 1 Mar 2 00:16:01.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:01.883: INFO: Found 0 / 1 Mar 2 00:16:02.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:02.884: INFO: Found 0 / 1 Mar 2 00:16:03.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:03.883: INFO: Found 0 / 1 Mar 2 00:16:04.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:04.884: INFO: Found 0 / 1 Mar 2 00:16:05.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:05.883: INFO: Found 0 / 1 Mar 2 00:16:06.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:06.884: INFO: Found 0 / 1 Mar 2 00:16:07.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:07.883: INFO: Found 0 / 1 Mar 2 00:16:08.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:08.884: INFO: Found 0 / 1 Mar 2 
00:16:09.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:09.884: INFO: Found 0 / 1 Mar 2 00:16:10.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:10.883: INFO: Found 0 / 1 Mar 2 00:16:11.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:11.883: INFO: Found 0 / 1 Mar 2 00:16:12.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:12.884: INFO: Found 0 / 1 Mar 2 00:16:13.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:13.883: INFO: Found 0 / 1 Mar 2 00:16:14.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:14.883: INFO: Found 0 / 1 Mar 2 00:16:15.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:15.884: INFO: Found 0 / 1 Mar 2 00:16:16.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:16.883: INFO: Found 0 / 1 Mar 2 00:16:17.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:17.884: INFO: Found 0 / 1 Mar 2 00:16:18.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:18.884: INFO: Found 0 / 1 Mar 2 00:16:19.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:19.883: INFO: Found 0 / 1 Mar 2 00:16:20.883: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:20.883: INFO: Found 0 / 1 Mar 2 00:16:21.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:21.884: INFO: Found 0 / 1 Mar 2 00:16:22.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:22.884: INFO: Found 0 / 1 Mar 2 00:16:23.885: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:23.885: INFO: Found 0 / 1 Mar 2 00:16:24.885: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:24.885: INFO: Found 0 / 1 Mar 2 00:16:25.887: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:25.887: INFO: Found 0 / 1 Mar 2 00:16:26.887: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:26.887: INFO: Found 0 / 1 Mar 2 00:16:27.884: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:27.884: INFO: Found 1 / 1 Mar 2 00:16:27.884: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 2 00:16:27.885: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:16:27.886: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
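The "Found 0 / 1" lines above come from a wait loop that repeatedly lists pods by the app=agnhost label until the expected number are Running. A rough sketch of the same polling pattern with the Python Kubernetes client is shown below; the namespace, the one-second interval and the error handling are assumptions.

import time
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "kubectl-demo"                            # hypothetical namespace

deadline = time.time() + 300                   # the spec waits up to 5m0s
while time.time() < deadline:
    pods = core.list_namespaced_pod(ns, label_selector="app=agnhost").items
    running = [p for p in pods if p.status.phase == "Running"]
    print(f"Found {len(running)} / {len(pods)}")
    if pods and len(running) == len(pods):
        break
    time.sleep(1)
else:
    raise TimeoutError("agnhost pod did not reach Running before the deadline")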
Mar 2 00:16:27.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=kubectl-1674 describe pod agnhost-primary-jtpjp' Mar 2 00:16:27.934: INFO: stderr: "" Mar 2 00:16:27.936: INFO: stdout: "Name: agnhost-primary-jtpjp\nNamespace: kubectl-1674\nPriority: 0\nNode: k3s-server-1-mid5l7/172.17.0.12\nStart Time: Thu, 02 Mar 2023 00:15:06 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.42.0.83\nIPs:\n IP: 10.42.0.83\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://5034d34d3d319c09d5e95b0fdf6660714bdd7f64b61c593547c696fdb10306f6\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 02 Mar 2023 00:15:24 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jkz28 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-jkz28:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 81s default-scheduler Successfully assigned kubectl-1674/agnhost-primary-jtpjp to k3s-server-1-mid5l7\n Normal Pulled 63s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 63s kubelet Created container agnhost-primary\n Normal Started 63s kubelet Started container agnhost-primary\n" Mar 2 00:16:27.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=kubectl-1674 describe rc agnhost-primary' Mar 2 00:16:27.986: INFO: stderr: "" Mar 2 00:16:27.987: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1674\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 81s replication-controller Created pod: agnhost-primary-jtpjp\n" Mar 2 00:16:27.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=kubectl-1674 describe service agnhost-primary' Mar 2 00:16:28.035: INFO: stderr: "" Mar 2 00:16:28.035: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1674\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.43.58.145\nIPs: 10.43.58.145\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.42.0.83:6379\nSession Affinity: None\nEvents: \n" Mar 2 00:16:28.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=kubectl-1674 describe node 
k3s-server-1-mid5l7' Mar 2 00:16:28.124: INFO: stderr: "" Mar 2 00:16:28.124: INFO: stdout: "Name: k3s-server-1-mid5l7\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=k3s\n beta.kubernetes.io/os=linux\n egress.k3s.io/cluster=true\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k3s-server-1-mid5l7\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=true\n node-role.kubernetes.io/master=true\n node.kubernetes.io/instance-type=k3s\nAnnotations: flannel.alpha.coreos.com/backend-data: {\"VNI\":1,\"VtepMAC\":\"ce:a5:ad:56:3f:67\"}\n flannel.alpha.coreos.com/backend-type: vxlan\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 172.17.0.12\n k3s.io/hostname: k3s-server-1-mid5l7\n k3s.io/internal-ip: 172.17.0.12\n k3s.io/node-args: [\"server\",\"--disable\",\"traefik\"]\n k3s.io/node-config-hash: 7CVLL6YFSZDXXCN2STAW7UVXT5NAWIE2QDGPTPYGFHYIOL6A6JIQ====\n k3s.io/node-env: {\"K3S_DEBUG\":\"true\",\"K3S_TOKEN\":\"********\"}\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 02 Mar 2023 00:14:22 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: k3s-server-1-mid5l7\n AcquireTime: \n RenewTime: Thu, 02 Mar 2023 00:16:24 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 02 Mar 2023 00:15:24 +0000 Thu, 02 Mar 2023 00:14:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 02 Mar 2023 00:15:24 +0000 Thu, 02 Mar 2023 00:14:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 02 Mar 2023 00:15:24 +0000 Thu, 02 Mar 2023 00:14:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 02 Mar 2023 00:15:24 +0000 Thu, 02 Mar 2023 00:14:32 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.12\n Hostname: k3s-server-1-mid5l7\nCapacity:\n cpu: 48\n ephemeral-storage: 921594604Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65435484Ki\n pods: 110\nAllocatable:\n cpu: 48\n ephemeral-storage: 896527230069\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65435484Ki\n pods: 110\nSystem Info:\n Machine ID: \n System UUID: 4c4c4544-0053-3110-8054-c6c04f4d4833\n Boot ID: 93aed2db-6b51-4fd3-a622-d32bd6e1484a\n Kernel Version: 5.15.0-53-generic\n OS Image: K3s dev\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.16-k3s2-1-22\n Kubelet Version: v1.23.16+k3s1\n Kube-Proxy Version: v1.23.16+k3s1\nPodCIDR: 10.42.0.0/24\nPodCIDRs: 10.42.0.0/24\nProviderID: k3s://k3s-server-1-mid5l7\nNon-terminated Pods: (42 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system local-path-provisioner-5f8bbd68f9-dn7tg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 116s\n kube-system coredns-5cfbb9f57c-4jk49 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 116s\n projected-8087 downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61s\n statefulset-3679 ss-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83s\n deployment-9669 test-rolling-update-controller-pszw7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n deployment-5316 webserver-deployment-5d9fdcc779-tnvvs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n deployment-9669 
test-rolling-update-deployment-8656fc4b57-hqj4l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17s\n services-7722 externalsvc-5wlvx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 84s\n deployment-5316 webserver-deployment-5d9fdcc779-kp2z2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n statefulset-7827 ss2-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n svcaccounts-6009 pod-service-account-1007c0c2-0ac7-4aab-9b36-b8e293b7ef91 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15s\n container-probe-5565 liveness-56957831-0b06-4189-9a80-834657d2d97d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n pod-network-test-1529 netserver-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n subpath-6792 pod-subpath-test-configmap-4f9d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13s\n deployment-5316 webserver-deployment-5d9fdcc779-xp9l5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n services-4876 affinity-nodeport-n64cx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s\n services-4876 affinity-nodeport-p9bh7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s\n projected-2596 pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12s\n kubectl-6231 e2e-test-httpd-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11s\n var-expansion-4249 var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10s\n deployment-5316 webserver-deployment-5d9fdcc779-zsgqs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n secrets-4751 pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s\n pod-network-test-925 netserver-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n sonobuoy sonobuoy 0 (0%) 0 (0%) 0 (0%) 0 (0%) 94s\n init-container-3828 pod-init-5fd11ea1-7d7b-466b-be47-d5b388c0d6e4 100m (0%) 100m (0%) 0 (0%) 0 (0%) 8s\n cronjob-1335 concurrent-27961936-pwv5f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28s\n replicaset-6016 pod-adoption-release 0 (0%) 0 (0%) 0 (0%) 0 (0%) 85s\n replicaset-6016 pod-adoption-release-nhf6v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6s\n svc-latency-3253 svc-latency-rc-xj9gn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n projected-6498 pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5s\n replicaset-6179 test-rs-qltbk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n container-probe-4531 liveness-5eb30b52-1026-42cd-9f18-5869407cb659 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4s\n replicaset-8117 test-rs-v5gsf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 87s\n var-expansion-6892 var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4s\n replicaset-6179 test-rs-wzppd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s\n job-8952 foo-22h4q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n deployment-5316 webserver-deployment-5d9fdcc779-ftxm8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n deployment-5316 webserver-deployment-566f96c878-t484t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s\n deployment-5316 webserver-deployment-566f96c878-96w6c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s\n deployment-5316 webserver-deployment-566f96c878-clwm8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s\n projected-4270 labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 86s\n kubectl-1674 agnhost-primary-jtpjp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 82s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 200m (0%) 100m (0%)\n memory 70Mi (0%) 170Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 2m4s kube-proxy \n Normal Starting 2m6s kubelet Starting kubelet.\n Warning InvalidDiskCapacity 2m6s kubelet invalid capacity 0 on image filesystem\n Normal NodeAllocatableEnforced 2m6s kubelet Updated Node Allocatable limit across pods\n Normal 
NodeHasSufficientMemory 2m6s (x2 over 2m6s) kubelet Node k3s-server-1-mid5l7 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 2m6s (x2 over 2m6s) kubelet Node k3s-server-1-mid5l7 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 2m6s (x2 over 2m6s) kubelet Node k3s-server-1-mid5l7 status is now: NodeHasSufficientPID\n Normal NodeReady 116s kubelet Node k3s-server-1-mid5l7 status is now: NodeReady\n" Mar 2 00:16:28.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=kubectl-1674 describe namespace kubectl-1674' Mar 2 00:16:28.181: INFO: stderr: "" Mar 2 00:16:28.181: INFO: stdout: "Name: kubectl-1674\nLabels: e2e-framework=kubectl\n e2e-run=b4b0c711-27a3-4827-aa33-e0614493d2db\n kubernetes.io/metadata.name=kubectl-1674\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:28.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1674" for this suite. • [SLOW TEST:86.310 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1109 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.796: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 STEP: Building a namespace api object, basename projected W0302 00:15:02.293150 338 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.293: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
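The Kubectl-describe spec that finished above shells out to kubectl describe for the pod, replication controller, service, node and namespace it created, and checks that the expected fields show up in the output. A rough equivalent of those checks using Python's subprocess module is sketched below; the namespace, pod name and the exact set of expected fields are assumptions.

import subprocess

def describe(kind, name, namespace):
    """Run `kubectl describe` and return its stdout."""
    result = subprocess.run(
        ["kubectl", "-n", namespace, "describe", kind, name],
        check=True, capture_output=True, text=True)
    return result.stdout

pod_out = describe("pod", "agnhost-primary-demo", "kubectl-demo")
for expected in ("Name:", "Namespace:", "Node:", "Labels:", "Status:", "Controlled By:"):
    assert expected in pod_out, f"missing {expected!r} in pod describe output"

rc_out = describe("rc", "agnhost-primary", "kubectl-demo")
assert "Replicas:" in rc_out and "Pods Status:" in rc_out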
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating the pod Mar 2 00:15:02.340: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:04.341: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:06.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:08.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:10.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:12.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:14.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:16.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:18.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:20.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:22.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:24.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:26.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:28.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:30.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:32.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:34.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:36.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:38.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:40.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is 
Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:42.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:44.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:46.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:48.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:50.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:52.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:54.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:56.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:58.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:00.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:02.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:04.341: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:06.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:08.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:10.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:12.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:14.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:16.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:18.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:20.342: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:22.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:24.343: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:26.344: INFO: The status of Pod labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c is Running 
(Ready = true)
Mar 2 00:16:26.868: INFO: Successfully updated pod "labelsupdate5c643816-2fa2-4281-90f5-1a0e23b2420c"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:16:28.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4270" for this suite.
• [SLOW TEST:87.109 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should update labels on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":89,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
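The spec that just passed exercises the projected downward API volume: the pod mounts its own metadata.labels as a file, the test then updates the labels on the live object, and the kubelet rewrites the mounted file without restarting the container (that update is the "Successfully updated pod" line above). A minimal sketch of that kind of pod follows; it only builds the object with the k8s.io/api Go types, and the names "labelsupdate-demo", "podinfo" and the busybox image are placeholders, not the values the e2e framework generates.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose own labels are projected into /etc/podinfo/labels.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo", // placeholder name
			Labels: map[string]string{"time": "v1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Volumes, "", "  ")
	fmt.Println(string(out))
}

Changing pod.ObjectMeta.Labels on the server (for example with a patch) is what triggers the kubelet to rewrite the projected file in place, which is the behaviour this spec asserts.

The next block in the log is the [sig-node] Container Runtime spec for TerminationMessagePolicy: FallbackToLogsOnError. The fallback only matters on failure; when the container exits 0 and writes nothing to the termination-message path, the reported message stays empty, which is what the "Expected: &{} to match Container's Termination Message: --" line below checks. A container spec with that policy looks roughly like this (again a sketch, not the framework's own fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container", // placeholder name
		Image:   "busybox",                       // placeholder image
		Command: []string{"sh", "-c", "exit 0"},  // succeeds and writes no message
		// On a non-zero exit with an empty /dev/termination-log, the kubelet
		// would instead fall back to the tail of the container log.
		TerminationMessagePath:   corev1.TerminationMessagePathDefault,
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Printf("policy=%s path=%s\n", c.TerminationMessagePolicy, c.TerminationMessagePath)
}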
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:15:01.919: INFO: >>> kubeConfig: /tmp/kubeconfig-1867053252
STEP: Building a namespace api object, basename container-runtime
W0302 00:15:02.621980 291 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Mar 2 00:15:02.622: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 2 00:16:28.915: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:16:28.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7100" for this suite.
• [SLOW TEST:87.013 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
blackbox test
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
on terminated container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:15:27.806: INFO: >>> kubeConfig: /tmp/kubeconfig-178066528
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Mar 2 00:15:27.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285" in namespace "projected-8087" to be "Succeeded or Failed"
Mar 2 00:15:27.826: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061393ms
Mar 2 00:15:29.829: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004883434s
Mar 2 00:15:31.832: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00786013s
Mar 2 00:15:33.835: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010532029s
Mar 2 00:15:35.837: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012521914s
Mar 2 00:15:37.839: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015147948s
Mar 2 00:15:39.842: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 12.018022904s
Mar 2 00:15:41.845: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false.
Elapsed: 14.020979649s Mar 2 00:15:43.847: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023264868s Mar 2 00:15:45.850: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 18.025866731s Mar 2 00:15:47.854: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 20.029940508s Mar 2 00:15:49.857: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 22.032855476s Mar 2 00:15:51.859: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 24.035396777s Mar 2 00:15:53.862: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 26.037536235s Mar 2 00:15:55.865: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 28.040506861s Mar 2 00:15:57.867: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 30.043274101s Mar 2 00:15:59.870: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 32.04617159s Mar 2 00:16:01.873: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 34.048945725s Mar 2 00:16:03.875: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 36.051001338s Mar 2 00:16:05.878: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 38.054011713s Mar 2 00:16:07.881: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 40.05668511s Mar 2 00:16:09.884: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 42.059512238s Mar 2 00:16:11.886: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 44.06151391s Mar 2 00:16:13.888: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 46.063815462s Mar 2 00:16:15.892: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 48.068032971s Mar 2 00:16:17.895: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 50.071045593s Mar 2 00:16:19.898: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 52.074099834s Mar 2 00:16:21.900: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 54.076460991s Mar 2 00:16:23.905: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 56.080538846s Mar 2 00:16:25.909: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.085105398s
Mar 2 00:16:27.912: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.088150333s
Mar 2 00:16:29.917: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m2.093321633s
STEP: Saw pod success
Mar 2 00:16:29.917: INFO: Pod "downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285" satisfied condition "Succeeded or Failed"
Mar 2 00:16:29.919: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285 container client-container:
STEP: delete the pod
Mar 2 00:16:29.935: INFO: Waiting for pod downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285 to disappear
Mar 2 00:16:29.940: INFO: Pod downwardapi-volume-0527d7b8-28ad-4e5a-aaea-d23b051c0285 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:16:29.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8087" for this suite.
• [SLOW TEST:62.151 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}
SSSSSSSS
------------------------------
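The spec that just passed relies on downward API resourceFieldRef defaulting: the client-container declares no CPU limit, so the value exposed for limits.cpu falls back to the node's allocatable CPU, which is exactly what the spec name asserts. A sketch of the projected volume item involved follows; the volume name, file path and divisor are illustrative assumptions, not the framework's exact fixture.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Expose the container's effective CPU limit as a file. With no limit set on
	// the container, the kubelet substitutes the node's allocatable CPU.
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit", // assumed file name
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.cpu",
			Divisor:       resource.MustParse("1m"), // report in millicores (assumed divisor)
		},
	}
	vol := corev1.Volume{
		Name: "podinfo", // assumed volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{Items: []corev1.DownwardAPIVolumeFile{item}},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

The next block is the [sig-apps] Deployment proportional scaling spec: it creates a 10-replica "webserver-deployment", switches the rollout to a non-existent image (webserver:404) so the update stalls part-way, then scales the Deployment from 10 to 30 and checks that the added replicas are spread across both ReplicaSets rather than all landing on the newest one (the .spec.replicas = 20 and = 13 verifications below, 33 pods in total, which is 30 desired plus maxSurge 3, with at least 8 of the original 10 still available because maxUnavailable is 2). A hedged sketch of the strategy fields that drive those numbers, built directly from the apps/v1 types rather than the e2e framework's helpers:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(10)
	maxSurge := intstr.FromInt(3)       // up to 3 pods above the desired count
	maxUnavailable := intstr.FromInt(2) // at most 2 below it
	labels := map[string]string{"name": "httpd"}

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2",
					}},
				},
			},
		},
	}
	// Scaling this Deployment from 10 to 30 while the webserver:404 rollout is
	// stuck is what produces the 20 + 13 = 33 replica split dumped below.
	fmt.Printf("%s: maxSurge=%s maxUnavailable=%s\n", d.Name, maxSurge.String(), maxUnavailable.String())
}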
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:15:01.893: INFO: >>> kubeConfig: /tmp/kubeconfig-70536112
STEP: Building a namespace api object, basename deployment
W0302 00:15:02.616164 197 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Mar 2 00:15:02.616: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Mar 2 00:15:02.618: INFO: Creating deployment "webserver-deployment"
Mar 2 00:15:02.635: INFO: Waiting for observed generation 1
Mar 2 00:15:04.664: INFO: Waiting for all required pods to come up
Mar 2 00:15:04.667: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 2 00:16:26.682: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 2 00:16:26.687: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 2 00:16:26.692: INFO: Updating deployment webserver-deployment
Mar 2 00:16:26.692: INFO: Waiting for observed generation 2
Mar 2 00:16:28.700: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 2 00:16:28.702: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 2 00:16:28.704: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 2 00:16:28.710: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 2 00:16:28.710: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 2 00:16:28.712: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 2 00:16:28.715: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 2 00:16:28.715: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 2 00:16:28.720: INFO: Updating deployment webserver-deployment
Mar 2 00:16:28.720: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 2 00:16:28.724: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 2 00:16:28.727: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Mar 2 00:16:30.735: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5316 18acef12-6983-461a-a5d1-08f0e3c26fa8 5722 3 2023-03-02 00:15:02 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00420d428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-03-02 00:16:28 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2023-03-02 00:16:29 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 2 00:16:30.738: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-5316 255254a9-d3f0-4b66-8855-3e2a488b7abb 5721 3 2023-03-02 00:16:26 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 18acef12-6983-461a-a5d1-08f0e3c26fa8 0xc00432c5a7 0xc00432c5a8}] [] [{k3s Update apps/v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18acef12-6983-461a-a5d1-08f0e3c26fa8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00432c658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:16:30.738: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 2 00:16:30.738: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-5316 0495b83e-7272-4dd2-bb28-47c8ffaf694d 5676 3 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 18acef12-6983-461a-a5d1-08f0e3c26fa8 0xc00432c4a7 0xc00432c4a8}] [] [{k3s Update apps/v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"18acef12-6983-461a-a5d1-08f0e3c26fa8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:10 +0000 UTC FieldsV1 
{"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00432c548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:16:30.743: INFO: Pod "webserver-deployment-5d9fdcc779-tnvvs" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-tnvvs webserver-deployment-5d9fdcc779- deployment-5316 a76da61e-d47c-4bb7-8a61-0c7752f0b4f2 4209 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432cb10 0xc00432cb11}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4v69g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4v69g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.44,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://145e865caf3b1d3e8610d519dc3a66f058623215527318ea55b10057ab5c52f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.743: INFO: Pod "webserver-deployment-5d9fdcc779-kp2z2" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-kp2z2 webserver-deployment-5d9fdcc779- deployment-5316 1c452aed-0101-4b7c-adb3-741d3f53a86a 4292 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432cce0 0xc00432cce1}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2lp89,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2lp89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.41,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://201d355b16ebce9a224aded6a1b82a9a184ccc753c245318ff2f039fc51873fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.743: INFO: Pod "webserver-deployment-5d9fdcc779-xp9l5" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-xp9l5 webserver-deployment-5d9fdcc779- deployment-5316 da9987ca-27c1-4f3f-9435-4b45fff0bf58 4376 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432ceb0 0xc00432ceb1}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qknzk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qknzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.43,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0c9a2524e28812cbbe08b0515ad6456d0610c64d7104cc2db33f969f1441fab6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.743: INFO: Pod "webserver-deployment-5d9fdcc779-p9ddt" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-p9ddt webserver-deployment-5d9fdcc779- deployment-5316 be279d61-a3e7-486a-8434-6c50e2fe9ff0 4496 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432d080 0xc00432d081}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.1.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wwjp6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wwjp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:
Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.42.1.57,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://f99f842a157a43bc4ed892966686c3c36b4a81c2e10b4312df4443983ca8aa66,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.743: INFO: Pod "webserver-deployment-5d9fdcc779-zsgqs" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-zsgqs webserver-deployment-5d9fdcc779- deployment-5316 b8ff7964-c507-4078-b174-d42563b48f3b 4500 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432d250 0xc00432d251}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-plwtt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plwtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.39,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://a5f9ea59e2f7e774fdc5f96fc9264f15300ebdd779a5bf8a4bae9a64949e48a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.743: INFO: Pod "webserver-deployment-5d9fdcc779-h8w97" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-h8w97 webserver-deployment-5d9fdcc779- deployment-5316 0c70d6df-4fa5-4f9c-adef-384436cb98be 4959 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432d420 0xc00432d421}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.1.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hppx9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hppx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:
Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.42.1.33,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://c95f3fed8c84569928c0543e68a184eec31f5883a0ff787fa38f044fef1faa09,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.1.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.743: INFO: Pod "webserver-deployment-5d9fdcc779-ftxm8" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-ftxm8 webserver-deployment-5d9fdcc779- deployment-5316 49c1026c-7d37-4e92-afda-68c216ac2fa3 5257 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432d5f0 0xc00432d5f1}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-svprt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svprt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.46,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://8b0dbdfd9a1c7f0ab7451e96201f73c78e094d2a6b4c8f141d2d2a43e0b572f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-5d9fdcc779-wqd2d" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-wqd2d webserver-deployment-5d9fdcc779- deployment-5316 50b772f7-ba5a-41b0-aaed-087c27b641ba 5268 0 2023-03-02 00:15:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc00432d7c0 0xc00432d7c1}] [] [{k3s Update v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.1.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4bzs5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bzs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:
Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:15:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.42.1.61,StartTime:2023-03-02 00:15:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:15:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9e81e6f52a23715159e4affb26aaeb8a6ca2a9c09c748023479dbf2411f5696a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.1.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-566f96c878-8b9mc" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-8b9mc webserver-deployment-566f96c878- deployment-5316 11bafd4e-8324-4797-b77e-4bb695b71f99 5316 0 2023-03-02 00:16:26 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc00432d990 0xc00432d991}] [] [{k3s Update v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jdwwk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jdwwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-566f96c878-96w6c" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-96w6c webserver-deployment-566f96c878- deployment-5316 4366895e-d681-460b-a84e-2811f063edb7 5317 0 2023-03-02 00:16:26 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc00432daf0 0xc00432daf1}] [] [{k3s Update v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tmzv7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmzv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-566f96c878-7tzhm" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-7tzhm webserver-deployment-566f96c878- deployment-5316 583b83e7-c29a-4448-841e-f2ac5fa4b155 5342 0 2023-03-02 00:16:26 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc00432dc50 0xc00432dc51}] [] [{k3s Update v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2sszf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2sszf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-566f96c878-clwm8" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-clwm8 webserver-deployment-566f96c878- deployment-5316 327ca2e2-176b-4a1a-b0e6-fbf262866d5d 5345 0 2023-03-02 00:16:26 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc00432ddb0 0xc00432ddb1}] [] [{k3s Update v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ntlm9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntlm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-566f96c878-t484t" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-t484t webserver-deployment-566f96c878- deployment-5316 f42aa874-7020-4efe-b54c-02f2052f535b 5588 0 2023-03-02 00:16:26 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc00432df10 0xc00432df11}] [] [{k3s Update v1 2023-03-02 00:16:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sn76v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sn76v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,Las
tProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2023-03-02 00:16:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-5d9fdcc779-6t2pw" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6t2pw webserver-deployment-5d9fdcc779- deployment-5316 17ef28d4-3886-44c1-947b-0ca8eb8bd0b5 5600 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c80e0 0xc0044c80e1}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kfzst,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfzst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSch
eduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-5d9fdcc779-x25qn" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-x25qn webserver-deployment-5d9fdcc779- deployment-5316 fca12537-f1c7-4653-9c27-faff022b57aa 5610 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c8230 0xc0044c8231}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wnzx4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnzx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-5d9fdcc779-8trl7" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-8trl7 webserver-deployment-5d9fdcc779- deployment-5316 7378049e-06a7-464b-80c6-53ffd0cdfb15 5611 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c8380 0xc0044c8381}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xb5d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xb5d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSc
heduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-566f96c878-p84rn" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-p84rn webserver-deployment-566f96c878- deployment-5316 a93bf061-e767-4198-9185-32e332b9c39f 5615 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c84d0 0xc0044c84d1}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m5s6l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5s6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-5d9fdcc779-qxjmj" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-qxjmj webserver-deployment-5d9fdcc779- deployment-5316 a357155c-a0ed-4ce7-9cbb-e2893d335397 5625 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c8630 0xc0044c8631}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p87jm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p87jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSch
eduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-5d9fdcc779-48nrb" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-48nrb webserver-deployment-5d9fdcc779- deployment-5316 4b1214bb-cc43-4000-89b0-8373b2be82c2 5626 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c8780 0xc0044c8781}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4w8sp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4w8sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.744: INFO: Pod "webserver-deployment-5d9fdcc779-wn7s2" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-wn7s2 webserver-deployment-5d9fdcc779- deployment-5316 10961fd5-8727-491a-8d87-a1a813e529db 5627 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c88d0 0xc0044c88d1}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6dx26,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dx26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSc
heduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.745: INFO: Pod "webserver-deployment-566f96c878-pq987" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-pq987 webserver-deployment-566f96c878- deployment-5316 c8bb2693-5063-45e6-8891-9fe155fc3185 5629 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c8a20 0xc0044c8a21}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fpm47,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fpm47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.745: INFO: Pod "webserver-deployment-5d9fdcc779-zxqnc" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-zxqnc webserver-deployment-5d9fdcc779- deployment-5316 a7db1ec7-9d90-43f9-ac90-f374b4516d00 5631 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c8b80 0xc0044c8b81}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vbjt7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vbjt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSc
heduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.745: INFO: Pod "webserver-deployment-5d9fdcc779-77nww" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-77nww webserver-deployment-5d9fdcc779- deployment-5316 f1c11de4-f366-4720-8012-d2af976766b1 5637 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c8cd0 0xc0044c8cd1}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cpmkp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cpmkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:
[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.746: INFO: Pod "webserver-deployment-566f96c878-fdj6r" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-fdj6r webserver-deployment-566f96c878- deployment-5316 c364b78e-bf6f-4ec3-a3d8-113b70226fb3 5638 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c8e20 0xc0044c8e21}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-khslv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khslv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.746: INFO: Pod "webserver-deployment-5d9fdcc779-4cscr" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-4cscr webserver-deployment-5d9fdcc779- deployment-5316 c9c11382-ffcb-485c-9d30-9d14e03afbcf 5644 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c8f80 0xc0044c8f81}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-grlsn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grlsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe
:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.746: INFO: Pod "webserver-deployment-5d9fdcc779-kgwkj" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-kgwkj webserver-deployment-5d9fdcc779- deployment-5316 e4397f8d-83ac-4c9c-a41d-7d1770f3c591 5645 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c90d0 0xc0044c90d1}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xxrsw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xxrsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSch
eduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.747: INFO: Pod "webserver-deployment-566f96c878-shs89" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-shs89 webserver-deployment-566f96c878- deployment-5316 d065779c-588a-41ad-81ad-964b5e55b1f3 5648 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c9220 0xc0044c9221}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p5876,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5876,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe
:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.747: INFO: Pod "webserver-deployment-5d9fdcc779-wf4s9" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-wf4s9 webserver-deployment-5d9fdcc779- deployment-5316 4fe83e8a-0387-42bb-aa0a-7e317c91e527 5649 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c9380 0xc0044c9381}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8hbnl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hbnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSc
heduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.747: INFO: Pod "webserver-deployment-5d9fdcc779-vz82m" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-vz82m webserver-deployment-5d9fdcc779- deployment-5316 21ccdc19-849d-4059-ab96-4eb898bc15a3 5656 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 0495b83e-7272-4dd2-bb28-47c8ffaf694d 0xc0044c94d0 0xc0044c94d1}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0495b83e-7272-4dd2-bb28-47c8ffaf694d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2r4lc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2r4lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:
[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.747: INFO: Pod "webserver-deployment-566f96c878-qxdz6" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-qxdz6 webserver-deployment-566f96c878- deployment-5316 a98f0094-d0a3-4381-9c29-b28421a80331 5658 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c9620 0xc0044c9621}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wrkws,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrkws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.747: INFO: Pod "webserver-deployment-566f96c878-g6pxd" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-g6pxd webserver-deployment-566f96c878- deployment-5316 97db7793-e87e-43ed-a077-08bbf0cf4259 5659 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c9780 0xc0044c9781}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vp6gl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vp6gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.747: INFO: Pod "webserver-deployment-566f96c878-d88cg" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-d88cg webserver-deployment-566f96c878- deployment-5316 a311b938-9935-4ae6-8b77-d20a2d2b0cae 5663 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c98e0 0xc0044c98e1}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9cmv7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cmv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:16:30.747: INFO: Pod "webserver-deployment-566f96c878-g75dj" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-g75dj webserver-deployment-566f96c878- deployment-5316 4a7780ce-05f0-4f55-84f1-9e6b5416ed61 5672 0 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 255254a9-d3f0-4b66-8855-3e2a488b7abb 0xc0044c9a40 0xc0044c9a41}] [] [{k3s Update v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255254a9-d3f0-4b66-8855-3e2a488b7abb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jzg4m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzg4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Alway
s,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:16:30.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5316" for this suite.
• [SLOW TEST:88.861 seconds]
[sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:15:01.761: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705
STEP: Building a namespace api object, basename svc-latency
W0302 00:15:02.071710 350 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Mar 2 00:15:02.071: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
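The entries that follow come from the [sig-network] Service endpoints latency spec: it creates the svc-latency-rc replication controller, polls until its pod is Running, then repeatedly creates Services and records how long each Service's Endpoints object takes to be populated (the "Got endpoints: latency-svc-... [N ms]" lines). As a rough illustration of that measurement loop, here is a minimal client-go sketch; it is not the e2e framework's own helper, and the selector label (name: svc-latency-rc), the port, and the kubeconfig path are assumptions made for the example.

// endpoint_latency_sketch.go - illustrative only; not part of the e2e suite.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// endpointLatency creates a Service selecting the test pods and returns the
// time from Service creation until its Endpoints object has at least one
// ready address, which is roughly what each "[N ms]" figure in the log reports.
func endpointLatency(ctx context.Context, cs kubernetes.Interface, ns, name string) (time.Duration, error) {
	start := time.Now()

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			// Label assumed to match the pods managed by svc-latency-rc.
			Selector: map[string]string{"name": "svc-latency-rc"},
			Ports:    []corev1.ServicePort{{Port: 80, Protocol: corev1.ProtocolTCP}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return 0, err
	}

	// Watch the same-named Endpoints object and stop once it has addresses.
	w, err := cs.CoreV1().Endpoints(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return 0, err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		ep, ok := ev.Object.(*corev1.Endpoints)
		if !ok {
			continue
		}
		for _, ss := range ep.Subsets {
			if len(ss.Addresses) > 0 {
				return time.Since(start), nil
			}
		}
	}
	return 0, fmt.Errorf("watch closed before endpoints of %s/%s were ready", ns, name)
}

func main() {
	// Kubeconfig path, namespace, and service name are placeholders.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	d, err := endpointLatency(context.Background(), cs, "svc-latency-3253", "latency-svc-example")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got endpoints: latency-svc-example [%v]\n", d)
}

Each "Created: latency-svc-..." / "Got endpoints: ... [duration]" pair below corresponds to one such create-and-wait cycle; the spec name "should not be very high" refers to the assertion, made after many cycles, that the collected latencies stay within the test's limits.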
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:15:02.074: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 STEP: creating replication controller svc-latency-rc in namespace svc-latency-3253 I0302 00:15:02.097600 350 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3253, replica count: 1 I0302 00:15:03.148525 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:04.148782 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:05.149886 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:06.150903 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:07.150985 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:08.151082 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:09.151313 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:10.151404 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:11.151484 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:12.151555 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:13.151636 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:14.151838 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:15.151928 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:16.158948 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:17.159066 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:18.159435 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:19.160368 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0302 00:15:20.160771 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:21.161033 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:22.161127 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:23.161241 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:24.161377 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:25.161470 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:26.161560 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:27.161645 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:28.161738 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:29.161893 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:30.162904 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:31.163004 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:32.163115 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:33.164077 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:34.164378 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:35.164474 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:36.164576 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:37.164662 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:38.164750 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:39.165049 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:40.165137 350 
runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:41.165243 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:42.165323 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:43.165428 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:44.165651 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:45.165734 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:46.165843 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:47.166900 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:48.166986 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:49.167743 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:50.167833 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:51.167937 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:52.168029 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:53.168135 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:54.168283 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:55.168376 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:56.168461 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:57.168532 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:58.168609 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:59.168910 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:00.168997 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 
created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:01.169116 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:02.169183 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:03.169712 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:04.169846 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:05.170949 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:06.171049 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:07.171150 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:08.171245 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:09.171542 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:10.171625 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:11.171721 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:12.171809 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:13.171922 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:14.172157 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:15.172271 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:16.172701 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:17.172919 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:18.173113 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:19.173357 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:20.173442 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 
0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:21.173550 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:22.173649 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:23.174690 350 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:16:23.291: INFO: Created: latency-svc-24hr7 Mar 2 00:16:23.295: INFO: Got endpoints: latency-svc-24hr7 [19.305492ms] Mar 2 00:16:23.301: INFO: Created: latency-svc-qnwjk Mar 2 00:16:23.307: INFO: Created: latency-svc-8f9fk Mar 2 00:16:23.308: INFO: Got endpoints: latency-svc-qnwjk [13.227896ms] Mar 2 00:16:23.313: INFO: Got endpoints: latency-svc-8f9fk [17.817598ms] Mar 2 00:16:23.315: INFO: Created: latency-svc-56xbt Mar 2 00:16:23.323: INFO: Created: latency-svc-p84d5 Mar 2 00:16:23.324: INFO: Got endpoints: latency-svc-56xbt [29.47366ms] Mar 2 00:16:23.328: INFO: Got endpoints: latency-svc-p84d5 [32.904382ms] Mar 2 00:16:23.330: INFO: Created: latency-svc-zpxpm Mar 2 00:16:23.335: INFO: Got endpoints: latency-svc-zpxpm [40.713434ms] Mar 2 00:16:23.336: INFO: Created: latency-svc-gdxzs Mar 2 00:16:23.341: INFO: Got endpoints: latency-svc-gdxzs [46.285249ms] Mar 2 00:16:23.343: INFO: Created: latency-svc-zhrdv Mar 2 00:16:23.349: INFO: Got endpoints: latency-svc-zhrdv [54.20454ms] Mar 2 00:16:23.351: INFO: Created: latency-svc-bd9nq Mar 2 00:16:23.358: INFO: Got endpoints: latency-svc-bd9nq [63.024321ms] Mar 2 00:16:23.358: INFO: Created: latency-svc-7m6ln Mar 2 00:16:23.364: INFO: Got endpoints: latency-svc-7m6ln [68.812897ms] Mar 2 00:16:23.366: INFO: Created: latency-svc-hz92t Mar 2 00:16:23.371: INFO: Got endpoints: latency-svc-hz92t [76.547647ms] Mar 2 00:16:23.373: INFO: Created: latency-svc-jjbrc Mar 2 00:16:23.378: INFO: Got endpoints: latency-svc-jjbrc [83.012498ms] Mar 2 00:16:23.379: INFO: Created: latency-svc-vkkcr Mar 2 00:16:23.385: INFO: Got endpoints: latency-svc-vkkcr [89.833644ms] Mar 2 00:16:23.386: INFO: Created: latency-svc-dpmfm Mar 2 00:16:23.393: INFO: Got endpoints: latency-svc-dpmfm [98.484533ms] Mar 2 00:16:23.394: INFO: Created: latency-svc-vpcqh Mar 2 00:16:23.400: INFO: Got endpoints: latency-svc-vpcqh [105.498496ms] Mar 2 00:16:23.401: INFO: Created: latency-svc-hnvq8 Mar 2 00:16:23.407: INFO: Got endpoints: latency-svc-hnvq8 [111.984456ms] Mar 2 00:16:23.408: INFO: Created: latency-svc-sjql7 Mar 2 00:16:23.416: INFO: Created: latency-svc-bgzt6 Mar 2 00:16:23.416: INFO: Got endpoints: latency-svc-sjql7 [108.462081ms] Mar 2 00:16:23.421: INFO: Got endpoints: latency-svc-bgzt6 [108.518398ms] Mar 2 00:16:23.422: INFO: Created: latency-svc-b6rrp Mar 2 00:16:23.428: INFO: Got endpoints: latency-svc-b6rrp [104.066348ms] Mar 2 00:16:23.428: INFO: Created: latency-svc-kp6vw Mar 2 00:16:23.433: INFO: Got endpoints: latency-svc-kp6vw [105.438933ms] Mar 2 00:16:23.435: INFO: Created: latency-svc-9vzct Mar 2 00:16:23.440: INFO: Got endpoints: latency-svc-9vzct [104.417154ms] Mar 2 00:16:23.441: INFO: Created: latency-svc-679lf Mar 2 00:16:23.445: INFO: Got endpoints: latency-svc-679lf [103.851591ms] Mar 2 00:16:23.447: INFO: Created: latency-svc-fhchp Mar 2 00:16:23.453: INFO: Got endpoints: latency-svc-fhchp [104.533124ms] Mar 2 00:16:23.454: INFO: Created: latency-svc-tt8vb Mar 2 00:16:23.460: 
INFO: Created: latency-svc-t8cx5 Mar 2 00:16:23.460: INFO: Got endpoints: latency-svc-tt8vb [102.528408ms] Mar 2 00:16:23.466: INFO: Got endpoints: latency-svc-t8cx5 [102.583883ms] Mar 2 00:16:23.467: INFO: Created: latency-svc-zbdfm Mar 2 00:16:23.472: INFO: Got endpoints: latency-svc-zbdfm [100.96458ms] Mar 2 00:16:23.475: INFO: Created: latency-svc-tmdmh Mar 2 00:16:23.478: INFO: Got endpoints: latency-svc-tmdmh [100.222081ms] Mar 2 00:16:23.481: INFO: Created: latency-svc-87q7h Mar 2 00:16:23.484: INFO: Got endpoints: latency-svc-87q7h [98.868152ms] Mar 2 00:16:23.485: INFO: Created: latency-svc-j7snj Mar 2 00:16:23.489: INFO: Got endpoints: latency-svc-j7snj [95.5323ms] Mar 2 00:16:23.491: INFO: Created: latency-svc-lqswl Mar 2 00:16:23.496: INFO: Got endpoints: latency-svc-lqswl [95.545675ms] Mar 2 00:16:23.498: INFO: Created: latency-svc-5qc9k Mar 2 00:16:23.502: INFO: Got endpoints: latency-svc-5qc9k [95.177988ms] Mar 2 00:16:23.504: INFO: Created: latency-svc-vg2cq Mar 2 00:16:23.510: INFO: Got endpoints: latency-svc-vg2cq [93.174766ms] Mar 2 00:16:23.510: INFO: Created: latency-svc-bxzbk Mar 2 00:16:23.521: INFO: Got endpoints: latency-svc-bxzbk [100.245877ms] Mar 2 00:16:23.521: INFO: Created: latency-svc-f6rtp Mar 2 00:16:23.537: INFO: Got endpoints: latency-svc-f6rtp [108.592759ms] Mar 2 00:16:23.539: INFO: Created: latency-svc-lrpqj Mar 2 00:16:23.546: INFO: Created: latency-svc-ccckd Mar 2 00:16:23.554: INFO: Got endpoints: latency-svc-lrpqj [120.477937ms] Mar 2 00:16:23.558: INFO: Created: latency-svc-6zgs6 Mar 2 00:16:23.561: INFO: Created: latency-svc-hrjcx Mar 2 00:16:23.565: INFO: Created: latency-svc-pz89n Mar 2 00:16:23.569: INFO: Created: latency-svc-l46m5 Mar 2 00:16:23.573: INFO: Created: latency-svc-7jl44 Mar 2 00:16:23.577: INFO: Created: latency-svc-5k7x5 Mar 2 00:16:23.581: INFO: Created: latency-svc-gwpnp Mar 2 00:16:23.584: INFO: Created: latency-svc-cmp2b Mar 2 00:16:23.589: INFO: Created: latency-svc-h9rc6 Mar 2 00:16:23.594: INFO: Created: latency-svc-k2w64 Mar 2 00:16:23.601: INFO: Got endpoints: latency-svc-ccckd [161.136257ms] Mar 2 00:16:23.603: INFO: Created: latency-svc-wgdw9 Mar 2 00:16:23.607: INFO: Created: latency-svc-9bbx7 Mar 2 00:16:23.611: INFO: Created: latency-svc-4r7p7 Mar 2 00:16:23.615: INFO: Created: latency-svc-g8phr Mar 2 00:16:23.625: INFO: Created: latency-svc-7vkl8 Mar 2 00:16:23.646: INFO: Got endpoints: latency-svc-6zgs6 [200.997243ms] Mar 2 00:16:23.652: INFO: Created: latency-svc-7nwj4 Mar 2 00:16:23.696: INFO: Got endpoints: latency-svc-hrjcx [242.618159ms] Mar 2 00:16:23.703: INFO: Created: latency-svc-kx8bp Mar 2 00:16:23.745: INFO: Got endpoints: latency-svc-pz89n [284.045469ms] Mar 2 00:16:23.753: INFO: Created: latency-svc-gq7mf Mar 2 00:16:23.795: INFO: Got endpoints: latency-svc-l46m5 [328.849847ms] Mar 2 00:16:23.801: INFO: Created: latency-svc-kr7dh Mar 2 00:16:23.845: INFO: Got endpoints: latency-svc-7jl44 [371.720145ms] Mar 2 00:16:23.850: INFO: Created: latency-svc-8bqmt Mar 2 00:16:23.897: INFO: Got endpoints: latency-svc-5k7x5 [418.991125ms] Mar 2 00:16:23.903: INFO: Created: latency-svc-29vgt Mar 2 00:16:23.945: INFO: Got endpoints: latency-svc-gwpnp [461.406358ms] Mar 2 00:16:23.956: INFO: Created: latency-svc-pchhb Mar 2 00:16:23.996: INFO: Got endpoints: latency-svc-cmp2b [506.651664ms] Mar 2 00:16:24.002: INFO: Created: latency-svc-c4k5g Mar 2 00:16:24.046: INFO: Got endpoints: latency-svc-h9rc6 [549.715178ms] Mar 2 00:16:24.052: INFO: Created: latency-svc-ktj79 Mar 2 00:16:24.095: INFO: Got endpoints: 
latency-svc-k2w64 [592.369968ms] Mar 2 00:16:24.100: INFO: Created: latency-svc-fwfvt Mar 2 00:16:24.146: INFO: Got endpoints: latency-svc-wgdw9 [636.651424ms] Mar 2 00:16:24.152: INFO: Created: latency-svc-gmnxb Mar 2 00:16:24.196: INFO: Got endpoints: latency-svc-9bbx7 [674.329627ms] Mar 2 00:16:24.201: INFO: Created: latency-svc-gdljt Mar 2 00:16:24.245: INFO: Got endpoints: latency-svc-4r7p7 [707.668716ms] Mar 2 00:16:24.250: INFO: Created: latency-svc-9dfxf Mar 2 00:16:24.296: INFO: Got endpoints: latency-svc-g8phr [741.987161ms] Mar 2 00:16:24.301: INFO: Created: latency-svc-52wp8 Mar 2 00:16:24.345: INFO: Got endpoints: latency-svc-7vkl8 [744.372078ms] Mar 2 00:16:24.351: INFO: Created: latency-svc-kpdtb Mar 2 00:16:24.396: INFO: Got endpoints: latency-svc-7nwj4 [749.871336ms] Mar 2 00:16:24.402: INFO: Created: latency-svc-zsmgj Mar 2 00:16:24.445: INFO: Got endpoints: latency-svc-kx8bp [749.317014ms] Mar 2 00:16:24.451: INFO: Created: latency-svc-ftgjh Mar 2 00:16:24.495: INFO: Got endpoints: latency-svc-gq7mf [750.418856ms] Mar 2 00:16:24.501: INFO: Created: latency-svc-wjb9v Mar 2 00:16:24.545: INFO: Got endpoints: latency-svc-kr7dh [749.590734ms] Mar 2 00:16:24.550: INFO: Created: latency-svc-5hpdn Mar 2 00:16:24.597: INFO: Got endpoints: latency-svc-8bqmt [752.262205ms] Mar 2 00:16:24.604: INFO: Created: latency-svc-mj2z7 Mar 2 00:16:24.645: INFO: Got endpoints: latency-svc-29vgt [747.75523ms] Mar 2 00:16:24.651: INFO: Created: latency-svc-tbsnp Mar 2 00:16:24.695: INFO: Got endpoints: latency-svc-pchhb [749.593269ms] Mar 2 00:16:24.701: INFO: Created: latency-svc-x4bmp Mar 2 00:16:24.746: INFO: Got endpoints: latency-svc-c4k5g [749.928946ms] Mar 2 00:16:24.751: INFO: Created: latency-svc-mnxrx Mar 2 00:16:24.796: INFO: Got endpoints: latency-svc-ktj79 [750.597304ms] Mar 2 00:16:24.803: INFO: Created: latency-svc-zpczk Mar 2 00:16:24.845: INFO: Got endpoints: latency-svc-fwfvt [750.556538ms] Mar 2 00:16:24.857: INFO: Created: latency-svc-gl4tf Mar 2 00:16:24.894: INFO: Got endpoints: latency-svc-gmnxb [747.919112ms] Mar 2 00:16:24.900: INFO: Created: latency-svc-6tr46 Mar 2 00:16:24.947: INFO: Got endpoints: latency-svc-gdljt [751.212502ms] Mar 2 00:16:24.953: INFO: Created: latency-svc-tjsfr Mar 2 00:16:24.997: INFO: Got endpoints: latency-svc-9dfxf [752.499876ms] Mar 2 00:16:25.004: INFO: Created: latency-svc-66tmr Mar 2 00:16:25.046: INFO: Got endpoints: latency-svc-52wp8 [749.972319ms] Mar 2 00:16:25.051: INFO: Created: latency-svc-55948 Mar 2 00:16:25.095: INFO: Got endpoints: latency-svc-kpdtb [749.74152ms] Mar 2 00:16:25.101: INFO: Created: latency-svc-dxgxd Mar 2 00:16:25.146: INFO: Got endpoints: latency-svc-zsmgj [750.344466ms] Mar 2 00:16:25.153: INFO: Created: latency-svc-5mhnf Mar 2 00:16:25.197: INFO: Got endpoints: latency-svc-ftgjh [751.322682ms] Mar 2 00:16:25.203: INFO: Created: latency-svc-vg256 Mar 2 00:16:25.245: INFO: Got endpoints: latency-svc-wjb9v [749.539768ms] Mar 2 00:16:25.251: INFO: Created: latency-svc-ll7wl Mar 2 00:16:25.295: INFO: Got endpoints: latency-svc-5hpdn [750.079012ms] Mar 2 00:16:25.302: INFO: Created: latency-svc-gs96c Mar 2 00:16:25.345: INFO: Got endpoints: latency-svc-mj2z7 [747.777904ms] Mar 2 00:16:25.351: INFO: Created: latency-svc-5qmgb Mar 2 00:16:25.396: INFO: Got endpoints: latency-svc-tbsnp [750.497507ms] Mar 2 00:16:25.402: INFO: Created: latency-svc-vp2vz Mar 2 00:16:25.445: INFO: Got endpoints: latency-svc-x4bmp [749.903559ms] Mar 2 00:16:25.451: INFO: Created: latency-svc-9ztpm Mar 2 00:16:25.496: INFO: Got endpoints: 
latency-svc-mnxrx [750.13057ms] Mar 2 00:16:25.501: INFO: Created: latency-svc-6q9h5 Mar 2 00:16:25.546: INFO: Got endpoints: latency-svc-zpczk [749.460429ms] Mar 2 00:16:25.552: INFO: Created: latency-svc-bhvnd Mar 2 00:16:25.596: INFO: Got endpoints: latency-svc-gl4tf [750.568572ms] Mar 2 00:16:25.603: INFO: Created: latency-svc-67cl9 Mar 2 00:16:25.648: INFO: Got endpoints: latency-svc-6tr46 [753.293203ms] Mar 2 00:16:25.659: INFO: Created: latency-svc-8hj99 Mar 2 00:16:25.696: INFO: Got endpoints: latency-svc-tjsfr [748.748007ms] Mar 2 00:16:25.709: INFO: Created: latency-svc-rmrz4 Mar 2 00:16:25.747: INFO: Got endpoints: latency-svc-66tmr [750.150709ms] Mar 2 00:16:25.768: INFO: Created: latency-svc-bmv4l Mar 2 00:16:25.809: INFO: Got endpoints: latency-svc-55948 [763.477534ms] Mar 2 00:16:25.839: INFO: Created: latency-svc-vjkkl Mar 2 00:16:25.855: INFO: Got endpoints: latency-svc-dxgxd [760.122495ms] Mar 2 00:16:25.871: INFO: Created: latency-svc-npdb4 Mar 2 00:16:25.898: INFO: Got endpoints: latency-svc-5mhnf [751.191014ms] Mar 2 00:16:25.909: INFO: Created: latency-svc-f8d6n Mar 2 00:16:25.950: INFO: Got endpoints: latency-svc-vg256 [753.482242ms] Mar 2 00:16:25.961: INFO: Created: latency-svc-hsq78 Mar 2 00:16:25.998: INFO: Got endpoints: latency-svc-ll7wl [752.957737ms] Mar 2 00:16:26.009: INFO: Created: latency-svc-fjgj4 Mar 2 00:16:26.045: INFO: Got endpoints: latency-svc-gs96c [750.017367ms] Mar 2 00:16:26.054: INFO: Created: latency-svc-fh9pq Mar 2 00:16:26.094: INFO: Got endpoints: latency-svc-5qmgb [749.696317ms] Mar 2 00:16:26.101: INFO: Created: latency-svc-q4wdx Mar 2 00:16:26.148: INFO: Got endpoints: latency-svc-vp2vz [752.800789ms] Mar 2 00:16:26.157: INFO: Created: latency-svc-jfbrn Mar 2 00:16:26.195: INFO: Got endpoints: latency-svc-9ztpm [749.959927ms] Mar 2 00:16:26.201: INFO: Created: latency-svc-52jsf Mar 2 00:16:26.246: INFO: Got endpoints: latency-svc-6q9h5 [750.020191ms] Mar 2 00:16:26.251: INFO: Created: latency-svc-bzg4b Mar 2 00:16:26.296: INFO: Got endpoints: latency-svc-bhvnd [749.973233ms] Mar 2 00:16:26.302: INFO: Created: latency-svc-bj97b Mar 2 00:16:26.346: INFO: Got endpoints: latency-svc-67cl9 [750.507597ms] Mar 2 00:16:26.359: INFO: Created: latency-svc-srjck Mar 2 00:16:26.395: INFO: Got endpoints: latency-svc-8hj99 [745.46191ms] Mar 2 00:16:26.402: INFO: Created: latency-svc-s4p4f Mar 2 00:16:26.446: INFO: Got endpoints: latency-svc-rmrz4 [750.482239ms] Mar 2 00:16:26.452: INFO: Created: latency-svc-gxtx9 Mar 2 00:16:26.495: INFO: Got endpoints: latency-svc-bmv4l [747.144584ms] Mar 2 00:16:26.500: INFO: Created: latency-svc-rtdn2 Mar 2 00:16:26.545: INFO: Got endpoints: latency-svc-vjkkl [735.986577ms] Mar 2 00:16:26.550: INFO: Created: latency-svc-ld9mh Mar 2 00:16:26.595: INFO: Got endpoints: latency-svc-npdb4 [739.390157ms] Mar 2 00:16:26.601: INFO: Created: latency-svc-mr8r7 Mar 2 00:16:26.647: INFO: Got endpoints: latency-svc-f8d6n [749.812328ms] Mar 2 00:16:26.656: INFO: Created: latency-svc-9kt4r Mar 2 00:16:26.699: INFO: Got endpoints: latency-svc-hsq78 [748.236517ms] Mar 2 00:16:26.712: INFO: Created: latency-svc-7zfbs Mar 2 00:16:26.773: INFO: Got endpoints: latency-svc-fjgj4 [775.2499ms] Mar 2 00:16:26.805: INFO: Got endpoints: latency-svc-fh9pq [760.082331ms] Mar 2 00:16:26.811: INFO: Created: latency-svc-tlxwj Mar 2 00:16:26.828: INFO: Created: latency-svc-j9pm5 Mar 2 00:16:26.853: INFO: Got endpoints: latency-svc-q4wdx [758.461646ms] Mar 2 00:16:26.870: INFO: Created: latency-svc-cd8sd Mar 2 00:16:26.896: INFO: Got endpoints: 
latency-svc-jfbrn [747.184582ms] Mar 2 00:16:26.909: INFO: Created: latency-svc-h8z7g Mar 2 00:16:26.955: INFO: Got endpoints: latency-svc-52jsf [759.814093ms] Mar 2 00:16:26.972: INFO: Created: latency-svc-lsmc2 Mar 2 00:16:27.003: INFO: Got endpoints: latency-svc-bzg4b [757.432893ms] Mar 2 00:16:27.022: INFO: Created: latency-svc-jtlnj Mar 2 00:16:27.051: INFO: Got endpoints: latency-svc-bj97b [754.864138ms] Mar 2 00:16:27.069: INFO: Created: latency-svc-ln2mh Mar 2 00:16:27.103: INFO: Got endpoints: latency-svc-srjck [756.887057ms] Mar 2 00:16:27.130: INFO: Created: latency-svc-l2kx9 Mar 2 00:16:27.156: INFO: Got endpoints: latency-svc-s4p4f [760.837545ms] Mar 2 00:16:27.172: INFO: Created: latency-svc-xv29g Mar 2 00:16:27.199: INFO: Got endpoints: latency-svc-gxtx9 [752.160647ms] Mar 2 00:16:27.215: INFO: Created: latency-svc-gn8dt Mar 2 00:16:27.250: INFO: Got endpoints: latency-svc-rtdn2 [755.671981ms] Mar 2 00:16:27.260: INFO: Created: latency-svc-z6mm8 Mar 2 00:16:27.296: INFO: Got endpoints: latency-svc-ld9mh [750.503181ms] Mar 2 00:16:27.310: INFO: Created: latency-svc-277zg Mar 2 00:16:27.349: INFO: Got endpoints: latency-svc-mr8r7 [754.396851ms] Mar 2 00:16:27.364: INFO: Created: latency-svc-w7ccv Mar 2 00:16:27.396: INFO: Got endpoints: latency-svc-9kt4r [748.799196ms] Mar 2 00:16:27.404: INFO: Created: latency-svc-bcwmj Mar 2 00:16:27.495: INFO: Got endpoints: latency-svc-7zfbs [796.247515ms] Mar 2 00:16:27.502: INFO: Created: latency-svc-rztb5 Mar 2 00:16:27.545: INFO: Got endpoints: latency-svc-tlxwj [772.570816ms] Mar 2 00:16:27.551: INFO: Created: latency-svc-pdz5w Mar 2 00:16:27.596: INFO: Got endpoints: latency-svc-j9pm5 [790.558627ms] Mar 2 00:16:27.601: INFO: Created: latency-svc-khlrn Mar 2 00:16:27.645: INFO: Got endpoints: latency-svc-cd8sd [791.844578ms] Mar 2 00:16:27.651: INFO: Created: latency-svc-rv7z8 Mar 2 00:16:27.695: INFO: Got endpoints: latency-svc-h8z7g [798.712604ms] Mar 2 00:16:27.701: INFO: Created: latency-svc-dvb7b Mar 2 00:16:27.745: INFO: Got endpoints: latency-svc-lsmc2 [790.635975ms] Mar 2 00:16:27.751: INFO: Created: latency-svc-xnftf Mar 2 00:16:27.796: INFO: Got endpoints: latency-svc-jtlnj [792.68336ms] Mar 2 00:16:27.803: INFO: Created: latency-svc-ql8kh Mar 2 00:16:27.846: INFO: Got endpoints: latency-svc-ln2mh [795.277905ms] Mar 2 00:16:27.852: INFO: Created: latency-svc-4kqs7 Mar 2 00:16:27.948: INFO: Got endpoints: latency-svc-l2kx9 [844.440376ms] Mar 2 00:16:27.954: INFO: Created: latency-svc-xmd5g Mar 2 00:16:27.995: INFO: Got endpoints: latency-svc-xv29g [838.811481ms] Mar 2 00:16:28.002: INFO: Created: latency-svc-rb96f Mar 2 00:16:28.046: INFO: Got endpoints: latency-svc-gn8dt [847.91476ms] Mar 2 00:16:28.053: INFO: Created: latency-svc-jqbcm Mar 2 00:16:28.096: INFO: Got endpoints: latency-svc-z6mm8 [845.357035ms] Mar 2 00:16:28.103: INFO: Created: latency-svc-lgpj4 Mar 2 00:16:28.150: INFO: Got endpoints: latency-svc-277zg [853.979261ms] Mar 2 00:16:28.159: INFO: Created: latency-svc-zvxmw Mar 2 00:16:28.196: INFO: Got endpoints: latency-svc-w7ccv [847.014021ms] Mar 2 00:16:28.208: INFO: Created: latency-svc-j2pvb Mar 2 00:16:28.247: INFO: Got endpoints: latency-svc-bcwmj [850.329113ms] Mar 2 00:16:28.272: INFO: Created: latency-svc-fxb6p Mar 2 00:16:28.295: INFO: Got endpoints: latency-svc-rztb5 [799.667126ms] Mar 2 00:16:28.301: INFO: Created: latency-svc-4rhn4 Mar 2 00:16:28.345: INFO: Got endpoints: latency-svc-pdz5w [799.486764ms] Mar 2 00:16:28.352: INFO: Created: latency-svc-vlmnz Mar 2 00:16:28.396: INFO: Got endpoints: 
latency-svc-khlrn [800.472946ms] Mar 2 00:16:28.404: INFO: Created: latency-svc-2gnpb Mar 2 00:16:28.446: INFO: Got endpoints: latency-svc-rv7z8 [800.900899ms] Mar 2 00:16:28.458: INFO: Created: latency-svc-z99dw Mar 2 00:16:28.495: INFO: Got endpoints: latency-svc-dvb7b [800.188246ms] Mar 2 00:16:28.500: INFO: Created: latency-svc-xbwzk Mar 2 00:16:28.544: INFO: Got endpoints: latency-svc-xnftf [798.820028ms] Mar 2 00:16:28.550: INFO: Created: latency-svc-zqthw Mar 2 00:16:28.595: INFO: Got endpoints: latency-svc-ql8kh [799.296834ms] Mar 2 00:16:28.601: INFO: Created: latency-svc-p8n4r Mar 2 00:16:28.645: INFO: Got endpoints: latency-svc-4kqs7 [798.779543ms] Mar 2 00:16:28.650: INFO: Created: latency-svc-gqr42 Mar 2 00:16:28.696: INFO: Got endpoints: latency-svc-xmd5g [748.14112ms] Mar 2 00:16:28.702: INFO: Created: latency-svc-qbswc Mar 2 00:16:28.768: INFO: Got endpoints: latency-svc-rb96f [772.741843ms] Mar 2 00:16:28.834: INFO: Got endpoints: latency-svc-jqbcm [787.034891ms] Mar 2 00:16:28.859: INFO: Created: latency-svc-zwzh5 Mar 2 00:16:28.870: INFO: Got endpoints: latency-svc-lgpj4 [774.259313ms] Mar 2 00:16:28.881: INFO: Created: latency-svc-67q6d Mar 2 00:16:28.897: INFO: Created: latency-svc-bbv4d Mar 2 00:16:28.900: INFO: Got endpoints: latency-svc-zvxmw [749.934053ms] Mar 2 00:16:28.907: INFO: Created: latency-svc-rdfm2 Mar 2 00:16:28.952: INFO: Got endpoints: latency-svc-j2pvb [755.404166ms] Mar 2 00:16:28.962: INFO: Created: latency-svc-26rzk Mar 2 00:16:29.045: INFO: Got endpoints: latency-svc-fxb6p [798.413278ms] Mar 2 00:16:29.051: INFO: Created: latency-svc-9jbgw Mar 2 00:16:29.096: INFO: Got endpoints: latency-svc-4rhn4 [801.010899ms] Mar 2 00:16:29.102: INFO: Created: latency-svc-xqjqf Mar 2 00:16:29.145: INFO: Got endpoints: latency-svc-vlmnz [800.018445ms] Mar 2 00:16:29.151: INFO: Created: latency-svc-qllh4 Mar 2 00:16:29.196: INFO: Got endpoints: latency-svc-2gnpb [800.047971ms] Mar 2 00:16:29.202: INFO: Created: latency-svc-zff6d Mar 2 00:16:29.245: INFO: Got endpoints: latency-svc-z99dw [798.832123ms] Mar 2 00:16:29.251: INFO: Created: latency-svc-n7lq6 Mar 2 00:16:29.296: INFO: Got endpoints: latency-svc-xbwzk [800.609878ms] Mar 2 00:16:29.301: INFO: Created: latency-svc-k5dwx Mar 2 00:16:29.346: INFO: Got endpoints: latency-svc-zqthw [801.35356ms] Mar 2 00:16:29.351: INFO: Created: latency-svc-7mk8n Mar 2 00:16:29.396: INFO: Got endpoints: latency-svc-p8n4r [800.607903ms] Mar 2 00:16:29.402: INFO: Created: latency-svc-k5v9t Mar 2 00:16:29.445: INFO: Got endpoints: latency-svc-gqr42 [799.678831ms] Mar 2 00:16:29.451: INFO: Created: latency-svc-ptgq8 Mar 2 00:16:29.497: INFO: Got endpoints: latency-svc-qbswc [801.143641ms] Mar 2 00:16:29.503: INFO: Created: latency-svc-vnxq4 Mar 2 00:16:29.646: INFO: Got endpoints: latency-svc-zwzh5 [878.253397ms] Mar 2 00:16:29.652: INFO: Created: latency-svc-8tpnf Mar 2 00:16:29.696: INFO: Got endpoints: latency-svc-67q6d [862.256675ms] Mar 2 00:16:29.702: INFO: Created: latency-svc-52mg9 Mar 2 00:16:29.746: INFO: Got endpoints: latency-svc-bbv4d [875.845257ms] Mar 2 00:16:29.758: INFO: Created: latency-svc-99b2j Mar 2 00:16:29.796: INFO: Got endpoints: latency-svc-rdfm2 [896.540205ms] Mar 2 00:16:29.807: INFO: Created: latency-svc-s7z9g Mar 2 00:16:29.851: INFO: Got endpoints: latency-svc-26rzk [898.875669ms] Mar 2 00:16:29.860: INFO: Created: latency-svc-xzmw5 Mar 2 00:16:29.896: INFO: Got endpoints: latency-svc-9jbgw [851.154834ms] Mar 2 00:16:29.906: INFO: Created: latency-svc-pdw6m Mar 2 00:16:29.952: INFO: Got endpoints: 
latency-svc-xqjqf [855.942411ms] Mar 2 00:16:29.961: INFO: Created: latency-svc-zk8ln Mar 2 00:16:30.002: INFO: Got endpoints: latency-svc-qllh4 [856.928463ms] Mar 2 00:16:30.028: INFO: Created: latency-svc-m6jhp Mar 2 00:16:30.048: INFO: Got endpoints: latency-svc-zff6d [851.292906ms] Mar 2 00:16:30.059: INFO: Created: latency-svc-mjxgv Mar 2 00:16:30.097: INFO: Got endpoints: latency-svc-n7lq6 [852.482354ms] Mar 2 00:16:30.110: INFO: Created: latency-svc-5ncp9 Mar 2 00:16:30.146: INFO: Got endpoints: latency-svc-k5dwx [850.681115ms] Mar 2 00:16:30.157: INFO: Created: latency-svc-dsfpk Mar 2 00:16:30.195: INFO: Got endpoints: latency-svc-7mk8n [849.613609ms] Mar 2 00:16:30.203: INFO: Created: latency-svc-f69jc Mar 2 00:16:30.246: INFO: Got endpoints: latency-svc-k5v9t [850.03564ms] Mar 2 00:16:30.252: INFO: Created: latency-svc-8cnjx Mar 2 00:16:30.295: INFO: Got endpoints: latency-svc-ptgq8 [850.380525ms] Mar 2 00:16:30.302: INFO: Created: latency-svc-cwqmx Mar 2 00:16:30.346: INFO: Got endpoints: latency-svc-vnxq4 [848.685327ms] Mar 2 00:16:30.352: INFO: Created: latency-svc-vhtbl Mar 2 00:16:30.396: INFO: Got endpoints: latency-svc-8tpnf [749.393951ms] Mar 2 00:16:30.401: INFO: Created: latency-svc-9blfk Mar 2 00:16:30.447: INFO: Got endpoints: latency-svc-52mg9 [750.889891ms] Mar 2 00:16:30.453: INFO: Created: latency-svc-z9csl Mar 2 00:16:30.496: INFO: Got endpoints: latency-svc-99b2j [750.080034ms] Mar 2 00:16:30.502: INFO: Created: latency-svc-9dt9m Mar 2 00:16:30.546: INFO: Got endpoints: latency-svc-s7z9g [749.838846ms] Mar 2 00:16:30.552: INFO: Created: latency-svc-q7h2w Mar 2 00:16:30.595: INFO: Got endpoints: latency-svc-xzmw5 [744.207708ms] Mar 2 00:16:30.601: INFO: Created: latency-svc-zfc2b Mar 2 00:16:30.646: INFO: Got endpoints: latency-svc-pdw6m [749.476769ms] Mar 2 00:16:30.652: INFO: Created: latency-svc-zzwp2 Mar 2 00:16:30.697: INFO: Got endpoints: latency-svc-zk8ln [745.28343ms] Mar 2 00:16:30.704: INFO: Created: latency-svc-dxzdx Mar 2 00:16:30.747: INFO: Got endpoints: latency-svc-m6jhp [744.782198ms] Mar 2 00:16:30.754: INFO: Created: latency-svc-2hjw6 Mar 2 00:16:30.796: INFO: Got endpoints: latency-svc-mjxgv [748.231275ms] Mar 2 00:16:30.803: INFO: Created: latency-svc-wl6h5 Mar 2 00:16:30.846: INFO: Got endpoints: latency-svc-5ncp9 [747.840865ms] Mar 2 00:16:30.852: INFO: Created: latency-svc-kjmc2 Mar 2 00:16:30.896: INFO: Got endpoints: latency-svc-dsfpk [749.461029ms] Mar 2 00:16:30.907: INFO: Created: latency-svc-v4zcv Mar 2 00:16:30.946: INFO: Got endpoints: latency-svc-f69jc [751.155946ms] Mar 2 00:16:30.955: INFO: Created: latency-svc-fz2pq Mar 2 00:16:30.996: INFO: Got endpoints: latency-svc-8cnjx [750.073061ms] Mar 2 00:16:31.004: INFO: Created: latency-svc-kj86n Mar 2 00:16:31.047: INFO: Got endpoints: latency-svc-cwqmx [751.689088ms] Mar 2 00:16:31.054: INFO: Created: latency-svc-nsgjw Mar 2 00:16:31.096: INFO: Got endpoints: latency-svc-vhtbl [749.933987ms] Mar 2 00:16:31.103: INFO: Created: latency-svc-jjv8w Mar 2 00:16:31.147: INFO: Got endpoints: latency-svc-9blfk [751.223926ms] Mar 2 00:16:31.154: INFO: Created: latency-svc-qcrgw Mar 2 00:16:31.196: INFO: Got endpoints: latency-svc-z9csl [748.836355ms] Mar 2 00:16:31.203: INFO: Created: latency-svc-nwnj2 Mar 2 00:16:31.245: INFO: Got endpoints: latency-svc-9dt9m [748.68634ms] Mar 2 00:16:31.251: INFO: Created: latency-svc-xjbxs Mar 2 00:16:31.296: INFO: Got endpoints: latency-svc-q7h2w [749.804933ms] Mar 2 00:16:31.302: INFO: Created: latency-svc-8ftwr Mar 2 00:16:31.345: INFO: Got endpoints: 
latency-svc-zfc2b [750.667641ms] Mar 2 00:16:31.351: INFO: Created: latency-svc-6wlfw Mar 2 00:16:31.396: INFO: Got endpoints: latency-svc-zzwp2 [749.983191ms] Mar 2 00:16:31.446: INFO: Got endpoints: latency-svc-dxzdx [748.763216ms] Mar 2 00:16:31.495: INFO: Got endpoints: latency-svc-2hjw6 [748.513361ms] Mar 2 00:16:31.546: INFO: Got endpoints: latency-svc-wl6h5 [750.365227ms] Mar 2 00:16:31.596: INFO: Got endpoints: latency-svc-kjmc2 [750.206425ms] Mar 2 00:16:31.645: INFO: Got endpoints: latency-svc-v4zcv [749.152405ms] Mar 2 00:16:31.695: INFO: Got endpoints: latency-svc-fz2pq [748.076102ms] Mar 2 00:16:31.745: INFO: Got endpoints: latency-svc-kj86n [749.2547ms] Mar 2 00:16:31.796: INFO: Got endpoints: latency-svc-nsgjw [749.658637ms] Mar 2 00:16:31.845: INFO: Got endpoints: latency-svc-jjv8w [748.601029ms] Mar 2 00:16:31.895: INFO: Got endpoints: latency-svc-qcrgw [748.138221ms] Mar 2 00:16:31.946: INFO: Got endpoints: latency-svc-nwnj2 [750.736172ms] Mar 2 00:16:31.995: INFO: Got endpoints: latency-svc-xjbxs [750.082631ms] Mar 2 00:16:32.046: INFO: Got endpoints: latency-svc-8ftwr [749.974285ms] Mar 2 00:16:32.095: INFO: Got endpoints: latency-svc-6wlfw [749.104705ms] Mar 2 00:16:32.095: INFO: Latencies: [13.227896ms 17.817598ms 29.47366ms 32.904382ms 40.713434ms 46.285249ms 54.20454ms 63.024321ms 68.812897ms 76.547647ms 83.012498ms 89.833644ms 93.174766ms 95.177988ms 95.5323ms 95.545675ms 98.484533ms 98.868152ms 100.222081ms 100.245877ms 100.96458ms 102.528408ms 102.583883ms 103.851591ms 104.066348ms 104.417154ms 104.533124ms 105.438933ms 105.498496ms 108.462081ms 108.518398ms 108.592759ms 111.984456ms 120.477937ms 161.136257ms 200.997243ms 242.618159ms 284.045469ms 328.849847ms 371.720145ms 418.991125ms 461.406358ms 506.651664ms 549.715178ms 592.369968ms 636.651424ms 674.329627ms 707.668716ms 735.986577ms 739.390157ms 741.987161ms 744.207708ms 744.372078ms 744.782198ms 745.28343ms 745.46191ms 747.144584ms 747.184582ms 747.75523ms 747.777904ms 747.840865ms 747.919112ms 748.076102ms 748.138221ms 748.14112ms 748.231275ms 748.236517ms 748.513361ms 748.601029ms 748.68634ms 748.748007ms 748.763216ms 748.799196ms 748.836355ms 749.104705ms 749.152405ms 749.2547ms 749.317014ms 749.393951ms 749.460429ms 749.461029ms 749.476769ms 749.539768ms 749.590734ms 749.593269ms 749.658637ms 749.696317ms 749.74152ms 749.804933ms 749.812328ms 749.838846ms 749.871336ms 749.903559ms 749.928946ms 749.933987ms 749.934053ms 749.959927ms 749.972319ms 749.973233ms 749.974285ms 749.983191ms 750.017367ms 750.020191ms 750.073061ms 750.079012ms 750.080034ms 750.082631ms 750.13057ms 750.150709ms 750.206425ms 750.344466ms 750.365227ms 750.418856ms 750.482239ms 750.497507ms 750.503181ms 750.507597ms 750.556538ms 750.568572ms 750.597304ms 750.667641ms 750.736172ms 750.889891ms 751.155946ms 751.191014ms 751.212502ms 751.223926ms 751.322682ms 751.689088ms 752.160647ms 752.262205ms 752.499876ms 752.800789ms 752.957737ms 753.293203ms 753.482242ms 754.396851ms 754.864138ms 755.404166ms 755.671981ms 756.887057ms 757.432893ms 758.461646ms 759.814093ms 760.082331ms 760.122495ms 760.837545ms 763.477534ms 772.570816ms 772.741843ms 774.259313ms 775.2499ms 787.034891ms 790.558627ms 790.635975ms 791.844578ms 792.68336ms 795.277905ms 796.247515ms 798.413278ms 798.712604ms 798.779543ms 798.820028ms 798.832123ms 799.296834ms 799.486764ms 799.667126ms 799.678831ms 800.018445ms 800.047971ms 800.188246ms 800.472946ms 800.607903ms 800.609878ms 800.900899ms 801.010899ms 801.143641ms 801.35356ms 838.811481ms 844.440376ms 845.357035ms 
847.014021ms 847.91476ms 848.685327ms 849.613609ms 850.03564ms 850.329113ms 850.380525ms 850.681115ms 851.154834ms 851.292906ms 852.482354ms 853.979261ms 855.942411ms 856.928463ms 862.256675ms 875.845257ms 878.253397ms 896.540205ms 898.875669ms] Mar 2 00:16:32.095: INFO: 50 %ile: 749.983191ms Mar 2 00:16:32.095: INFO: 90 %ile: 845.357035ms Mar 2 00:16:32.095: INFO: 99 %ile: 896.540205ms Mar 2 00:16:32.095: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:32.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3253" for this suite. • [SLOW TEST:90.341 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:30.209: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:15:30.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278" in namespace "downward-api-3506" to be "Succeeded or Failed" Mar 2 00:15:30.225: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 1.805897ms Mar 2 00:15:32.228: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004610399s Mar 2 00:15:34.231: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007264616s Mar 2 00:15:36.234: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010584798s Mar 2 00:15:38.237: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014100442s Mar 2 00:15:40.240: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016451016s Mar 2 00:15:42.243: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.020143639s Mar 2 00:15:44.246: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02300235s Mar 2 00:15:46.250: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 16.026774867s Mar 2 00:15:48.253: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 18.029725744s Mar 2 00:15:50.256: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 20.032581833s Mar 2 00:15:52.260: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 22.036388387s Mar 2 00:15:54.262: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 24.038552514s Mar 2 00:15:56.264: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 26.04102659s Mar 2 00:15:58.267: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 28.044052734s Mar 2 00:16:00.272: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 30.048297561s Mar 2 00:16:02.275: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 32.051831972s Mar 2 00:16:04.277: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 34.053972768s Mar 2 00:16:06.280: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 36.05691664s Mar 2 00:16:08.283: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 38.060179026s Mar 2 00:16:10.287: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 40.063426033s Mar 2 00:16:12.290: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 42.066338747s Mar 2 00:16:14.293: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 44.069828246s Mar 2 00:16:16.297: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 46.07382647s Mar 2 00:16:18.300: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 48.076319659s Mar 2 00:16:20.303: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 50.079980553s Mar 2 00:16:22.306: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 52.08311511s Mar 2 00:16:24.308: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 54.085057273s Mar 2 00:16:26.312: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.088795135s Mar 2 00:16:28.316: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 58.09226068s Mar 2 00:16:30.319: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.095625594s Mar 2 00:16:32.323: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m2.09975041s STEP: Saw pod success Mar 2 00:16:32.323: INFO: Pod "downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278" satisfied condition "Succeeded or Failed" Mar 2 00:16:32.325: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278 container client-container: STEP: delete the pod Mar 2 00:16:32.336: INFO: Waiting for pod downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278 to disappear Mar 2 00:16:32.340: INFO: Pod downwardapi-volume-d69a4a86-061e-4a81-a637-ec93bcb66278 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:32.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3506" for this suite. • [SLOW TEST:62.137 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:18.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test env composition Mar 2 00:16:18.541: INFO: Waiting up to 5m0s for pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e" in namespace "var-expansion-4249" to be "Succeeded or Failed" Mar 2 00:16:18.545: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739227ms Mar 2 00:16:20.548: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00681344s Mar 2 00:16:22.551: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010437317s Mar 2 00:16:24.555: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.013800828s Mar 2 00:16:26.558: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016714958s Mar 2 00:16:28.560: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018999553s Mar 2 00:16:30.563: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022465579s Mar 2 00:16:32.567: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026230314s Mar 2 00:16:34.571: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.029792273s STEP: Saw pod success Mar 2 00:16:34.571: INFO: Pod "var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e" satisfied condition "Succeeded or Failed" Mar 2 00:16:34.573: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e container dapi-container: STEP: delete the pod Mar 2 00:16:34.584: INFO: Waiting for pod var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e to disappear Mar 2 00:16:34.587: INFO: Pod var-expansion-e7106922-0f9b-4228-b0a5-077fe0c0404e no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:34.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4249" for this suite. • [SLOW TEST:16.072 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":48,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.631: INFO: >>> kubeConfig: /tmp/kubeconfig-446130155 STEP: Building a namespace api object, basename dns W0302 00:15:01.671156 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.671: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
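The var-expansion spec that just passed relies on Kubernetes' $(VAR_NAME) expansion: an env var defined later in a container's env list may reference earlier ones, and the kubelet substitutes the values before the container starts. A minimal sketch of such a pod follows; the variable names, values, and busybox image are placeholders rather than whatever the e2e framework generates for the real test, and only the container name dapi-container is taken from the log above.

    // Sketch: COMPOSED_VAR is built from two earlier env vars via $(VAR_NAME)
    // expansion, the behaviour checked by the var-expansion conformance spec.
    // Variable names, values and image are illustrative placeholders.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    // $(COMPOSED_VAR) in the command is also expanded by the kubelet.
                    Command: []string{"sh", "-c", "echo $(COMPOSED_VAR)"},
                    Env: []corev1.EnvVar{
                        {Name: "FOO", Value: "foo-value"},
                        {Name: "BAR", Value: "bar-value"},
                        // References to the earlier vars are resolved before start.
                        {Name: "COMPOSED_VAR", Value: "$(FOO);;$(BAR)"},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod.Spec.Containers[0].Env, "", "  ") // error ignored in this sketch
        fmt.Println(string(out))
    }

Creating a pod shaped like this and reading its log back is essentially what the conformance spec automates.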
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4213;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4213;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4213.svc;check="$$(dig +notcp +noall +answer +search 215.246.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.246.215_udp@PTR;check="$$(dig +tcp +noall +answer +search 215.246.43.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.43.246.215_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4213;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4213;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4213.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4213.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4213.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4213.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4213.svc;check="$$(dig +notcp +noall +answer +search 215.246.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.246.215_udp@PTR;check="$$(dig +tcp +noall +answer +search 215.246.43.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.43.246.215_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:16:29.788: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.790: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.793: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.799: INFO: Unable to read wheezy_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.804: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.808: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.811: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.827: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.830: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.833: INFO: Unable to read jessie_udp@dns-test-service.dns-4213 from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-4213 from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.839: INFO: Unable to read jessie_udp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.842: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.849: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.853: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4213.svc from pod dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98: the server could not find the requested resource (get pods dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98) Mar 2 00:16:29.867: INFO: Lookups using dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4213 wheezy_tcp@dns-test-service.dns-4213 wheezy_udp@dns-test-service.dns-4213.svc wheezy_tcp@dns-test-service.dns-4213.svc wheezy_udp@_http._tcp.dns-test-service.dns-4213.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4213.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4213 jessie_tcp@dns-test-service.dns-4213 jessie_udp@dns-test-service.dns-4213.svc jessie_tcp@dns-test-service.dns-4213.svc jessie_udp@_http._tcp.dns-test-service.dns-4213.svc jessie_tcp@_http._tcp.dns-test-service.dns-4213.svc] Mar 2 00:16:34.926: INFO: DNS probes using dns-4213/dns-test-f4a7d4ff-f4b9-4da8-8533-59a94a529c98 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:34.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4213" for this suite. • [SLOW TEST:93.355 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:20.555: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-8cfb826d-5c98-48ea-b95f-6e0551b26f9d STEP: Creating a pod to test consume secrets Mar 2 00:16:20.576: INFO: Waiting up to 5m0s for pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0" in namespace "secrets-4751" to be "Succeeded or Failed" Mar 2 00:16:20.578: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.790448ms Mar 2 00:16:22.581: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.004864091s Mar 2 00:16:24.584: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008089772s Mar 2 00:16:26.587: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011282179s Mar 2 00:16:28.590: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014109424s Mar 2 00:16:30.592: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016402585s Mar 2 00:16:32.595: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019360959s Mar 2 00:16:34.598: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.021789437s Mar 2 00:16:36.603: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.02702406s STEP: Saw pod success Mar 2 00:16:36.603: INFO: Pod "pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0" satisfied condition "Succeeded or Failed" Mar 2 00:16:36.609: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0 container secret-env-test: STEP: delete the pod Mar 2 00:16:36.630: INFO: Waiting for pod pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0 to disappear Mar 2 00:16:36.635: INFO: Pod pod-secrets-abc2a2c3-40f8-4436-8a6a-fc894d0163e0 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:36.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4751" for this suite. 
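The secrets spec completing above checks that a key from a Secret can be surfaced to a container as an environment variable through secretKeyRef. A rough sketch of that pod shape, assuming placeholder secret and key names (the e2e framework generates its own); only the container name secret-env-test is taken from the log:

    // Sketch: expose one key of a Secret as an environment variable, as checked by
    // "Secrets should be consumable from pods in env vars". Secret name, key and
    // env var name are placeholders.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "secret-env-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod.Spec, "", "  ") // error ignored in this sketch
        fmt.Println(string(out))
    }

The spec then reads the container's log (the output of env) and asserts the injected value is present.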
• [SLOW TEST:16.087 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:23.106: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-ce1bdcbe-317c-4cf8-929d-8811de45868f STEP: Creating a pod to test consume secrets Mar 2 00:16:23.125: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd" in namespace "projected-6498" to be "Succeeded or Failed" Mar 2 00:16:23.127: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.630885ms Mar 2 00:16:25.130: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004499611s Mar 2 00:16:27.137: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011639053s Mar 2 00:16:29.140: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014245992s Mar 2 00:16:31.145: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01949674s Mar 2 00:16:33.148: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023123765s Mar 2 00:16:35.154: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028787306s Mar 2 00:16:37.158: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.03300867s STEP: Saw pod success Mar 2 00:16:37.158: INFO: Pod "pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd" satisfied condition "Succeeded or Failed" Mar 2 00:16:37.161: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd container projected-secret-volume-test: STEP: delete the pod Mar 2 00:16:37.188: INFO: Waiting for pod pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd to disappear Mar 2 00:16:37.193: INFO: Pod pod-projected-secrets-acadb642-f7fd-4c78-9c5f-6f3bf32dc3fd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:37.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6498" for this suite. • [SLOW TEST:14.096 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.759: INFO: >>> kubeConfig: /tmp/kubeconfig-2095741199 STEP: Building a namespace api object, basename webhook W0302 00:15:02.071970 270 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.072: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
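The projected-secret spec that just passed mounts a Secret key into the container filesystem through a projected volume and verifies the file contents. A sketch under assumed names follows; the secret name, key, and mount path are placeholders, while the container name projected-secret-volume-test matches the log above.

    // Sketch: project one key of a Secret into a file under /projected-volume,
    // as exercised by "Projected secret should be consumable from pods in volume".
    // Secret name, key and paths are placeholders.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-example"},
                                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /projected-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-secret-volume",
                        MountPath: "/projected-volume",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod.Spec, "", "  ") // error ignored in this sketch
        fmt.Println(string(out))
    }

Projected volumes let secret, configMap, downwardAPI, and serviceAccountToken sources share a single mount point, which is the mechanism this spec exercises.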
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:15:02.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:15:04.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:06.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:08.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:10.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:12.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:14.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:16.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:18.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:20.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:22.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:24.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:26.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:28.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:30.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:32.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:34.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 
3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:36.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:38.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:40.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:42.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:44.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:46.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:48.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:50.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 2 00:15:52.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:54.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:56.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:15:58.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:00.658: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:02.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:04.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:06.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:08.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:10.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:12.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:14.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:16.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:18.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:20.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:22.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:24.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 15, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:16:27.668: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:37.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4524" for this suite. STEP: Destroying namespace "webhook-4524-markers" for this suite. 
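The steps above register a validating webhook through the AdmissionRegistration API and then verify that matching pod and configmap creations are denied. As a rough illustration of that registration step only, here is a minimal client-go sketch; the kubeconfig path, webhook path and CA bundle are placeholders, not the values the e2e framework actually generates, while the service name (e2e-test-webhook) and namespace (webhook-4524) come from the log above.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the suite uses per-node temp kubeconfigs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-deny" // hypothetical path served by the webhook pod

	webhookCfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pods-and-configmaps"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-unwanted-objects.example.com",
			// Only CREATE requests for pods and configmaps are sent to the webhook.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-4524",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<ca-bundle>"), // placeholder cert bundle
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	// Registering the configuration makes the API server call the webhook
	// service for every matching CREATE request from then on.
	_, err = client.AdmissionregistrationV1().
		ValidatingWebhookConfigurations().
		Create(context.TODO(), webhookCfg, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}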
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:96.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:20.805: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod Mar 2 00:16:20.816: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:38.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3828" for this suite. 
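The InitContainer test above creates a pod with restartPolicy Always and checks that its init containers run to completion, in order, before the regular container starts. A minimal sketch of that pod shape using the core/v1 Go types; the images, commands and pod name are illustrative placeholders rather than what the e2e framework deploys, while the namespace matches the log above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod returns a pod whose two init containers must both exit
// successfully before the main container is started. With RestartPolicyAlways
// the kubelet keeps the main container running after the init phase completes.
func initContainerPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pod-init-demo", // illustrative name
			Namespace: namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run-1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
}

func main() {
	pod := initContainerPod("init-container-3828")
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}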
• [SLOW TEST:17.894 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":4,"skipped":91,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:24.975: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test substitution in container's args Mar 2 00:16:24.993: INFO: Waiting up to 5m0s for pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4" in namespace "var-expansion-6892" to be "Succeeded or Failed" Mar 2 00:16:24.997: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282649ms Mar 2 00:16:27.009: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015911942s Mar 2 00:16:29.012: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018983753s Mar 2 00:16:31.016: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022948739s Mar 2 00:16:33.020: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026787456s Mar 2 00:16:35.024: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031356789s Mar 2 00:16:37.028: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03542424s Mar 2 00:16:39.033: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.03954911s STEP: Saw pod success Mar 2 00:16:39.033: INFO: Pod "var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4" satisfied condition "Succeeded or Failed" Mar 2 00:16:39.036: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4 container dapi-container: STEP: delete the pod Mar 2 00:16:39.069: INFO: Waiting for pod var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4 to disappear Mar 2 00:16:39.074: INFO: Pod var-expansion-6a453466-cead-4f3d-a912-ed1a043d8ac4 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:39.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6892" for this suite. 
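The Variable Expansion test above checks that $(VAR) references in a container's args are substituted by the kubelet from the container's environment before the command runs. A minimal sketch of the pattern; the image, env var name and echoed message are illustrative, not the exact values used by the conformance test, while the namespace and container name come from the log above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo", Namespace: "var-expansion-6892"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c"},
				// "$(MESSAGE)" is expanded by the kubelet from the container's
				// env before the shell runs, so the container echoes the value
				// of MESSAGE rather than the literal string.
				Args: []string{"echo $(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE", // hypothetical variable name
					Value: "hello from variable expansion",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}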
• [SLOW TEST:14.110 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:06.610: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327 STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication Mar 2 00:16:06.923: INFO: role binding crd-conversion-webhook-auth-reader already exists STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 2 00:16:06.936: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 2 00:16:08.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:10.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:12.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:14.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:16.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:18.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:20.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:22.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:24.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:26.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:28.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:30.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:32.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:16:35.963: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:35.966: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327 STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:39.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-923" for this suite. 
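The CustomResourceConversionWebhook test above serves two versions of a CRD and routes conversions between them through the webhook deployment it just waited for; creating a v1 custom resource and reading it back as v2 exercises that path. A rough sketch of the conversion stanza on such a CRD, using the apiextensions.k8s.io/v1 Go types; the group, kind, path and port are placeholders, while the service name and namespace match the log above.

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/crdconvert" // hypothetical path on the conversion webhook service
	port := int32(9443)   // hypothetical service port

	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}

	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.stable.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "stable.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget",
			},
			// Two served versions; only v1 is persisted, v2 is produced by conversion.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
			Conversion: &apiextensionsv1.CustomResourceConversion{
				Strategy: apiextensionsv1.WebhookConverter,
				Webhook: &apiextensionsv1.WebhookConversion{
					ClientConfig: &apiextensionsv1.WebhookClientConfig{
						Service: &apiextensionsv1.ServiceReference{
							Namespace: "crd-webhook-923",
							Name:      "e2e-test-crd-conversion-webhook",
							Path:      &path,
							Port:      &port,
						},
						// CABundle omitted here; the API server needs it to trust
						// the webhook's serving certificate in a real cluster.
					},
					ConversionReviewVersions: []string{"v1"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}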
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:32.475 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0} SS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:07.793: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 2 00:16:07.814: INFO: Waiting up to 5m0s for pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60" in namespace "emptydir-5585" to be "Succeeded or Failed" Mar 2 00:16:07.816: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 1.950432ms Mar 2 00:16:09.819: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005180125s Mar 2 00:16:11.822: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008220368s Mar 2 00:16:13.825: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011185679s Mar 2 00:16:15.828: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014167943s Mar 2 00:16:17.831: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017584461s Mar 2 00:16:19.835: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021300188s Mar 2 00:16:21.839: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02513439s Mar 2 00:16:23.842: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028074384s Mar 2 00:16:25.854: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.040117066s Mar 2 00:16:27.857: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 20.042870988s Mar 2 00:16:29.863: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 22.049287271s Mar 2 00:16:31.867: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 24.053339378s Mar 2 00:16:33.870: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 26.056508487s Mar 2 00:16:35.874: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 28.060424866s Mar 2 00:16:37.877: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Pending", Reason="", readiness=false. Elapsed: 30.063724845s Mar 2 00:16:39.883: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.06916759s STEP: Saw pod success Mar 2 00:16:39.883: INFO: Pod "pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60" satisfied condition "Succeeded or Failed" Mar 2 00:16:39.887: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60 container test-container: STEP: delete the pod Mar 2 00:16:39.911: INFO: Waiting for pod pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60 to disappear Mar 2 00:16:39.918: INFO: Pod pod-35bfb74c-cba7-424c-9ff9-c54bd3a09b60 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:39.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5585" for this suite. 
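The EmptyDir test above mounts an emptyDir volume on the default medium (node disk rather than tmpfs), writes a file as root with mode 0666, and asserts on the permissions reported by the container. A minimal sketch of that pod shape; the image, mount path and shell commands are illustrative, while the namespace and container name match the log above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo", Namespace: "emptydir-5585"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default medium,
				// i.e. backed by the node's filesystem rather than memory.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c"},
				// Create a file with mode 0666 and print the resulting
				// permissions so the test output can be checked.
				Args:         []string{"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}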
• [SLOW TEST:32.141 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":90,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:34.596: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:34.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 2 00:16:36.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1948352521 --namespace=crd-publish-openapi-5947 --namespace=crd-publish-openapi-5947 create -f -' Mar 2 00:16:36.951: INFO: stderr: "" Mar 2 00:16:36.951: INFO: stdout: "e2e-test-crd-publish-openapi-8147-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 2 00:16:36.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1948352521 --namespace=crd-publish-openapi-5947 --namespace=crd-publish-openapi-5947 delete e2e-test-crd-publish-openapi-8147-crds test-cr' Mar 2 00:16:36.997: INFO: stderr: "" Mar 2 00:16:36.997: INFO: stdout: "e2e-test-crd-publish-openapi-8147-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 2 00:16:36.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1948352521 --namespace=crd-publish-openapi-5947 --namespace=crd-publish-openapi-5947 apply -f -' Mar 2 00:16:37.628: INFO: stderr: "" Mar 2 00:16:37.628: INFO: stdout: "e2e-test-crd-publish-openapi-8147-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 2 00:16:37.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1948352521 --namespace=crd-publish-openapi-5947 --namespace=crd-publish-openapi-5947 delete e2e-test-crd-publish-openapi-8147-crds test-cr' Mar 2 00:16:37.676: INFO: stderr: "" Mar 2 00:16:37.676: INFO: stdout: "e2e-test-crd-publish-openapi-8147-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 2 00:16:37.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1948352521 --namespace=crd-publish-openapi-5947 explain e2e-test-crd-publish-openapi-8147-crds' Mar 2 00:16:37.780: INFO: stderr: "" Mar 2 00:16:37.780: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8147-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for 
Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:40.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5947" for this suite. • [SLOW TEST:5.894 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":3,"skipped":53,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.834: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename deployment W0302 00:15:02.430192 209 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.430: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
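For the CustomResourcePublishOpenAPI case above, the published schema marks an embedded object with x-kubernetes-preserve-unknown-fields, which is why client-side validation accepts arbitrary properties on create/apply and kubectl explain still describes the top-level fields. A rough sketch of such a version schema using the apiextensions.k8s.io/v1 Go types; the spec/status descriptions echo the explain output above, everything else is a placeholder.

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	preserve := true

	// Schema for one served version of the CRD: spec is an embedded object
	// whose unknown properties are preserved instead of being pruned.
	schema := apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type: "object",
			Properties: map[string]apiextensionsv1.JSONSchemaProps{
				"spec": {
					Type:                   "object",
					Description:            "Specification of Waldo",
					XEmbeddedResource:      true,
					XPreserveUnknownFields: &preserve,
				},
				"status": {
					Type:        "object",
					Description: "Status of Waldo",
				},
			},
		},
	}
	out, _ := json.MarshalIndent(schema, "", "  ")
	fmt.Println(string(out))
}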
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:15:02.433: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 2 00:15:02.494: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 2 00:15:07.497: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 2 00:16:11.504: INFO: Creating deployment "test-rolling-update-deployment" Mar 2 00:16:11.509: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 2 00:16:11.513: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 2 00:16:13.518: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 2 00:16:13.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:15.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:17.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 
16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:19.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:21.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:23.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:25.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:27.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:29.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:31.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:33.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 2 00:16:35.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:37.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:39.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 11, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-8656fc4b57\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:41.523: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Mar 2 00:16:41.530: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9669 c0b098b9-8619-4a3c-a48d-8687ca9ca450 7051 1 2023-03-02 00:16:11 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-03-02 00:16:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00182e488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-02 00:16:11 +0000 UTC,LastTransitionTime:2023-03-02 00:16:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-8656fc4b57" has successfully progressed.,LastUpdateTime:2023-03-02 00:16:39 +0000 UTC,LastTransitionTime:2023-03-02 00:16:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 2 00:16:41.532: INFO: New ReplicaSet "test-rolling-update-deployment-8656fc4b57" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-8656fc4b57 deployment-9669 a6e51338-be16-45bb-8f8c-3e71053695db 7036 1 2023-03-02 00:16:11 +0000 UTC map[name:sample-pod pod-template-hash:8656fc4b57] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] 
[{apps/v1 Deployment test-rolling-update-deployment c0b098b9-8619-4a3c-a48d-8687ca9ca450 0xc00182e837 0xc00182e838}] [] [{k3s Update apps/v1 2023-03-02 00:16:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b098b9-8619-4a3c-a48d-8687ca9ca450\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:39 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 8656fc4b57,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:8656fc4b57] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00182e8f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:16:41.532: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 2 00:16:41.532: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9669 fd1d9632-675a-4ec3-b8e7-dfc2ce330ca0 7048 2 2023-03-02 00:15:02 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c0b098b9-8619-4a3c-a48d-8687ca9ca450 0xc00182e93f 0xc00182e950}] [] [{e2e.test Update apps/v1 2023-03-02 00:15:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:16:39 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b098b9-8619-4a3c-a48d-8687ca9ca450\"}":{}}},"f:spec":{"f:replicas":{}}} } {k3s Update apps/v1 2023-03-02 00:16:39 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00182ea18 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:16:41.535: INFO: Pod "test-rolling-update-deployment-8656fc4b57-hqj4l" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-8656fc4b57-hqj4l test-rolling-update-deployment-8656fc4b57- deployment-9669 14d1c251-2aed-445a-9c29-a736f5f81562 7033 0 2023-03-02 00:16:11 +0000 UTC map[name:sample-pod pod-template-hash:8656fc4b57] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-8656fc4b57 a6e51338-be16-45bb-8f8c-3e71053695db 0xc0015fc377 0xc0015fc378}] [] [{k3s Update v1 2023-03-02 00:16:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6e51338-be16-45bb-8f8c-3e71053695db\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:16:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4cgk4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type
:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.108,StartTime:2023-03-02 00:16:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:16:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://12153556007af681ffbace65ca31f04a6a8d83ef8b42830fcd5b62d7c8c5789a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:41.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9669" for this suite. • [SLOW TEST:99.708 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.671: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename replicaset W0302 00:15:01.737670 203 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.737: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
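Note: the rolling update validated by the "test-rolling-update-deployment" spec dumped above can be reproduced by hand. The sketch below mirrors the fields visible in that dump (1 replica, RollingUpdate strategy with 25% maxSurge and 25% maxUnavailable, the agnhost 2.39 image, the name=sample-pod selector); the namespace "deployment-demo" and the apply-then-watch workflow are illustrative and not part of the conformance run.
# Minimal sketch of an equivalent Deployment (namespace and workflow are illustrative).
$ cat <<'EOF' | kubectl apply -n deployment-demo -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.39
EOF
# The controller creates a new ReplicaSet and scales the adopted one down to zero,
# as the "All old ReplicaSets" dump above shows (Spec Replicas:*0).
$ kubectl -n deployment-demo rollout status deployment/test-rolling-update-deployment
$ kubectl -n deployment-demo get rs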
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create a ReplicaSet STEP: Verify that the required pods have come up Mar 2 00:15:01.780: INFO: Pod name sample-pod: Found 0 pods out of 3 Mar 2 00:15:06.783: INFO: Pod name sample-pod: Found 3 pods out of 3 STEP: ensuring each pod is running Mar 2 00:16:44.790: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} STEP: Listing all ReplicaSets STEP: DeleteCollection of the ReplicaSets STEP: After DeleteCollection verify that ReplicaSets have been deleted [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:44.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8117" for this suite. • [SLOW TEST:103.141 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:44.818: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Mar 2 00:16:44.847: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:44.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1592" for this suite. 
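Note: the ReplicaSet case above ("should list and delete a collection of ReplicaSets") drives the list and DeleteCollection API calls directly. A rough command-line equivalent is sketched below; the namespace and label are illustrative, and kubectl deletes the matched objects one by one rather than issuing the single DeleteCollection request the test makes.
# Sketch: list, bulk-delete, and re-check ReplicaSets by label (namespace and label illustrative).
$ kubectl -n replicaset-demo get rs -l name=sample-pod
$ kubectl -n replicaset-demo delete rs -l name=sample-pod
$ kubectl -n replicaset-demo get rs -l name=sample-pod    # empty once the deletion has been observed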
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:12.529: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting the auto-created API token STEP: reading a file in the container Mar 2 00:16:45.063: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6009 pod-service-account-1007c0c2-0ac7-4aab-9b36-b8e293b7ef91 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 2 00:16:45.149: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6009 pod-service-account-1007c0c2-0ac7-4aab-9b36-b8e293b7ef91 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 2 00:16:45.219: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6009 pod-service-account-1007c0c2-0ac7-4aab-9b36-b8e293b7ef91 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:45.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6009" for this suite. 
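Note: the Events API case that passed above creates a set of events, lists them by label, and removes them with a DeleteCollection call. A rough kubectl sketch follows; the namespace and the label are illustrative, since the label the test actually applies is not shown in this log.
# Sketch: list and bulk-delete events by label, then confirm the list is empty.
$ kubectl -n events-demo get events -l testevent-set=true
$ kubectl -n events-demo delete events -l testevent-set=true
$ kubectl -n events-demo get events -l testevent-set=true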
• [SLOW TEST:32.807 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":2,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:17.214: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1539 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Mar 2 00:16:17.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-6231 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2' Mar 2 00:16:17.271: INFO: stderr: "" Mar 2 00:16:17.271: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543 Mar 2 00:16:17.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-6231 delete pods e2e-test-httpd-pod' Mar 2 00:16:45.691: INFO: stderr: "" Mar 2 00:16:45.691: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:45.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6231" for this suite. 
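Note: the ServiceAccounts case above verifies that the auto-created token, ca.crt and namespace files are projected into the pod; the kubectl exec commands it runs are printed verbatim in the log. To inspect the same mount in an ad-hoc pod, something like the following works; the namespace and pod name are illustrative, and it is assumed the httpd e2e image provides cat, as the test's own exec calls imply its container does.
# Sketch: confirm the service-account volume is mounted at the conventional path.
$ kubectl -n svcaccounts-demo run sa-probe --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --restart=Never
$ kubectl -n svcaccounts-demo wait --for=condition=Ready pod/sa-probe --timeout=2m
$ kubectl -n svcaccounts-demo exec sa-probe -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace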
• [SLOW TEST:28.484 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1536 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":2,"skipped":48,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:39.942: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:46.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4663" for this suite. • [SLOW TEST:7.032 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":4,"skipped":99,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.798: INFO: >>> kubeConfig: /tmp/kubeconfig-2274336506 STEP: Building a namespace api object, basename containers W0302 00:15:02.316604 517 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.316: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
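Note: the ResourceQuota case above only checks that the quota's status (hard and used totals) is calculated promptly after creation. A minimal sketch follows; the namespace and the hard limits are illustrative, since the log does not show the limits the test sets.
# Sketch: create a quota and read back the calculated status.
$ cat <<'EOF' | kubectl apply -n quota-demo -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    pods: "5"
    services: "2"
EOF
$ kubectl -n quota-demo get resourcequota test-quota -o jsonpath='{.status.hard}{"\n"}{.status.used}{"\n"}'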
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:50.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9563" for this suite. • [SLOW TEST:108.576 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":30,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:06.302: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test override all Mar 2 00:16:06.319: INFO: Waiting up to 5m0s for pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a" in namespace "containers-9003" to be "Succeeded or Failed" Mar 2 00:16:06.321: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.991199ms Mar 2 00:16:08.324: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004601548s Mar 2 00:16:10.327: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00777165s Mar 2 00:16:12.329: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009776982s Mar 2 00:16:14.331: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011647608s Mar 2 00:16:16.334: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014478867s Mar 2 00:16:18.336: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.016800861s Mar 2 00:16:20.339: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.019998988s Mar 2 00:16:22.344: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.024819936s Mar 2 00:16:24.347: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.027828052s Mar 2 00:16:26.350: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.031064923s Mar 2 00:16:28.354: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.03434691s Mar 2 00:16:30.357: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.037477619s Mar 2 00:16:32.360: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.040438066s Mar 2 00:16:34.362: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.042762836s Mar 2 00:16:36.366: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.046371693s Mar 2 00:16:38.370: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.050630363s Mar 2 00:16:40.374: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.054401719s Mar 2 00:16:42.377: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.057816007s Mar 2 00:16:44.380: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.060362797s Mar 2 00:16:46.383: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.063685099s Mar 2 00:16:48.386: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.066910628s Mar 2 00:16:50.391: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.072090867s STEP: Saw pod success Mar 2 00:16:50.392: INFO: Pod "client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a" satisfied condition "Succeeded or Failed" Mar 2 00:16:50.394: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a container agnhost-container: STEP: delete the pod Mar 2 00:16:50.412: INFO: Waiting for pod client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a to disappear Mar 2 00:16:50.422: INFO: Pod client-containers-5191e7db-85b6-4a0c-9a26-346669f0f66a no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:50.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9003" for this suite. 
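Note: the containers-9003 case above creates a pod whose spec overrides both the image's default entrypoint (command) and its default arguments (args), then waits for it to reach Succeeded; on this run it spent roughly 44 seconds in Pending before completing. A minimal sketch of the same override pattern follows; the httpd e2e image, the echo payload and the namespace are illustrative stand-ins, not the agnhost invocation the test itself uses.
# Sketch: override both command and args so the container runs something other than the image default.
$ cat <<'EOF' | kubectl apply -n containers-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: override-all
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
    command: ["/bin/sh", "-c"]                    # replaces the image ENTRYPOINT
    args: ["echo command and args overridden"]    # replaces the image CMD
EOF
$ kubectl -n containers-demo get pod override-all --watch    # wait for phase Succeeded
$ kubectl -n containers-demo logs override-all               # expect: command and args overridden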
• [SLOW TEST:44.140 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.818: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 STEP: Building a namespace api object, basename disruption W0302 00:15:02.373928 343 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.373: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Mar 2 00:15:04.425: INFO: running pods: 0 < 3 Mar 2 00:15:06.426: INFO: running pods: 0 < 3 Mar 2 00:15:08.427: INFO: running pods: 0 < 3 Mar 2 00:15:10.427: INFO: running pods: 0 < 3 Mar 2 00:15:12.427: INFO: running pods: 0 < 3 Mar 2 00:15:14.427: INFO: running pods: 0 < 3 Mar 2 00:15:16.433: INFO: running pods: 0 < 3 Mar 2 00:15:18.428: INFO: running pods: 0 < 3 Mar 2 00:15:20.428: INFO: running pods: 0 < 3 Mar 2 00:15:22.428: INFO: running pods: 0 < 3 Mar 2 00:15:24.427: INFO: running pods: 0 < 3 Mar 2 00:15:26.427: INFO: running pods: 0 < 3 Mar 2 00:15:28.427: INFO: running pods: 0 < 3 Mar 2 00:15:30.428: INFO: running pods: 0 < 3 Mar 2 00:15:32.427: INFO: running pods: 0 < 3 Mar 2 00:15:34.427: INFO: running pods: 0 < 3 Mar 2 00:15:36.428: INFO: running pods: 0 < 3 Mar 2 00:15:38.429: INFO: running pods: 0 < 3 Mar 2 00:15:40.428: INFO: running pods: 0 < 3 Mar 2 00:15:42.429: INFO: running pods: 0 < 3 Mar 2 00:15:44.428: INFO: running pods: 0 < 3 Mar 2 00:15:46.429: INFO: running pods: 0 < 3 Mar 2 00:15:48.428: INFO: running pods: 0 < 3 Mar 2 00:15:50.428: INFO: running pods: 0 < 3 Mar 2 00:15:52.428: INFO: running pods: 0 < 3 Mar 2 00:15:54.428: INFO: running pods: 0 < 3 Mar 2 00:15:56.428: INFO: running pods: 0 < 3 Mar 2 00:15:58.427: INFO: running pods: 0 < 3 Mar 2 00:16:00.428: INFO: running pods: 0 < 3 Mar 2 00:16:02.428: INFO: running pods: 0 < 3 Mar 2 00:16:04.427: INFO: running pods: 0 < 3 Mar 2 00:16:06.428: INFO: running pods: 0 < 3 Mar 2 00:16:08.428: INFO: running pods: 0 < 3 Mar 2 00:16:10.428: INFO: running pods: 0 < 3 Mar 2 00:16:12.427: INFO: running pods: 0 < 3 Mar 2 00:16:14.428: INFO: running pods: 0 < 3 Mar 2 00:16:16.428: INFO: running pods: 0 < 3 Mar 2 00:16:18.428: INFO: running pods: 1 < 3 Mar 2 00:16:20.428: INFO: running pods: 2 < 3 Mar 
2 00:16:22.428: INFO: running pods: 2 < 3 Mar 2 00:16:24.428: INFO: running pods: 2 < 3 Mar 2 00:16:26.428: INFO: running pods: 2 < 3 Mar 2 00:16:28.428: INFO: running pods: 2 < 3 Mar 2 00:16:30.428: INFO: running pods: 2 < 3 Mar 2 00:16:32.428: INFO: running pods: 2 < 3 Mar 2 00:16:34.429: INFO: running pods: 2 < 3 Mar 2 00:16:36.429: INFO: running pods: 2 < 3 Mar 2 00:16:38.429: INFO: running pods: 2 < 3 Mar 2 00:16:40.428: INFO: running pods: 2 < 3 Mar 2 00:16:42.428: INFO: running pods: 2 < 3 Mar 2 00:16:44.428: INFO: running pods: 2 < 3 Mar 2 00:16:46.428: INFO: running pods: 2 < 3 Mar 2 00:16:48.428: INFO: running pods: 2 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:50.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9391" for this suite. • [SLOW TEST:108.634 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":1,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.867: INFO: >>> kubeConfig: /tmp/kubeconfig-200441725 STEP: Building a namespace api object, basename containers W0302 00:15:02.487533 241 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.487: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test override command Mar 2 00:15:02.531: INFO: Waiting up to 5m0s for pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1" in namespace "containers-3918" to be "Succeeded or Failed" Mar 2 00:15:02.549: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.445087ms Mar 2 00:15:04.551: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01932863s Mar 2 00:15:06.553: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022080183s Mar 2 00:15:08.557: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025352996s Mar 2 00:15:10.560: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.028766559s Mar 2 00:15:12.563: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03195143s Mar 2 00:15:14.565: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03417592s Mar 2 00:15:16.569: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.037567105s Mar 2 00:15:18.572: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.040359415s Mar 2 00:15:20.575: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043440686s Mar 2 00:15:22.578: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.046868405s Mar 2 00:15:24.580: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.049010936s Mar 2 00:15:26.583: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.052088023s Mar 2 00:15:28.586: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.055113493s Mar 2 00:15:30.589: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.058044314s Mar 2 00:15:32.593: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.061489524s Mar 2 00:15:34.596: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.064648291s Mar 2 00:15:36.599: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.06738616s Mar 2 00:15:38.601: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 36.069679607s Mar 2 00:15:40.605: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.073315512s Mar 2 00:15:42.607: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.076094163s Mar 2 00:15:44.609: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 42.078143849s Mar 2 00:15:46.613: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 44.081230336s Mar 2 00:15:48.615: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 46.083897868s Mar 2 00:15:50.619: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.087426916s Mar 2 00:15:52.623: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 50.091215177s Mar 2 00:15:54.625: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.093410345s Mar 2 00:15:56.627: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 54.095724039s Mar 2 00:15:58.630: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 56.09841104s Mar 2 00:16:00.632: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 58.101166343s Mar 2 00:16:02.635: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.103952215s Mar 2 00:16:04.638: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.106200417s Mar 2 00:16:06.640: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.108634323s Mar 2 00:16:08.643: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.11135868s Mar 2 00:16:10.645: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.113256288s Mar 2 00:16:12.648: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.117082347s Mar 2 00:16:14.650: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.119021725s Mar 2 00:16:16.654: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.122235102s Mar 2 00:16:18.657: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.125278827s Mar 2 00:16:20.660: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.128238222s Mar 2 00:16:22.662: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.131162301s Mar 2 00:16:24.665: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.134066132s Mar 2 00:16:26.670: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.138442046s Mar 2 00:16:28.673: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.141317743s Mar 2 00:16:30.676: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.144497187s Mar 2 00:16:32.679: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.147251513s Mar 2 00:16:34.682: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.150335977s Mar 2 00:16:36.687: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.155197092s Mar 2 00:16:38.692: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m36.160288064s Mar 2 00:16:40.696: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.164424184s Mar 2 00:16:42.699: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.167916651s Mar 2 00:16:44.702: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.170615662s Mar 2 00:16:46.705: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.173900555s Mar 2 00:16:48.708: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.176336898s Mar 2 00:16:50.713: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m48.181913841s STEP: Saw pod success Mar 2 00:16:50.713: INFO: Pod "client-containers-932589dc-debe-4190-bf7c-a639e1183af1" satisfied condition "Succeeded or Failed" Mar 2 00:16:50.716: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod client-containers-932589dc-debe-4190-bf7c-a639e1183af1 container agnhost-container: STEP: delete the pod Mar 2 00:16:50.730: INFO: Waiting for pod client-containers-932589dc-debe-4190-bf7c-a639e1183af1 to disappear Mar 2 00:16:50.733: INFO: Pod client-containers-932589dc-debe-4190-bf7c-a639e1183af1 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:50.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3918" for this suite. • [SLOW TEST:108.876 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:50.747: INFO: >>> kubeConfig: /tmp/kubeconfig-200441725 STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:50.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9516" for this suite. 
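Note: the lease-test-9516 case above checks that the coordination.k8s.io Lease API is available. A minimal sketch of the same resource via kubectl follows; the namespace, lease name and holder identity are illustrative.
# Sketch: create, inspect and delete a Lease object.
$ cat <<'EOF' | kubectl apply -n lease-demo -f -
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease
spec:
  holderIdentity: holder-1
  leaseDurationSeconds: 30
EOF
$ kubectl -n lease-demo get lease example-lease -o yaml
$ kubectl -n lease-demo delete lease example-lease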
• ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:02.302: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 2 00:15:02.738: INFO: Waiting up to 5m0s for pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01" in namespace "security-context-3445" to be "Succeeded or Failed" Mar 2 00:15:02.751: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 12.840529ms Mar 2 00:15:04.753: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015374507s Mar 2 00:15:06.756: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017980385s Mar 2 00:15:08.757: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019674634s Mar 2 00:15:10.761: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02315805s Mar 2 00:15:12.763: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024913129s Mar 2 00:15:14.766: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028763415s Mar 2 00:15:16.769: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031463289s Mar 2 00:15:18.772: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 16.033816778s Mar 2 00:15:20.816: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 18.078098271s Mar 2 00:15:22.819: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 20.080959025s Mar 2 00:15:24.821: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 22.083561942s Mar 2 00:15:26.823: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 24.085391632s Mar 2 00:15:28.826: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 26.088165055s Mar 2 00:15:30.830: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.091998682s Mar 2 00:15:32.833: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 30.095528453s Mar 2 00:15:34.836: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 32.097836246s Mar 2 00:15:36.839: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 34.101469316s Mar 2 00:15:38.841: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 36.1033657s Mar 2 00:15:40.844: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 38.1062231s Mar 2 00:15:42.847: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 40.109728821s Mar 2 00:15:44.850: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 42.112772636s Mar 2 00:15:46.853: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 44.115570827s Mar 2 00:15:48.855: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 46.117805699s Mar 2 00:15:50.859: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 48.121165287s Mar 2 00:15:52.862: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 50.124509827s Mar 2 00:15:54.866: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 52.128164606s Mar 2 00:15:56.869: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 54.131053012s Mar 2 00:15:58.871: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 56.132861839s Mar 2 00:16:00.873: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 58.135621972s Mar 2 00:16:02.876: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.138324547s Mar 2 00:16:04.879: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.14101522s Mar 2 00:16:06.881: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.143769384s Mar 2 00:16:08.885: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.146990245s Mar 2 00:16:10.887: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.149555041s Mar 2 00:16:12.891: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.153020186s Mar 2 00:16:14.894: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m12.156032322s Mar 2 00:16:16.897: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.15940383s Mar 2 00:16:18.900: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.162129903s Mar 2 00:16:20.903: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.164903417s Mar 2 00:16:22.906: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.168416926s Mar 2 00:16:24.909: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.171499457s Mar 2 00:16:26.913: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.175486444s Mar 2 00:16:28.916: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.178478964s Mar 2 00:16:30.921: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.183612508s Mar 2 00:16:32.924: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.186352158s Mar 2 00:16:34.927: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.188870929s Mar 2 00:16:36.931: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.192818331s Mar 2 00:16:38.937: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.198954939s Mar 2 00:16:40.941: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.203482924s Mar 2 00:16:42.944: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.206381615s Mar 2 00:16:44.948: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.209903008s Mar 2 00:16:46.950: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.21253325s Mar 2 00:16:48.954: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.215813676s Mar 2 00:16:50.958: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m48.220228855s STEP: Saw pod success Mar 2 00:16:50.958: INFO: Pod "security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01" satisfied condition "Succeeded or Failed" Mar 2 00:16:50.960: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01 container test-container: STEP: delete the pod Mar 2 00:16:50.977: INFO: Waiting for pod security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01 to disappear Mar 2 00:16:50.982: INFO: Pod security-context-55f11a4e-2785-4fb6-8200-9c5e38c12d01 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:50.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3445" for this suite. • [SLOW TEST:108.688 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":45,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:06.959: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-map-1b4c572e-9240-44a1-b23c-650c8c24ae67 STEP: Creating a pod to test consume configMaps Mar 2 00:16:06.981: INFO: Waiting up to 5m0s for pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58" in namespace "configmap-4180" to be "Succeeded or Failed" Mar 2 00:16:06.984: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.579354ms Mar 2 00:16:08.988: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006810725s Mar 2 00:16:10.990: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009622059s Mar 2 00:16:12.994: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012969341s Mar 2 00:16:14.997: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016111454s Mar 2 00:16:16.999: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.018518001s Mar 2 00:16:19.003: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021677077s Mar 2 00:16:21.006: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025070027s Mar 2 00:16:23.012: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 16.030701767s Mar 2 00:16:25.014: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 18.032960585s Mar 2 00:16:27.022: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 20.040978123s Mar 2 00:16:29.025: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 22.043683157s Mar 2 00:16:31.028: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 24.046870107s Mar 2 00:16:33.031: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 26.050624294s Mar 2 00:16:35.041: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 28.060124687s Mar 2 00:16:37.044: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 30.063342866s Mar 2 00:16:39.048: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 32.067062566s Mar 2 00:16:41.052: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 34.071030949s Mar 2 00:16:43.058: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 36.077134444s Mar 2 00:16:45.062: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 38.081060256s Mar 2 00:16:47.066: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 40.085381838s Mar 2 00:16:49.070: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Pending", Reason="", readiness=false. Elapsed: 42.088713552s Mar 2 00:16:51.073: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.092363118s STEP: Saw pod success Mar 2 00:16:51.073: INFO: Pod "pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58" satisfied condition "Succeeded or Failed" Mar 2 00:16:51.076: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58 container agnhost-container: STEP: delete the pod Mar 2 00:16:51.088: INFO: Waiting for pod pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58 to disappear Mar 2 00:16:51.092: INFO: Pod pod-configmaps-d32fc043-0df0-48a2-8cff-a2f526d68b58 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:51.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4180" for this suite. 
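The ConfigMap volume test that just finished ("with mappings and Item mode set") mounts one ConfigMap key at an explicit path with an explicit file mode, then checks the mode on the mounted file. A minimal sketch of such a pod spec, assuming hypothetical names, image, and mode value (this is not the exact spec the e2e framework generates):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // example file mode for the mapped item

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"}, // hypothetical ConfigMap
						// "mappings and Item mode set": one key mapped to a chosen
						// path inside the volume, with an explicit per-item mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative image, not the agnhost image the suite uses
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The pod runs to completion ("Succeeded or Failed" condition in the log) because the container exits after reading the file; the per-item Mode is what distinguishes this variant from the plain mappings test.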
• [SLOW TEST:44.139 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":132,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:15.584: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-configmap-4f9d STEP: Creating a pod to test atomic-volume-subpath Mar 2 00:16:15.606: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4f9d" in namespace "subpath-6792" to be "Succeeded or Failed" Mar 2 00:16:15.608: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.668296ms Mar 2 00:16:17.611: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004995103s Mar 2 00:16:19.614: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008102704s Mar 2 00:16:21.617: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010401902s Mar 2 00:16:23.621: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014770096s Mar 2 00:16:25.624: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018108917s Mar 2 00:16:27.628: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.021645763s Mar 2 00:16:29.631: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.024441704s Mar 2 00:16:31.635: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.028714007s Mar 2 00:16:33.638: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.031594037s Mar 2 00:16:35.642: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.03566126s Mar 2 00:16:37.645: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.039197877s Mar 2 00:16:39.650: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.04407649s Mar 2 00:16:41.653: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.046811545s Mar 2 00:16:43.656: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.050350415s Mar 2 00:16:45.659: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.053109435s Mar 2 00:16:47.662: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Running", Reason="", readiness=true. Elapsed: 32.056352614s Mar 2 00:16:49.665: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Running", Reason="", readiness=true. Elapsed: 34.05888355s Mar 2 00:16:51.669: INFO: Pod "pod-subpath-test-configmap-4f9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.062967804s STEP: Saw pod success Mar 2 00:16:51.669: INFO: Pod "pod-subpath-test-configmap-4f9d" satisfied condition "Succeeded or Failed" Mar 2 00:16:51.671: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-subpath-test-configmap-4f9d container test-container-subpath-configmap-4f9d: STEP: delete the pod Mar 2 00:16:51.686: INFO: Waiting for pod pod-subpath-test-configmap-4f9d to disappear Mar 2 00:16:51.690: INFO: Pod pod-subpath-test-configmap-4f9d no longer exists STEP: Deleting pod pod-subpath-test-configmap-4f9d Mar 2 00:16:51.690: INFO: Deleting pod "pod-subpath-test-configmap-4f9d" in namespace "subpath-6792" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:51.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6792" for this suite. • [SLOW TEST:36.114 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":2,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:41.555: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name projected-configmap-test-volume-map-00c95cea-b60f-4215-886b-49a69262822b STEP: Creating a pod to test consume configMaps Mar 2 00:16:41.580: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c" in namespace 
"projected-8894" to be "Succeeded or Failed" Mar 2 00:16:41.585: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.280543ms Mar 2 00:16:43.589: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009156252s Mar 2 00:16:45.593: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013222402s Mar 2 00:16:47.596: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016535724s Mar 2 00:16:49.599: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018883823s Mar 2 00:16:51.602: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022506601s Mar 2 00:16:53.605: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025284305s STEP: Saw pod success Mar 2 00:16:53.605: INFO: Pod "pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c" satisfied condition "Succeeded or Failed" Mar 2 00:16:53.607: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c container agnhost-container: STEP: delete the pod Mar 2 00:16:53.622: INFO: Waiting for pod pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c to disappear Mar 2 00:16:53.626: INFO: Pod pod-projected-configmaps-cc4b7b8d-36b9-492a-9247-1aa57d68c20c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:53.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8894" for this suite. 
• [SLOW TEST:12.078 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":49,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:53.644: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:53.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5247" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":3,"skipped":63,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.768: INFO: >>> kubeConfig: /tmp/kubeconfig-3970272666 STEP: Building a namespace api object, basename init-container W0302 00:15:02.150664 298 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.150: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod Mar 2 00:15:02.154: INFO: PodSpec: initContainers in spec.initContainers Mar 2 00:16:53.781: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7d5f26fd-d35f-4364-9627-eb37b947f978", GenerateName:"", Namespace:"init-container-4955", SelfLink:"", UID:"b3898834-a3d9-4bd6-89f8-58da2b1987c3", ResourceVersion:"7768", Generation:0, CreationTimestamp:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"154830118"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004aefed8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"k3s", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.March, 2, 0, 16, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004aeff08), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-f4lhh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004669d20), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f4lhh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f4lhh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-f4lhh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0046a4668), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k3s-agent-1-mid5l7", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003425030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0046a46e0)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0046a4700)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0046a4708), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0046a470c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc000b90fa0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.42.1.21", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.42.1.21"}}, StartTime:time.Date(2023, time.March, 2, 0, 15, 2, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003425110)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003425180)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://5e651a3224bd474f03765a3808113fda750e5710496d6da92164a028fe7f5570", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004669da0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004669d80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc0046a478f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:53.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4955" for this suite. • [SLOW TEST:112.020 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:32.366: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:32.387: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:34.391: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:36.394: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:38.391: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:40.393: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:42.390: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:44.390: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:46.391: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:48.391: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:50.392: 
INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:52.391: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:54.390: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:56.392: INFO: The status of Pod busybox-host-aliasescdd2ffce-c390-4ce4-a93f-3dd5aa788289 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:56.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7171" for this suite. • [SLOW TEST:24.051 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":84,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:50.456: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 2 00:16:50.509: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 2 00:16:55.519: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:56.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6225" for this suite. 
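The ReplicationController test above ("should release no longer matching pods") hinges on label selectors: once a managed pod's label is changed so the controller's selector no longer matches it, the controller releases (orphans) that pod. A hypothetical sketch of such a controller and of a label change that would release one of its pods; the "name: pod-release" label is inferred from the pod name printed in the log, and the replacement label is invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-release"} // assumed selector, based on the pod name above

	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-release",
						Image: "k8s.gcr.io/pause:3.6", // image seen elsewhere in this log
					}},
				},
			},
		},
	}

	// Relabeling a managed pod so it no longer matches rc.Spec.Selector is what
	// causes the controller to release it; a JSON merge patch like this one,
	// applied to the pod, would have that effect (hypothetical new label value).
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)

	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
	fmt.Println(string(patch))
}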
• [SLOW TEST:6.095 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:36.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name projected-configmap-test-volume-a24ae97f-5c38-4819-ba87-2df4f8793708 STEP: Creating a pod to test consume configMaps Mar 2 00:16:36.677: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b" in namespace "projected-9727" to be "Succeeded or Failed" Mar 2 00:16:36.680: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558225ms Mar 2 00:16:38.682: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005336898s Mar 2 00:16:40.686: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008938613s Mar 2 00:16:42.690: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013190192s Mar 2 00:16:44.694: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01674153s Mar 2 00:16:46.698: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020855998s Mar 2 00:16:48.701: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024144488s Mar 2 00:16:50.704: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026810055s Mar 2 00:16:52.707: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.030131747s Mar 2 00:16:54.711: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.033639832s Mar 2 00:16:56.716: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.038784654s STEP: Saw pod success Mar 2 00:16:56.716: INFO: Pod "pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b" satisfied condition "Succeeded or Failed" Mar 2 00:16:56.719: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b container projected-configmap-volume-test: STEP: delete the pod Mar 2 00:16:56.734: INFO: Waiting for pod pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b to disappear Mar 2 00:16:56.737: INFO: Pod pod-projected-configmaps-c3079c7a-b42f-4607-b5b2-e4160347890b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:56.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9727" for this suite. • [SLOW TEST:20.096 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:06.551: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-configmap-vnfk STEP: Creating a pod to test atomic-volume-subpath Mar 2 00:16:06.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vnfk" in namespace "subpath-818" to be "Succeeded or Failed" Mar 2 00:16:06.576: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 1.619293ms Mar 2 00:16:08.579: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004621427s Mar 2 00:16:10.582: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00807092s Mar 2 00:16:12.584: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010430575s Mar 2 00:16:14.587: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013494348s Mar 2 00:16:16.590: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016509248s Mar 2 00:16:18.594: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.019642432s Mar 2 00:16:20.597: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.022679615s Mar 2 00:16:22.600: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 16.025966722s Mar 2 00:16:24.605: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 18.031339659s Mar 2 00:16:26.607: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 20.033561624s Mar 2 00:16:28.611: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 22.036807013s Mar 2 00:16:30.614: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 24.039980094s Mar 2 00:16:32.617: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 26.042731515s Mar 2 00:16:34.620: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 28.046254801s Mar 2 00:16:36.625: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 30.051064699s Mar 2 00:16:38.629: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 32.054973697s Mar 2 00:16:40.634: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 34.059701209s Mar 2 00:16:42.639: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 36.064604354s Mar 2 00:16:44.641: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 38.067240656s Mar 2 00:16:46.645: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 40.07097436s Mar 2 00:16:48.648: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 42.073735068s Mar 2 00:16:50.652: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 44.078556917s Mar 2 00:16:52.656: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 46.082496793s Mar 2 00:16:54.661: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 48.086695418s Mar 2 00:16:56.665: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Running", Reason="", readiness=true. Elapsed: 50.09070796s Mar 2 00:16:58.669: INFO: Pod "pod-subpath-test-configmap-vnfk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 52.094743716s STEP: Saw pod success Mar 2 00:16:58.669: INFO: Pod "pod-subpath-test-configmap-vnfk" satisfied condition "Succeeded or Failed" Mar 2 00:16:58.672: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-subpath-test-configmap-vnfk container test-container-subpath-configmap-vnfk: STEP: delete the pod Mar 2 00:16:58.685: INFO: Waiting for pod pod-subpath-test-configmap-vnfk to disappear Mar 2 00:16:58.689: INFO: Pod pod-subpath-test-configmap-vnfk no longer exists STEP: Deleting pod pod-subpath-test-configmap-vnfk Mar 2 00:16:58.689: INFO: Deleting pod "pod-subpath-test-configmap-vnfk" in namespace "subpath-818" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:58.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-818" for this suite. • [SLOW TEST:52.150 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":2,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:21.662: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:16:21.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672" in namespace "projected-1773" to be "Succeeded or Failed" Mar 2 00:16:21.682: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 3.718006ms Mar 2 00:16:23.685: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00694778s Mar 2 00:16:25.689: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011356783s Mar 2 00:16:27.691: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013461952s Mar 2 00:16:29.694: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.016221644s Mar 2 00:16:31.698: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019757409s Mar 2 00:16:33.702: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 12.023667425s Mar 2 00:16:35.706: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 14.028153133s Mar 2 00:16:37.710: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 16.032023453s Mar 2 00:16:39.718: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 18.039695239s Mar 2 00:16:41.723: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 20.045029197s Mar 2 00:16:43.726: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 22.048221579s Mar 2 00:16:45.733: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 24.055099628s Mar 2 00:16:47.736: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 26.05835s Mar 2 00:16:49.739: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 28.061259916s Mar 2 00:16:51.742: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 30.064376154s Mar 2 00:16:53.745: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 32.066757145s Mar 2 00:16:55.757: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 34.079302258s Mar 2 00:16:57.760: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Pending", Reason="", readiness=false. Elapsed: 36.081687837s Mar 2 00:16:59.763: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.084688654s STEP: Saw pod success Mar 2 00:16:59.763: INFO: Pod "downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672" satisfied condition "Succeeded or Failed" Mar 2 00:16:59.765: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672 container client-container: STEP: delete the pod Mar 2 00:16:59.777: INFO: Waiting for pod downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672 to disappear Mar 2 00:16:59.781: INFO: Pod downwardapi-volume-b31d640b-f1c1-4388-9598-3319fada6672 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:16:59.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1773" for this suite. 
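The projected downwardAPI run above is, at bottom, a pod whose volume exposes the container's limits.cpu as a file that the test then reads back. A minimal Go sketch of that idea follows; it uses the plain downwardAPI volume type rather than the projected form the test exercises, the image and command are placeholders borrowed from the surrounding log, and it assumes the k8s.io/api and k8s.io/apimachinery modules are available.

// Illustrative sketch only: a pod whose downwardAPI volume exposes the
// container's cpu limit as a file, similar in spirit to the test above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				// Placeholder command: just print the file the volume writes.
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// resourceFieldRef is what turns limits.cpu into file content.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}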
• [SLOW TEST:38.126 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":134,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:51.014: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:02.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-746" for this suite. • [SLOW TEST:11.063 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":3,"skipped":89,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.764: INFO: >>> kubeConfig: /tmp/kubeconfig-2645107774 STEP: Building a namespace api object, basename cronjob W0302 00:15:02.067870 211 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.067: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
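For the ResourceQuota spec that passed earlier in this block, the object being created is an object-count quota on ReplicaSets: creating the ReplicaSet drives the quota's used count up, and deleting it releases the usage. A rough sketch of such a quota, assuming the generic count/replicasets.apps object-count key (the exact key the e2e test uses is not visible in this log):

// Illustrative sketch only: a ResourceQuota that caps ReplicaSet count in a
// namespace, the kind of quota the ResourceQuota test above exercises.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// At most one ReplicaSet may exist in the namespace.
				corev1.ResourceName("count/replicasets.apps"): resource.MustParse("1"),
			},
		},
	}
	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
}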
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:02.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1335" for this suite. • [SLOW TEST:120.344 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:39.114: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:39.149: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 2 00:16:44.151: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 2 00:17:02.160: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Mar 2 00:17:02.176: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7049 0942e3fd-b574-4073-843c-cae4096069b2 8108 1 2023-03-02 00:17:02 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-03-02 00:17:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} 
}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00509a3c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 2 00:17:02.179: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Mar 2 00:17:02.179: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 2 00:17:02.179: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7049 6f009b88-5c38-4ed4-8bae-f5c4a743ef5d 8109 1 2023-03-02 00:16:39 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 0942e3fd-b574-4073-843c-cae4096069b2 0xc004fbc50f 0xc004fbc520}] [] [{e2e.test Update apps/v1 2023-03-02 00:16:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:17:00 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {k3s Update apps/v1 2023-03-02 00:17:02 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"0942e3fd-b574-4073-843c-cae4096069b2\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004fbc5e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:17:02.184: INFO: Pod "test-cleanup-controller-25pgd" is available: &Pod{ObjectMeta:{test-cleanup-controller-25pgd test-cleanup-controller- deployment-7049 51ec28a0-d430-495d-bff9-8bbf295bf15d 8036 0 2023-03-02 00:16:39 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 6f009b88-5c38-4ed4-8bae-f5c4a743ef5d 0xc004fbca2f 0xc004fbca40}] [] [{k3s Update v1 2023-03-02 00:16:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f009b88-5c38-4ed4-8bae-f5c4a743ef5d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:17:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-csfcn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-csfcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.142,StartTime:2023-03-02 00:16:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:16:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://2b14826c1165f36bdf9b3834cbfcc6df109e69b94a87ae7230987a4cb1a429bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:02.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7049" for this suite. 
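The cleanup behaviour verified above follows from the Deployment's revisionHistoryLimit: the dump shows RevisionHistoryLimit:*0, so superseded ReplicaSets are garbage-collected as soon as a new revision takes over. A minimal Deployment with that setting, sketched in Go (labels and image copied from the log, everything else a guess):

// Illustrative sketch only: a Deployment that keeps no old ReplicaSets around,
// mirroring the RevisionHistoryLimit:*0 seen in the dump above.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	historyLimit := int32(0) // keep zero superseded ReplicaSets

	labels := map[string]string{"name": "cleanup-pod"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}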
• [SLOW TEST:23.096 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":4,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:02.223: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 2 00:17:02.272: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Mar 2 00:17:02.275: INFO: starting watch STEP: patching STEP: updating Mar 2 00:17:02.285: INFO: waiting for watch events with expected annotations Mar 2 00:17:02.285: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:02.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-1333" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":5,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.868: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename pod-network-test W0302 00:15:02.520179 205 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.520: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
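The Ingress API spec that passed just above drives plain CRUD verbs against networking.k8s.io/v1 rather than any controller behaviour. A rough client-go equivalent of a few of those steps, with a placeholder kubeconfig path, namespace, and object names, and most error handling trimmed:

// Illustrative sketch only: create/get/list/delete an Ingress with client-go,
// echoing some of the STEP lines from the Ingress API test above.
package main

import (
	"context"
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ings := cs.NetworkingV1().Ingresses("default")

	pathType := netv1.PathTypePrefix
	ing := &netv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "example-ing"},
		Spec: netv1.IngressSpec{
			Rules: []netv1.IngressRule{{
				Host: "example.test",
				IngressRuleValue: netv1.IngressRuleValue{
					HTTP: &netv1.HTTPIngressRuleValue{
						Paths: []netv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: netv1.IngressBackend{
								Service: &netv1.IngressServiceBackend{
									Name: "example-svc",
									Port: netv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}

	ctx := context.Background()
	created, err := ings.Create(ctx, ing, metav1.CreateOptions{}) // STEP: creating
	if err != nil {
		panic(err)
	}
	got, _ := ings.Get(ctx, created.Name, metav1.GetOptions{}) // STEP: getting
	list, _ := ings.List(ctx, metav1.ListOptions{})            // STEP: listing
	fmt.Println(got.Name, len(list.Items))
	_ = ings.Delete(ctx, created.Name, metav1.DeleteOptions{}) // STEP: deleting
}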
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Performing setup for networking test in namespace pod-network-test-925 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 2 00:15:02.523: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 2 00:15:02.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:04.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:06.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:08.641: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:10.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:12.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:14.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:16.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:18.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:20.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:22.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:24.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:26.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:28.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:30.641: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:32.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:34.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:36.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:38.638: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:40.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:42.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:44.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:46.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:48.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:50.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:52.639: INFO: The status of Pod 
netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:54.638: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:56.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:58.638: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:00.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:02.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:04.638: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:06.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:08.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:10.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:12.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:14.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:16.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:18.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:20.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:22.640: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 2 00:16:22.644: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 2 00:17:02.663: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 2 00:17:02.663: INFO: Breadth first check of 10.42.0.37 on host 172.17.0.12... Mar 2 00:17:02.665: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.1.114:9080/dial?request=hostname&protocol=udp&host=10.42.0.37&port=8081&tries=1'] Namespace:pod-network-test-925 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:17:02.665: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 Mar 2 00:17:02.666: INFO: ExecWithOptions: Clientset creation Mar 2 00:17:02.666: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-925/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.42.1.114%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.42.0.37%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:17:02.726: INFO: Waiting for responses: map[] Mar 2 00:17:02.726: INFO: reached 10.42.0.37 after 0/1 tries Mar 2 00:17:02.726: INFO: Breadth first check of 10.42.1.35 on host 172.17.0.8... 
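The breadth-first checks here exec curl inside the test pod against agnhost's /dial endpoint, which relays a UDP hostname request to the target pod and reports the answers it got back. The same probe issued directly, sketched in Go; the IPs and port are simply the ones from the log (unreachable from outside the cluster), and the response field name is an assumption about the JSON shape the framework parses:

// Illustrative sketch only: hit the agnhost /dial endpoint the way the curl
// command in the exec lines above does, and print the hostnames it collected.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// dialResponse mirrors the assumed JSON shape returned by /dial.
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	url := "http://10.42.1.114:9080/dial?request=hostname&protocol=udp&host=10.42.0.37&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	var dr dialResponse
	if err := json.Unmarshal(body, &dr); err != nil {
		panic(err)
	}
	// One hostname per successful UDP round trip to the netserver pod.
	fmt.Println(dr.Responses)
}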
Mar 2 00:17:02.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.1.114:9080/dial?request=hostname&protocol=udp&host=10.42.1.35&port=8081&tries=1'] Namespace:pod-network-test-925 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:17:02.728: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 Mar 2 00:17:02.728: INFO: ExecWithOptions: Clientset creation Mar 2 00:17:02.728: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-925/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.42.1.114%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.42.1.35%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:17:02.756: INFO: Waiting for responses: map[] Mar 2 00:17:02.756: INFO: reached 10.42.1.35 after 0/1 tries Mar 2 00:17:02.756: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:02.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-925" for this suite. • [SLOW TEST:120.896 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:28.918: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:28.933: INFO: Creating simple deployment test-new-deployment Mar 2 00:16:28.944: INFO: deployment "test-new-deployment" doesn't have the required revision set Mar 2 00:16:30.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:32.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:34.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:36.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:38.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:40.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:42.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:44.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:46.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:48.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:50.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:52.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:54.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:56.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:58.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:00.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Mar 2 00:17:02.980: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-7319 5fb6e85a-e163-4815-a12b-173139954f73 8164 3 2023-03-02 00:16:28 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:17:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001195518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-02 00:17:01 +0000 UTC,LastTransitionTime:2023-03-02 00:17:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2023-03-02 00:17:01 +0000 UTC,LastTransitionTime:2023-03-02 00:16:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 2 00:17:02.983: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-7319 cf6187b6-e835-4951-9fc7-360ee7448124 8163 2 2023-03-02 00:16:28 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 5fb6e85a-e163-4815-a12b-173139954f73 0xc000a561d0 
0xc000a561d1}] [] [{k3s Update apps/v1 2023-03-02 00:16:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5fb6e85a-e163-4815-a12b-173139954f73\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:17:01 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000a56268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:17:02.988: INFO: Pod "test-new-deployment-5d9fdcc779-kmp8s" is available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-kmp8s test-new-deployment-5d9fdcc779- deployment-7319 14b208da-f73e-41d6-8039-407f30dbc496 8048 0 2023-03-02 00:16:29 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 cf6187b6-e835-4951-9fc7-360ee7448124 0xc000a56620 0xc000a56621}] [] [{k3s Update v1 2023-03-02 00:16:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf6187b6-e835-4951-9fc7-360ee7448124\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:17:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6gwll,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6gwll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unrea
chable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:16:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.136,StartTime:2023-03-02 00:16:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:16:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://2d38df739c36a300385242b008af6f01bb9a1135630db58028d8f071a1803fec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7319" for this suite. 
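Annotation: the scale-subresource spec that just finished exercises reading and updating a Deployment's replica count through the /scale subresource. Below is a minimal client-go sketch of that operation, not the suite's actual helper code; the kubeconfig path and deployment name are placeholders (the namespace is the one from the log).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the suite uses per-worker /tmp/kubeconfig-* files.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "deployment-7319", "example-deployment" // deployment name is hypothetical

	// Read the /scale subresource rather than the Deployment object itself.
	scale, err := cs.AppsV1().Deployments(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	// Update the replica count through the same subresource.
	scale.Spec.Replicas = 2
	if _, err := cs.AppsV1().Deployments(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}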
• [SLOW TEST:34.083 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":2,"skipped":116,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:22.997: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:16:23.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d" in namespace "downward-api-7293" to be "Succeeded or Failed" Mar 2 00:16:23.020: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.872995ms Mar 2 00:16:25.023: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005012023s Mar 2 00:16:27.027: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009123788s Mar 2 00:16:29.030: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011819496s Mar 2 00:16:31.034: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016092176s Mar 2 00:16:33.037: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018417591s Mar 2 00:16:35.046: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027415129s Mar 2 00:16:37.050: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031487178s Mar 2 00:16:39.055: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.036850249s Mar 2 00:16:41.059: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.040426217s Mar 2 00:16:43.067: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.048884992s Mar 2 00:16:45.070: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.051738609s Mar 2 00:16:47.072: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.054232312s Mar 2 00:16:49.075: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.056647005s Mar 2 00:16:51.079: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.06056479s Mar 2 00:16:53.083: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.064274781s Mar 2 00:16:55.086: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.067742311s Mar 2 00:16:57.089: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.070692028s Mar 2 00:16:59.092: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.073275099s Mar 2 00:17:01.095: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.076749591s Mar 2 00:17:03.099: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.080853567s STEP: Saw pod success Mar 2 00:17:03.099: INFO: Pod "downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d" satisfied condition "Succeeded or Failed" Mar 2 00:17:03.102: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d container client-container: STEP: delete the pod Mar 2 00:17:03.116: INFO: Waiting for pod downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d to disappear Mar 2 00:17:03.119: INFO: Pod downwardapi-volume-ae2b8c8d-ccce-49a7-a494-53262d08ad0d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:03.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7293" for this suite. 
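Annotation: the downward API spec above mounts a volume that publishes the container's CPU request as a file and then reads it back from the container's logs. A rough sketch of such a pod, not the framework's actual pod builder; the image, command, request value, and file path are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox", // assumed image
				// Assumed command: print the downward API file once and exit.
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							// resourceFieldRef exposes the container's own resource request.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}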
• [SLOW TEST:40.129 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:56.758: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:16:56.783: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3" in namespace "projected-4334" to be "Succeeded or Failed" Mar 2 00:16:56.788: INFO: Pod "downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.980163ms Mar 2 00:16:58.793: INFO: Pod "downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009615527s Mar 2 00:17:00.797: INFO: Pod "downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013774236s Mar 2 00:17:02.802: INFO: Pod "downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019268531s Mar 2 00:17:04.806: INFO: Pod "downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.023243229s STEP: Saw pod success Mar 2 00:17:04.806: INFO: Pod "downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3" satisfied condition "Succeeded or Failed" Mar 2 00:17:04.809: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3 container client-container: STEP: delete the pod Mar 2 00:17:04.822: INFO: Waiting for pod downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3 to disappear Mar 2 00:17:04.826: INFO: Pod downwardapi-volume-a5c29795-bb66-49da-a375-1f6b8e073bf3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:04.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4334" for this suite. 
• [SLOW TEST:8.075 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":64,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:29.960: INFO: >>> kubeConfig: /tmp/kubeconfig-178066528 STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 2 00:17:00.498: INFO: Successfully updated pod "adopt-release-l66dn" STEP: Checking that the Job readopts the Pod Mar 2 00:17:00.498: INFO: Waiting up to 15m0s for pod "adopt-release-l66dn" in namespace "job-1146" to be "adopted" Mar 2 00:17:00.501: INFO: Pod "adopt-release-l66dn": Phase="Running", Reason="", readiness=true. Elapsed: 2.794133ms Mar 2 00:17:02.506: INFO: Pod "adopt-release-l66dn": Phase="Running", Reason="", readiness=true. Elapsed: 2.008033091s Mar 2 00:17:02.506: INFO: Pod "adopt-release-l66dn" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 2 00:17:03.014: INFO: Successfully updated pod "adopt-release-l66dn" STEP: Checking that the Job releases the Pod Mar 2 00:17:03.014: INFO: Waiting up to 15m0s for pod "adopt-release-l66dn" in namespace "job-1146" to be "released" Mar 2 00:17:03.016: INFO: Pod "adopt-release-l66dn": Phase="Running", Reason="", readiness=true. Elapsed: 2.743958ms Mar 2 00:17:05.019: INFO: Pod "adopt-release-l66dn": Phase="Running", Reason="", readiness=true. Elapsed: 2.005774296s Mar 2 00:17:05.019: INFO: Pod "adopt-release-l66dn" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:05.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1146" for this suite. 
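Annotation: the Job adopt/release spec logged above first orphans a pod (so the Job controller must re-adopt it via its selector) and then strips the label the selector matches (so the controller must release it). A hedged sketch of those two patches with client-go; the label key and kubeconfig path are assumptions, the namespace and pod name are taken from the log.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, pod := "job-1146", "adopt-release-l66dn"

	// Orphan the pod: in a JSON merge patch, null deletes the field,
	// leaving the pod with no owner so the Job controller has to re-adopt it.
	orphan := []byte(`{"metadata":{"ownerReferences":null}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod, types.MergePatchType, orphan, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Release the pod: drop the selector label ("job" is an assumed key);
	// the controller then removes its controller ownerReference.
	release := []byte(`{"metadata":{"labels":{"job":null}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod, types.MergePatchType, release, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}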
• [SLOW TEST:35.067 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:28.937: INFO: >>> kubeConfig: /tmp/kubeconfig-1867053252 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 2 00:16:28.965: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:06.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2702" for this suite. 
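Annotation: the "submitted and removed" spec above opens a watch before submitting the pod and then expects to observe both the creation and the graceful deletion. A minimal sketch of such a watch loop, assuming the pod carries a label the watch can be scoped to (the label is an assumption, not what the test uses).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "pods-2702"
	w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "test=watch-me", // assumed label on the pod under test
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// The pod is created and later deleted elsewhere; this loop only observes events.
	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		if ev.Type == watch.Deleted {
			break // deletion was observed, which is what the spec verifies
		}
	}
}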
• [SLOW TEST:37.763 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:46.987: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:16:47.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617" in namespace "downward-api-542" to be "Succeeded or Failed" Mar 2 00:16:47.011: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374382ms Mar 2 00:16:49.014: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007664216s Mar 2 00:16:51.018: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011382442s Mar 2 00:16:53.022: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015121719s Mar 2 00:16:55.025: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019073848s Mar 2 00:16:57.029: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022505259s Mar 2 00:16:59.033: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026679269s Mar 2 00:17:01.037: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030767085s Mar 2 00:17:03.042: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035914912s Mar 2 00:17:05.047: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Pending", Reason="", readiness=false. Elapsed: 18.04061542s Mar 2 00:17:07.050: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.043482096s STEP: Saw pod success Mar 2 00:17:07.050: INFO: Pod "downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617" satisfied condition "Succeeded or Failed" Mar 2 00:17:07.052: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617 container client-container: STEP: delete the pod Mar 2 00:17:07.065: INFO: Waiting for pod downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617 to disappear Mar 2 00:17:07.068: INFO: Pod downwardapi-volume-e8b1be11-b67c-43ba-9ab8-951fae1ff617 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:07.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-542" for this suite. • [SLOW TEST:20.088 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.986: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename configmap W0302 00:15:02.666018 208 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.666: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name cm-test-opt-del-9423e2d8-5dcf-457c-9137-d192535d805e STEP: Creating configMap with name cm-test-opt-upd-ebfc0d72-77aa-4103-867c-cb5965f0e977 STEP: Creating the pod Mar 2 00:15:02.779: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:04.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:06.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:08.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:10.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:12.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:14.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:16.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:18.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:20.834: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:22.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:24.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:26.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:28.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:30.784: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:32.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:34.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:36.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:38.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:40.782: INFO: The 
status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:42.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:44.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:46.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:48.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:50.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:52.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:54.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:56.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:58.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:00.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:02.783: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:04.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:06.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:08.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:10.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:12.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:14.781: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:16.782: INFO: The status of Pod pod-configmaps-67c94103-4276-485d-811e-0e0078233436 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-9423e2d8-5dcf-457c-9137-d192535d805e STEP: Updating configmap cm-test-opt-upd-ebfc0d72-77aa-4103-867c-cb5965f0e977 STEP: Creating configMap with name cm-test-opt-create-16bbc90c-b62f-4884-8ebc-276f8ef2bde8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:07.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2461" for this suite. 
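Annotation: the ConfigMap spec above mounts volumes backed by ConfigMaps marked optional, then deletes one ConfigMap, updates another, and creates a third, waiting for the kubelet to reflect each change in the mounted files. A rough sketch of what one optional ConfigMap volume looks like in a pod spec; the image, command, and mount path are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox", // assumed image
				// Assumed command: keep the pod running so volume updates can be observed.
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm-del", MountPath: "/etc/cm-volume-del"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm-del",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						// Optional: the pod still starts, and the volume stays usable,
						// even if the referenced ConfigMap is missing or later deleted.
						Optional: &optional,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}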
• [SLOW TEST:125.615 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:07.607: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 2 00:17:07.663: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3773 a2554f60-3ec8-4352-895a-6986c59c2d14 8361 0 2023-03-02 00:17:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-03-02 00:17:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:17:07.663: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3773 a2554f60-3ec8-4352-895a-6986c59c2d14 8362 0 2023-03-02 00:17:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-03-02 00:17:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:07.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3773" for this suite. 
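Annotation: the Watchers spec above modifies a ConfigMap twice, deletes it, and then starts a watch from the resourceVersion returned by the first update, expecting to replay only the later MODIFIED and DELETED events (resource versions 8361 and 8362 in the log). A minimal sketch of starting a watch at a specific resourceVersion; the starting version shown here is assumed, the namespace and object name come from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "watch-3773", "e2e-watch-test-resource-version"
	rvFromFirstUpdate := "8360" // assumed: the resourceVersion returned by the first update

	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=" + name,
		ResourceVersion: rvFromFirstUpdate, // replay events that happened after this version
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Expect the second MODIFIED event followed by DELETED.
	for ev := range w.ResultChan() {
		fmt.Println("got:", ev.Type)
	}
}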
• ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:51.723: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-cff14e30-62cb-4828-a089-680e08ccda06 STEP: Creating a pod to test consume secrets Mar 2 00:16:51.749: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1" in namespace "projected-7770" to be "Succeeded or Failed" Mar 2 00:16:51.752: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.017137ms Mar 2 00:16:53.755: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00538293s Mar 2 00:16:55.761: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011685424s Mar 2 00:16:57.765: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015414283s Mar 2 00:16:59.769: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01942661s Mar 2 00:17:01.774: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024850545s Mar 2 00:17:03.778: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028806764s Mar 2 00:17:05.781: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032032346s Mar 2 00:17:07.786: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.036958325s STEP: Saw pod success Mar 2 00:17:07.786: INFO: Pod "pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1" satisfied condition "Succeeded or Failed" Mar 2 00:17:07.790: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1 container projected-secret-volume-test: STEP: delete the pod Mar 2 00:17:07.805: INFO: Waiting for pod pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1 to disappear Mar 2 00:17:07.809: INFO: Pod pod-projected-secrets-beda16e6-e32f-4a43-aa26-aac1d1fb7ca1 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:07.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7770" for this suite. 
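Annotation: the projected-secret spec above checks that a projected secret volume honours defaultMode and the pod-level fsGroup when the container runs as a non-root user. An approximate pod spec for that setup; the UID, GID, mode, image, and command are illustrative values, not the test's exact ones.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0440) // illustrative mode
	runAsUser := int64(1000)   // illustrative non-root UID
	fsGroup := int64(1001)     // illustrative fsGroup

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &runAsUser,
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "busybox", // assumed image
				// Assumed command: show the mounted file's mode and ownership.
				Command:      []string{"sh", "-c", "ls -l /etc/projected-secret"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode, // applied to every projected file
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}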
• [SLOW TEST:16.095 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:07.829: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:07.854: INFO: Endpoints addresses: [172.17.0.12] , ports: [6443] Mar 2 00:17:07.854: INFO: EndpointSlices addresses: [172.17.0.12] , ports: [6443] [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:07.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8507" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":4,"skipped":140,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.766: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename pod-network-test W0302 00:15:02.115422 204 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.115: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Performing setup for networking test in namespace pod-network-test-1529 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 2 00:15:02.118: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 2 00:15:02.304: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:04.306: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:06.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:08.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:10.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:12.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:14.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:16.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:18.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:20.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:22.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:24.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:26.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:28.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:30.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:32.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:34.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:36.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:38.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:40.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:42.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:44.306: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:46.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:48.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:50.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:52.307: INFO: The status of 
Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:54.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:56.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:58.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:00.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:02.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:04.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:06.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:08.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:10.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:12.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:14.307: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 2 00:16:14.311: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:16.314: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:18.314: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:20.314: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:22.313: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:24.314: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:26.314: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:28.316: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 2 00:17:08.347: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 2 00:17:08.347: INFO: Going to poll 10.42.0.19 on port 8083 at least 0 times, with a maximum of 34 tries before failing Mar 2 00:17:08.349: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.42.0.19:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1529 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:17:08.349: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 Mar 2 00:17:08.350: INFO: ExecWithOptions: Clientset creation Mar 2 00:17:08.350: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-1529/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.42.0.19%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:17:08.402: INFO: Found all 1 expected endpoints: [netserver-0] Mar 2 00:17:08.402: INFO: Going to poll 10.42.1.27 on port 8083 at least 0 times, with a maximum of 34 tries 
before failing Mar 2 00:17:08.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.42.1.27:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1529 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:17:08.407: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 Mar 2 00:17:08.407: INFO: ExecWithOptions: Clientset creation Mar 2 00:17:08.407: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-1529/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.42.1.27%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:17:08.440: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:08.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1529" for this suite. • [SLOW TEST:126.683 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:53.700: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Mar 2 00:16:55.741: INFO: running pods: 0 < 1 Mar 2 00:16:57.744: INFO: running pods: 0 < 1 Mar 2 00:16:59.744: INFO: running pods: 0 < 1 Mar 2 00:17:01.745: INFO: running pods: 0 < 1 Mar 2 00:17:03.745: INFO: running pods: 0 < 1 Mar 2 00:17:05.743: INFO: running pods: 0 < 1 Mar 2 00:17:07.744: INFO: running pods: 0 < 1 Mar 2 00:17:09.744: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] 
DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:11.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-348" for this suite. • [SLOW TEST:18.078 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":4,"skipped":66,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:20.145: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:16:20.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7" in namespace "downward-api-7991" to be "Succeeded or Failed" Mar 2 00:16:20.164: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.517508ms Mar 2 00:16:22.166: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004997532s Mar 2 00:16:24.169: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007533271s Mar 2 00:16:26.172: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010672396s Mar 2 00:16:28.176: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014552837s Mar 2 00:16:30.182: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020834426s Mar 2 00:16:32.185: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.023726943s Mar 2 00:16:34.188: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.027414338s Mar 2 00:16:36.192: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.030531431s Mar 2 00:16:38.195: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.034059384s Mar 2 00:16:40.200: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.038520528s Mar 2 00:16:42.203: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.041742961s Mar 2 00:16:44.206: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.044760143s Mar 2 00:16:46.209: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.04825899s Mar 2 00:16:48.219: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.058126324s Mar 2 00:16:50.223: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.061714851s Mar 2 00:16:52.227: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.065487867s Mar 2 00:16:54.229: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.068256787s Mar 2 00:16:56.244: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.082824732s Mar 2 00:16:58.248: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.086769704s Mar 2 00:17:00.252: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.090534363s Mar 2 00:17:02.256: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.095136941s Mar 2 00:17:04.260: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.098815886s Mar 2 00:17:06.264: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.102641269s Mar 2 00:17:08.270: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.109249933s Mar 2 00:17:10.274: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 50.113068171s Mar 2 00:17:12.278: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 52.116447455s STEP: Saw pod success Mar 2 00:17:12.278: INFO: Pod "downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7" satisfied condition "Succeeded or Failed" Mar 2 00:17:12.280: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7 container client-container: STEP: delete the pod Mar 2 00:17:12.295: INFO: Waiting for pod downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7 to disappear Mar 2 00:17:12.300: INFO: Pod downwardapi-volume-c780e159-c1af-43f0-a741-bf59531a6ff7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:12.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7991" for this suite. 
• [SLOW TEST:52.163 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:40.495: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-8263fd2a-eb8c-42d5-9127-9aaff19a97c1 STEP: Creating a pod to test consume secrets Mar 2 00:16:40.524: INFO: Waiting up to 5m0s for pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5" in namespace "secrets-1390" to be "Succeeded or Failed" Mar 2 00:16:40.526: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.607078ms Mar 2 00:16:42.530: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006233379s Mar 2 00:16:44.533: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009352236s Mar 2 00:16:46.536: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012083057s Mar 2 00:16:48.539: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015778437s Mar 2 00:16:50.545: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020805737s Mar 2 00:16:52.548: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024175068s Mar 2 00:16:54.551: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.027704213s Mar 2 00:16:56.557: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.032948733s Mar 2 00:16:58.560: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.036397203s Mar 2 00:17:00.564: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.039944601s Mar 2 00:17:02.567: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.043460238s Mar 2 00:17:04.570: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.04648869s Mar 2 00:17:06.574: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.049988706s Mar 2 00:17:08.578: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.053939728s Mar 2 00:17:10.581: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.057273988s Mar 2 00:17:12.584: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.059907428s STEP: Saw pod success Mar 2 00:17:12.584: INFO: Pod "pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5" satisfied condition "Succeeded or Failed" Mar 2 00:17:12.586: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5 container secret-volume-test: STEP: delete the pod Mar 2 00:17:12.601: INFO: Waiting for pod pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5 to disappear Mar 2 00:17:12.609: INFO: Pod pod-secrets-4f2fcef2-c1b2-4299-9b38-110605fdb2f5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:12.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1390" for this suite. • [SLOW TEST:32.121 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":60,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:03.129: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:14.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6331" for this suite. 
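For context, a minimal sketch (core/v1 typed Go API; the quota numbers are illustrative, not the test's) of the kind of ResourceQuota the "capture the life of a service" case above works with: service-count limits are set, creating Services consumes quota, and deleting them releases it, which is what the "captures service creation" and "released usage" steps verify.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceServices:              resource.MustParse("10"),
				corev1.ResourceServicesNodePorts:     resource.MustParse("1"),
				corev1.ResourceServicesLoadBalancers: resource.MustParse("1"),
			},
		},
	}
	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
	// With NodePort quota already consumed, a LoadBalancer Service that also needs a
	// NodePort would exceed the remaining quota and be rejected, matching the
	// "Not allowing a LoadBalancer Service with NodePort ..." step in the log above.
}
```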
• [SLOW TEST:11.152 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":4,"skipped":64,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:45.699: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating the pod Mar 2 00:16:45.735: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:47.738: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:49.738: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:51.740: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:53.738: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:55.745: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:57.739: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:59.738: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:01.739: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:03.738: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:05.738: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:07.740: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:09.738: INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:11.738: 
INFO: The status of Pod annotationupdate481e230a-a45e-48b8-894d-3981f2785933 is Running (Ready = true) Mar 2 00:17:12.262: INFO: Successfully updated pod "annotationupdate481e230a-a45e-48b8-894d-3981f2785933" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:14.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-961" for this suite. • [SLOW TEST:28.587 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:38.701: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-25b62d5b-9b94-4c79-98cf-507b1baa96a7 STEP: Creating a pod to test consume secrets Mar 2 00:16:38.752: INFO: Waiting up to 5m0s for pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a" in namespace "secrets-904" to be "Succeeded or Failed" Mar 2 00:16:38.755: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.15595ms Mar 2 00:16:40.760: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008181327s Mar 2 00:16:42.763: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011607389s Mar 2 00:16:44.768: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015877843s Mar 2 00:16:46.772: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020586278s Mar 2 00:16:48.776: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024238318s Mar 2 00:16:50.783: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03113657s Mar 2 00:16:52.786: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034492106s Mar 2 00:16:54.789: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.037430501s Mar 2 00:16:56.796: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043987723s Mar 2 00:16:58.801: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.048960659s Mar 2 00:17:00.804: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.052647973s Mar 2 00:17:02.808: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.056587436s Mar 2 00:17:04.813: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.060930725s Mar 2 00:17:06.816: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.06413984s Mar 2 00:17:08.819: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.066865868s Mar 2 00:17:10.822: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.07036909s Mar 2 00:17:12.826: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.073832095s Mar 2 00:17:14.830: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.078037268s STEP: Saw pod success Mar 2 00:17:14.830: INFO: Pod "pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a" satisfied condition "Succeeded or Failed" Mar 2 00:17:14.832: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a container secret-volume-test: STEP: delete the pod Mar 2 00:17:14.852: INFO: Waiting for pod pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a to disappear Mar 2 00:17:14.859: INFO: Pod pod-secrets-7ecca040-5a39-4fc5-8824-6668789a0b9a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:14.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-904" for this suite. STEP: Destroying namespace "secret-namespace-1220" for this suite. 
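For context, a minimal sketch (core/v1 typed Go API; names, data, and image are illustrative) of the secret-plus-pod pairing the Secrets volume cases above exercise: a Secret is created and mounted read-only into a pod whose container just reads the file. The cross-namespace variant additionally creates a same-named Secret in a second namespace to confirm it does not interfere with the mount.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```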
• [SLOW TEST:36.182 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":92,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:14.290: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting the auto-created API token Mar 2 00:17:14.840: INFO: created pod pod-service-account-defaultsa Mar 2 00:17:14.840: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 2 00:17:14.845: INFO: created pod pod-service-account-mountsa Mar 2 00:17:14.845: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 2 00:17:14.850: INFO: created pod pod-service-account-nomountsa Mar 2 00:17:14.850: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 2 00:17:14.855: INFO: created pod pod-service-account-defaultsa-mountspec Mar 2 00:17:14.855: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 2 00:17:14.860: INFO: created pod pod-service-account-mountsa-mountspec Mar 2 00:17:14.860: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 2 00:17:14.866: INFO: created pod pod-service-account-nomountsa-mountspec Mar 2 00:17:14.866: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 2 00:17:14.871: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 2 00:17:14.871: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 2 00:17:14.878: INFO: created pod pod-service-account-mountsa-nomountspec Mar 2 00:17:14.878: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 2 00:17:14.885: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 2 00:17:14.885: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:14.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2471" for this suite. 
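For context, a minimal sketch (core/v1 typed Go API) of the two knobs the "opting out of API token automount" case above combines: automountServiceAccountToken on the ServiceAccount and the pod-level override, with the pod spec taking precedence when both are set. Only the pod name is taken from the log; the rest is illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	sa := corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: boolPtr(false), // SA-level default: no token volume
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa-mountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           sa.Name,
			AutomountServiceAccountToken: boolPtr(true), // pod-level override wins: token is mounted
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "busybox", // placeholder
			}},
		},
	}
	for _, obj := range []interface{}{sa, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```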
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:50.471: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:16:50.784: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:16:52.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:54.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:56.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:16:58.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:00.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:02.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:04.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:06.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:08.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:10.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:12.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 16, 50, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:17:15.804: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:15.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6178" for this suite. STEP: Destroying namespace "webhook-6178-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.388 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:56.557: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:56.619: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:58.622: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:00.622: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:02.622: INFO: The 
status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:04.622: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:06.622: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:08.623: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:10.623: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:12.626: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:14.623: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:16.624: INFO: The status of Pod pod-secrets-0d2be0da-3475-427a-9024-d1dfe87dcea5 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:16.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-729" for this suite. • [SLOW TEST:20.094 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":4,"skipped":63,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:59.794: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Given a Pod with a 'name' label pod-adoption is created Mar 2 00:16:59.828: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:01.834: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:03.832: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:05.833: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:07.834: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 
00:17:09.831: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:11.832: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:13.832: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:15.833: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:16.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6213" for this suite. • [SLOW TEST:17.065 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":5,"skipped":143,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.940: INFO: >>> kubeConfig: /tmp/kubeconfig-2889160273 STEP: Building a namespace api object, basename replicaset W0302 00:15:02.651075 359 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.651: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:15:02.733: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 2 00:15:07.735: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Mar 2 00:16:25.749: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Mar 2 00:16:25.767: INFO: observed ReplicaSet test-rs in namespace replicaset-6179 with ReadyReplicas 1, AvailableReplicas 1 Mar 2 00:16:25.815: INFO: observed ReplicaSet test-rs in namespace replicaset-6179 with ReadyReplicas 1, AvailableReplicas 1 Mar 2 00:16:25.848: INFO: observed ReplicaSet test-rs in namespace replicaset-6179 with ReadyReplicas 1, AvailableReplicas 1 Mar 2 00:16:25.872: INFO: observed ReplicaSet test-rs in namespace replicaset-6179 with ReadyReplicas 1, AvailableReplicas 1 Mar 2 00:16:34.090: INFO: observed ReplicaSet test-rs in namespace replicaset-6179 with ReadyReplicas 2, AvailableReplicas 2 Mar 2 00:17:18.386: INFO: observed Replicaset test-rs in namespace replicaset-6179 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:18.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6179" for this suite. 
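For context, a minimal sketch (apps/v1 typed Go API plus a merge-patch body) of the flow the ReplicaSet "Replace and Patch" case above goes through: create a ReplicaSet, raise .spec.replicas via an update, then patch it and watch ReadyReplicas converge. The client-go calls are only indicated in comments; names, image, and counts are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod"}
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rs"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: "httpd:2.4"}}, // placeholder image
				},
			},
		},
	}

	// Scale up: bump replicas and send an Update
	// (e.g. clientset.AppsV1().ReplicaSets(ns).Update(ctx, &rs, metav1.UpdateOptions{})).
	rs.Spec.Replicas = int32Ptr(3)

	// Patch: a merge-patch body like this can be sent with
	// clientset.AppsV1().ReplicaSets(ns).Patch(ctx, "test-rs", types.MergePatchType, patchBytes, ...).
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{"labels": map[string]string{"test-rs": "patched"}},
	}
	patchBytes, _ := json.Marshal(patch)

	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
	fmt.Println(string(patchBytes))
}
```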
• [SLOW TEST:136.455 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":1,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:06.701: INFO: >>> kubeConfig: /tmp/kubeconfig-1867053252 STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test substitution in volume subpath Mar 2 00:17:06.725: INFO: Waiting up to 5m0s for pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069" in namespace "var-expansion-7968" to be "Succeeded or Failed" Mar 2 00:17:06.729: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064355ms Mar 2 00:17:08.732: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006802365s Mar 2 00:17:10.736: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010780548s Mar 2 00:17:12.742: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017407177s Mar 2 00:17:14.746: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020579189s Mar 2 00:17:16.750: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024809188s Mar 2 00:17:18.753: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028460398s STEP: Saw pod success Mar 2 00:17:18.753: INFO: Pod "var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069" satisfied condition "Succeeded or Failed" Mar 2 00:17:18.756: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069 container dapi-container: STEP: delete the pod Mar 2 00:17:18.770: INFO: Waiting for pod var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069 to disappear Mar 2 00:17:18.774: INFO: Pod var-expansion-4af53a0a-d01d-4ea8-9c9f-27ec4b34b069 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:18.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7968" for this suite. 
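For context, a minimal sketch (core/v1 typed Go API; names, image, command, and the use of subPathExpr are illustrative assumptions) of the feature the Variable Expansion subpath case above covers: an env var sourced from the downward API is substituted into a volume mount path.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder
				Command: []string{"sh", "-c", "ls /volume_mount"},
				Env: []corev1.EnvVar{{
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "$(POD_NAME)", // expanded from the container env at mount time
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```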
• [SLOW TEST:12.083 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:14.288: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:14.308: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 2 00:17:16.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 create -f -' Mar 2 00:17:16.614: INFO: stderr: "" Mar 2 00:17:16.614: INFO: stdout: "e2e-test-crd-publish-openapi-3156-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 2 00:17:16.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 delete e2e-test-crd-publish-openapi-3156-crds test-foo' Mar 2 00:17:16.661: INFO: stderr: "" Mar 2 00:17:16.661: INFO: stdout: "e2e-test-crd-publish-openapi-3156-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 2 00:17:16.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 apply -f -' Mar 2 00:17:16.777: INFO: stderr: "" Mar 2 00:17:16.777: INFO: stdout: "e2e-test-crd-publish-openapi-3156-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 2 00:17:16.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 delete e2e-test-crd-publish-openapi-3156-crds test-foo' Mar 2 00:17:16.825: INFO: stderr: "" Mar 2 00:17:16.825: INFO: stdout: "e2e-test-crd-publish-openapi-3156-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with value outside defined enum values Mar 2 00:17:16.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 create -f -' Mar 2 00:17:16.927: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 2 00:17:16.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 
--namespace=crd-publish-openapi-4802 create -f -' Mar 2 00:17:17.023: INFO: rc: 1 Mar 2 00:17:17.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 apply -f -' Mar 2 00:17:17.121: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 2 00:17:17.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 create -f -' Mar 2 00:17:17.220: INFO: rc: 1 Mar 2 00:17:17.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 --namespace=crd-publish-openapi-4802 apply -f -' Mar 2 00:17:17.320: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 2 00:17:17.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 explain e2e-test-crd-publish-openapi-3156-crds' Mar 2 00:17:17.425: INFO: stderr: "" Mar 2 00:17:17.425: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3156-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 2 00:17:17.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 explain e2e-test-crd-publish-openapi-3156-crds.metadata' Mar 2 00:17:17.530: INFO: stderr: "" Mar 2 00:17:17.530: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3156-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. 
It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 2 00:17:17.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 explain e2e-test-crd-publish-openapi-3156-crds.spec' Mar 2 00:17:17.635: INFO: stderr: "" Mar 2 00:17:17.635: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3156-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 2 00:17:17.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 explain e2e-test-crd-publish-openapi-3156-crds.spec.bars' Mar 2 00:17:17.743: INFO: stderr: "" Mar 2 00:17:17.743: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3156-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 2 00:17:17.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=crd-publish-openapi-4802 explain e2e-test-crd-publish-openapi-3156-crds.spec.bars2' Mar 2 00:17:17.849: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:19.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4802" for this suite. 
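For context, a minimal sketch (apiextensions.k8s.io/v1 typed Go API; group, kind, and field names are illustrative) of the shape of CRD the CustomResourcePublishOpenAPI case above relies on: a structural OpenAPI v3 schema attached to the served version is what lets kubectl validate custom resources client-side and answer `kubectl explain`, as seen in the log.

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:        "object",
						Description: "Foo CRD for Testing",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type:        "object",
								Description: "Specification of Foo",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"bars": {
										Type:        "array",
										Description: "List of Bars and their specs.",
										Items: &apiextensionsv1.JSONSchemaPropsOrArray{
											Schema: &apiextensionsv1.JSONSchemaProps{
												Type:     "object",
												Required: []string{"name"},
												Properties: map[string]apiextensionsv1.JSONSchemaProps{
													"name":    {Type: "string", Description: "Name of Bar."},
													"age":     {Type: "string", Description: "Age of Bar."},
													"feeling": {Type: "string", Description: "Whether Bar is feeling great."},
												},
											},
										},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```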
• [SLOW TEST:5.397 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":4,"skipped":52,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:44.898: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:16:44.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d" in namespace "projected-3034" to be "Succeeded or Failed" Mar 2 00:16:44.919: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.46042ms Mar 2 00:16:46.922: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004817954s Mar 2 00:16:48.925: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007824078s Mar 2 00:16:50.929: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011835882s Mar 2 00:16:52.931: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014563017s Mar 2 00:16:54.935: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01799567s Mar 2 00:16:56.938: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02142148s Mar 2 00:16:58.942: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025155764s Mar 2 00:17:00.945: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.02781594s Mar 2 00:17:02.949: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.031565102s Mar 2 00:17:04.952: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.034852527s Mar 2 00:17:06.955: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.038232778s Mar 2 00:17:08.959: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.042210863s Mar 2 00:17:10.964: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.04744928s Mar 2 00:17:12.972: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.054627066s Mar 2 00:17:14.976: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.059096902s Mar 2 00:17:16.980: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.06281483s Mar 2 00:17:18.984: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.066907189s Mar 2 00:17:20.988: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.071520795s Mar 2 00:17:22.992: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.075318161s STEP: Saw pod success Mar 2 00:17:22.992: INFO: Pod "downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d" satisfied condition "Succeeded or Failed" Mar 2 00:17:22.995: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d container client-container: STEP: delete the pod Mar 2 00:17:23.009: INFO: Waiting for pod downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d to disappear Mar 2 00:17:23.014: INFO: Pod downwardapi-volume-5429d166-ccb9-4557-9c33-f89407dd287d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:23.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3034" for this suite. 
• [SLOW TEST:38.124 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:07.672: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-340ad49d-09f4-4dbc-9a3c-9cc03ef361a3 STEP: Creating a pod to test consume configMaps Mar 2 00:17:07.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a" in namespace "configmap-5888" to be "Succeeded or Failed" Mar 2 00:17:07.717: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.496615ms Mar 2 00:17:09.720: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007535601s Mar 2 00:17:11.724: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011902156s Mar 2 00:17:13.728: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015430349s Mar 2 00:17:15.730: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018031422s Mar 2 00:17:17.734: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021733924s Mar 2 00:17:19.737: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025181431s Mar 2 00:17:21.741: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.028696706s Mar 2 00:17:23.744: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.031884489s STEP: Saw pod success Mar 2 00:17:23.744: INFO: Pod "pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a" satisfied condition "Succeeded or Failed" Mar 2 00:17:23.747: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a container agnhost-container: STEP: delete the pod Mar 2 00:17:23.760: INFO: Waiting for pod pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a to disappear Mar 2 00:17:23.764: INFO: Pod pod-configmaps-1beed4a9-b47c-4677-9152-b0e63968152a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:23.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5888" for this suite. • [SLOW TEST:16.099 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:11.781: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 2 00:17:11.806: INFO: Waiting up to 5m0s for pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075" in namespace "emptydir-6076" to be "Succeeded or Failed" Mar 2 00:17:11.809: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075": Phase="Pending", Reason="", readiness=false. Elapsed: 3.553054ms Mar 2 00:17:13.813: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007362241s Mar 2 00:17:15.816: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010821072s Mar 2 00:17:17.821: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014896643s Mar 2 00:17:19.824: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01862304s Mar 2 00:17:21.828: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021874044s Mar 2 00:17:23.831: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.02528462s STEP: Saw pod success Mar 2 00:17:23.831: INFO: Pod "pod-69857aae-827d-4ab9-a679-d4b34a9d2075" satisfied condition "Succeeded or Failed" Mar 2 00:17:23.834: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-69857aae-827d-4ab9-a679-d4b34a9d2075 container test-container: STEP: delete the pod Mar 2 00:17:23.852: INFO: Waiting for pod pod-69857aae-827d-4ab9-a679-d4b34a9d2075 to disappear Mar 2 00:17:23.857: INFO: Pod pod-69857aae-827d-4ab9-a679-d4b34a9d2075 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:23.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6076" for this suite. • [SLOW TEST:12.085 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:04.835: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:24.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1335" for this suite. 
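The Kubelet test above schedules a command that always fails and then checks the container's terminated reason. A minimal sketch of the same idea (pod name and image are illustrative, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  restartPolicy: Never
  containers:
  - name: always-fails
    image: busybox
    command: ["/bin/false"]   # exits non-zero immediately
EOF
# Once the container has exited, its terminated state carries a reason
# ("Error" for a non-zero exit), which is what the test inspects:
kubectl get pod always-fails \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'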
• [SLOW TEST:20.063 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":67,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:24.905: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Starting the proxy Mar 2 00:17:24.920: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1801756564 --namespace=kubectl-2859 proxy --unix-socket=/tmp/kubectl-proxy-unix1118949388/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:24.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2859" for this suite. 
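The proxy test above can be exercised by hand: start kubectl proxy on a Unix socket and query the API through it (the socket path is arbitrary):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1                                                  # give the proxy a moment to bind
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1                                                  # stop the background proxy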
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":6,"skipped":78,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:26.017: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1573 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Mar 2 00:16:26.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-578814310 --namespace=kubectl-110 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Mar 2 00:16:26.079: INFO: stderr: "" Mar 2 00:16:26.079: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 2 00:17:01.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-578814310 --namespace=kubectl-110 get pod e2e-test-httpd-pod -o json' Mar 2 00:17:01.182: INFO: stderr: "" Mar 2 00:17:01.182: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-03-02T00:16:26Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-110\",\n \"resourceVersion\": \"8042\",\n \"uid\": \"5b088d27-1dac-4810-9a71-1bdad5952f29\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-npgr5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k3s-agent-1-mid5l7\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-npgr5\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": 
{\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-02T00:16:26Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-02T00:16:27Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-02T00:16:27Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-03-02T00:16:26Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://813d3840e530a2ba2044ffe9136c67cf946723ffeb6a96a6ac2dbf30624b73e6\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-03-02T00:16:27Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.42.1.118\",\n \"podIPs\": [\n {\n \"ip\": \"10.42.1.118\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-03-02T00:16:26Z\"\n }\n}\n" STEP: replace the image in the pod Mar 2 00:17:01.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-578814310 --namespace=kubectl-110 replace -f -' Mar 2 00:17:01.608: INFO: stderr: "" Mar 2 00:17:01.608: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 Mar 2 00:17:01.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-578814310 --namespace=kubectl-110 delete pods e2e-test-httpd-pod' Mar 2 00:17:26.189: INFO: stderr: "" Mar 2 00:17:26.189: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:26.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-110" for this suite. 
• [SLOW TEST:60.180 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1570 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":2,"skipped":70,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:02.087: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-fd2721cc-7f0d-45fd-9b1a-fd9055ce12d0 STEP: Creating a pod to test consume secrets Mar 2 00:17:02.130: INFO: Waiting up to 5m0s for pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d" in namespace "secrets-7712" to be "Succeeded or Failed" Mar 2 00:17:02.135: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574913ms Mar 2 00:17:04.139: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008796578s Mar 2 00:17:06.143: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012812522s Mar 2 00:17:08.149: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019076101s Mar 2 00:17:10.152: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021825078s Mar 2 00:17:12.157: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026814898s Mar 2 00:17:14.161: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031364321s Mar 2 00:17:16.165: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035067799s Mar 2 00:17:18.169: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.039312883s Mar 2 00:17:20.176: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.046119448s Mar 2 00:17:22.180: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.049793547s Mar 2 00:17:24.185: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.054826263s Mar 2 00:17:26.188: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.057878309s STEP: Saw pod success Mar 2 00:17:26.188: INFO: Pod "pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d" satisfied condition "Succeeded or Failed" Mar 2 00:17:26.191: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d container secret-volume-test: STEP: delete the pod Mar 2 00:17:26.207: INFO: Waiting for pod pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d to disappear Mar 2 00:17:26.212: INFO: Pod pod-secrets-07b73638-9dd3-46bf-9553-ec74859cdf6d no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:26.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7712" for this suite. • [SLOW TEST:24.135 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":108,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:37.210: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:37.240: INFO: created pod Mar 2 00:16:37.240: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-336" to be "Succeeded or Failed" Mar 2 00:16:37.243: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917677ms Mar 2 00:16:39.246: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006121069s Mar 2 00:16:41.249: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009113811s Mar 2 00:16:43.252: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011692317s Mar 2 00:16:45.256: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015574607s Mar 2 00:16:47.259: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.019082265s Mar 2 00:16:49.262: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022241362s Mar 2 00:16:51.266: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.026190518s Mar 2 00:16:53.270: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 16.030311259s Mar 2 00:16:55.274: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.03399471s Mar 2 00:16:57.278: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.037745238s STEP: Saw pod success Mar 2 00:16:57.278: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Mar 2 00:17:27.281: INFO: polling logs Mar 2 00:17:27.289: INFO: Pod logs: I0302 00:16:38.659730 1 log.go:195] OK: Got token I0302 00:16:38.659751 1 log.go:195] validating with in-cluster discovery I0302 00:16:38.659973 1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local I0302 00:16:38.659988 1 log.go:195] Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-336:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1677716798, NotBefore:1677716198, IssuedAt:1677716198, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-336", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"f4f98e5b-c440-484a-80dd-b70b9ec04312"}}} I0302 00:16:38.663765 1 log.go:195] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local I0302 00:16:38.666453 1 log.go:195] OK: Validated signature on JWT I0302 00:16:38.666569 1 log.go:195] OK: Got valid claims from token! I0302 00:16:38.666592 1 log.go:195] Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-336:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1677716798, NotBefore:1677716198, IssuedAt:1677716198, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-336", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"f4f98e5b-c440-484a-80dd-b70b9ec04312"}}} Mar 2 00:17:27.289: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:27.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-336" for this suite. 
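The ServiceAccountIssuerDiscovery test above validates an audience-scoped token against the issuer that the API server itself advertises. The discovery endpoints can be inspected directly; the token command is only available on newer clients and the audience below is illustrative:

kubectl get --raw /.well-known/openid-configuration          # issuer, jwks_uri, supported algorithms
kubectl get --raw /openid/v1/jwks                            # public keys used to sign service account tokens
kubectl create token default --audience=oidc-discovery-test  # kubectl v1.24+ only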
• [SLOW TEST:50.093 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:39.091: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327 STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod liveness-1297b8b6-05f0-4515-aeb3-311de8754101 in namespace container-probe-5335 Mar 2 00:17:03.132: INFO: Started pod liveness-1297b8b6-05f0-4515-aeb3-311de8754101 in namespace container-probe-5335 STEP: checking the pod's current state and verifying that restartCount is present Mar 2 00:17:03.135: INFO: Initial restart count of pod liveness-1297b8b6-05f0-4515-aeb3-311de8754101 is 0 Mar 2 00:17:29.193: INFO: Restart count of pod container-probe-5335/liveness-1297b8b6-05f0-4515-aeb3-311de8754101 is now 2 (26.057391667s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:29.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5335" for this suite. 
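The probe test above uses the agnhost "liveness" server, which (in these e2e images) serves /healthz on port 8080 and begins failing after a short while, so the kubelet restarts the container. A minimal sketch, with an illustrative pod name and probe timings:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    args: ["liveness"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
EOF
# The restart count climbs as the probe starts failing, which is what the test asserts on:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'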
• [SLOW TEST:50.117 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:12.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:17:12.628: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:17:12.669: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:17:14.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:16.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:18.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:20.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:22.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:24.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:26.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:28.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 12, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:17:31.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 2 00:17:31.705: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:31.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6699" for this suite. STEP: Destroying namespace "webhook-6699-markers" for this suite. 
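The admission webhook test above registers a validating webhook that rejects CustomResourceDefinition creation and then confirms that a CRD create is denied. A sketch of such a registration follows; the service reference, handler path, and CA bundle are placeholders, since the suite generates its own certificates and webhook backend:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-creation.example.com
webhooks:
- name: deny-crd-creation.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: default            # placeholder: namespace of the webhook backend
      name: e2e-test-webhook        # placeholder: Service fronting the webhook pod
      path: /crd                    # placeholder: handler path served by the backend
    caBundle: LS0tLS1CRUdJTi...     # placeholder: base64-encoded CA bundle for the serving cert
EOF
# With this in place, any attempt to create a CustomResourceDefinition is sent to the
# backend for admission and rejected, which is the denial the test observes above.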
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.484 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:51.107: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:16:51.129: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 2 00:16:56.139: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 2 00:17:06.153: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 2 00:17:08.158: INFO: Creating deployment "test-rollover-deployment" Mar 2 00:17:08.172: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 2 00:17:10.182: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 2 00:17:10.190: INFO: Ensure that both replica sets have 1 created replica Mar 2 00:17:10.197: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 2 00:17:10.207: INFO: Updating deployment test-rollover-deployment Mar 2 00:17:10.207: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 2 00:17:12.215: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 2 00:17:12.220: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 2 00:17:12.225: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:12.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:14.231: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:14.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:16.232: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:16.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:18.232: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:18.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:20.232: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:20.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:22.231: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:22.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:24.232: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:24.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:26.233: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:26.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:28.231: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:28.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:30.231: INFO: all replica sets need to contain the pod-template-hash label Mar 2 00:17:30.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:32.231: INFO: Mar 2 00:17:32.231: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Mar 2 00:17:32.239: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6777 2fa519f1-1d70-498a-b38f-2eb204b4cae9 9373 2 2023-03-02 00:17:08 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-02 00:17:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:17:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000a9c248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-03-02 00:17:08 +0000 UTC,LastTransitionTime:2023-03-02 00:17:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-77db6f9f48" has successfully progressed.,LastUpdateTime:2023-03-02 00:17:30 +0000 UTC,LastTransitionTime:2023-03-02 00:17:08 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 2 00:17:32.241: INFO: New ReplicaSet "test-rollover-deployment-77db6f9f48" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-77db6f9f48 deployment-6777 416ebe6d-0970-4f36-ae34-ad15c87be8d7 9364 2 2023-03-02 00:17:10 +0000 UTC map[name:rollover-pod pod-template-hash:77db6f9f48] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 2fa519f1-1d70-498a-b38f-2eb204b4cae9 0xc000a9c6e7 0xc000a9c6e8}] [] [{k3s Update apps/v1 2023-03-02 00:17:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2fa519f1-1d70-498a-b38f-2eb204b4cae9\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:17:30 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 77db6f9f48,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 
[] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000a9c7b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:17:32.241: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 2 00:17:32.241: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6777 813dcf54-0156-434c-a620-a578479061e1 9372 2 2023-03-02 00:16:51 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 2fa519f1-1d70-498a-b38f-2eb204b4cae9 0xc000a9c7ff 0xc000a9c810}] [] [{e2e.test Update apps/v1 2023-03-02 00:16:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:17:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2fa519f1-1d70-498a-b38f-2eb204b4cae9\"}":{}}},"f:spec":{"f:replicas":{}}} } {k3s Update apps/v1 2023-03-02 00:17:30 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000a9c8d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:17:32.241: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-6777 9b12ad65-4806-4afc-8569-7ab9f660dd29 8524 2 2023-03-02 00:17:08 +0000 UTC map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 2fa519f1-1d70-498a-b38f-2eb204b4cae9 0xc000a9c5c7 0xc000a9c5c8}] [] [{k3s Update apps/v1 2023-03-02 00:17:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2fa519f1-1d70-498a-b38f-2eb204b4cae9\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:17:10 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000a9c688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:17:32.244: INFO: Pod "test-rollover-deployment-77db6f9f48-v2hjb" is available: &Pod{ObjectMeta:{test-rollover-deployment-77db6f9f48-v2hjb test-rollover-deployment-77db6f9f48- deployment-6777 b1fbd7f8-e37d-4ece-8878-d50a52bf0d31 9038 0 2023-03-02 00:17:10 +0000 UTC map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [{apps/v1 ReplicaSet test-rollover-deployment-77db6f9f48 416ebe6d-0970-4f36-ae34-ad15c87be8d7 0xc002ea3467 0xc002ea3468}] [] [{k3s Update v1 2023-03-02 00:17:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"416ebe6d-0970-4f36-ae34-ad15c87be8d7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:17:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.169\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kwzkj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwzkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unrea
chable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:17:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:17:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:17:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.169,StartTime:2023-03-02 00:17:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:17:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://73caeb2f9a2ccd6f91d75265cf94195b0b3e9daf4e64454fbef223abfdaf720b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:32.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6777" for this suite. 
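The deployment dumped above rolls over with Strategy RollingUpdate, MaxSurge:1, MaxUnavailable:0 and MinReadySeconds:10, so the controller surges one new agnhost replica, requires it to stay ready for ten seconds, and only then drops both old replica sets (test-rollover-controller and the abandoned 784bc44b77 revision) to zero, which is what the final status shows. As a hedged illustration only, the sketch below builds an equivalently shaped Deployment with the k8s.io/api types; the object name and the YAML round-trip are assumptions and are not taken from the e2e test source.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)

	// Deployment shaped like the dumped "test-rollover-deployment":
	// surge by one pod, never drop below the desired count, and require
	// ten seconds of readiness before a new pod counts as available.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rollover-example"}, // illustrative name
		Spec: appsv1.DeploymentSpec{
			Replicas:        int32Ptr(1),
			MinReadySeconds: 10,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "rollover-pod"},
			},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "rollover-pod"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
					}},
				},
			},
		},
	}

	out, _ := yaml.Marshal(d)
	fmt.Println(string(out))
}

With MaxUnavailable at zero the only way the controller can make progress is by surging, which is why the log briefly reports Replicas:2 with one unavailable before the old replica sets are scaled down.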
• [SLOW TEST:41.144 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":5,"skipped":147,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:16.655: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 2 00:17:16.679: INFO: Waiting up to 5m0s for pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa" in namespace "security-context-6403" to be "Succeeded or Failed" Mar 2 00:17:16.685: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249943ms Mar 2 00:17:18.688: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009171948s Mar 2 00:17:20.691: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012636092s Mar 2 00:17:22.696: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017353022s Mar 2 00:17:24.701: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021896823s Mar 2 00:17:26.704: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025752747s Mar 2 00:17:28.709: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02997709s Mar 2 00:17:30.713: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034298155s Mar 2 00:17:32.716: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.037528089s STEP: Saw pod success Mar 2 00:17:32.716: INFO: Pod "security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa" satisfied condition "Succeeded or Failed" Mar 2 00:17:32.719: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa container test-container: STEP: delete the pod Mar 2 00:17:32.740: INFO: Waiting for pod security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa to disappear Mar 2 00:17:32.743: INFO: Pod security-context-c0e0329c-1698-4087-a2a8-66845a03c6fa no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:32.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6403" for this suite. • [SLOW TEST:16.097 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:02.116: INFO: >>> kubeConfig: /tmp/kubeconfig-2645107774 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 2 00:17:02.151: INFO: Waiting up to 5m0s for pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995" in namespace "emptydir-1041" to be "Succeeded or Failed" Mar 2 00:17:02.158: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.821868ms Mar 2 00:17:04.162: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0105261s Mar 2 00:17:06.165: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013734562s Mar 2 00:17:08.175: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023900999s Mar 2 00:17:10.179: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027356006s Mar 2 00:17:12.182: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031010943s Mar 2 00:17:14.186: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.034669195s Mar 2 00:17:16.189: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 14.037575228s Mar 2 00:17:18.192: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 16.041162504s Mar 2 00:17:20.198: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 18.047104168s Mar 2 00:17:22.202: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 20.051118843s Mar 2 00:17:24.206: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 22.054562313s Mar 2 00:17:26.210: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 24.058736429s Mar 2 00:17:28.214: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 26.062418399s Mar 2 00:17:30.218: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 28.06631143s Mar 2 00:17:32.222: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Pending", Reason="", readiness=false. Elapsed: 30.071190903s Mar 2 00:17:34.226: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.074876818s STEP: Saw pod success Mar 2 00:17:34.226: INFO: Pod "pod-efc93f13-68ea-4928-8707-57ac6cb1c995" satisfied condition "Succeeded or Failed" Mar 2 00:17:34.229: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-efc93f13-68ea-4928-8707-57ac6cb1c995 container test-container: STEP: delete the pod Mar 2 00:17:34.246: INFO: Waiting for pod pod-efc93f13-68ea-4928-8707-57ac6cb1c995 to disappear Mar 2 00:17:34.251: INFO: Pod pod-efc93f13-68ea-4928-8707-57ac6cb1c995 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:34.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1041" for this suite. 
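The spec above ran pod-efc93f13-68ea-4928-8707-57ac6cb1c995 to completion to check an EmptyDir volume on the default (disk-backed) medium with 0777 permissions while the container runs as a non-root user. A minimal sketch of the kind of pod this exercises, built with the k8s.io/api types, follows; the pod name, UID, image, command and mount path are assumptions for illustration rather than the framework's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Pod in the spirit of the "(non-root,0777,default)" emptydir case:
	// an EmptyDir on the default medium, mounted into a container that
	// runs as a non-root UID and exits on its own so the pod can reach
	// the "Succeeded" phase the test waits for.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // assumed non-root UID
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}, // default medium
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.39", // assumed image with a shell
				Command: []string{"sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/volume",
				}},
			}},
		},
	}

	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}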
• [SLOW TEST:32.144 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:16.862: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:16.887: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5" in namespace "security-context-test-5990" to be "Succeeded or Failed" Mar 2 00:17:16.891: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248574ms Mar 2 00:17:18.894: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006993364s Mar 2 00:17:20.897: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010838491s Mar 2 00:17:22.901: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014576916s Mar 2 00:17:24.904: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0171728s Mar 2 00:17:26.908: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020939145s Mar 2 00:17:28.912: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025011911s Mar 2 00:17:30.915: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.028583244s Mar 2 00:17:32.919: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.032608057s Mar 2 00:17:34.925: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.037920963s Mar 2 00:17:34.925: INFO: Pod "alpine-nnp-false-60d89268-ea12-4cca-838a-d9266b2da3e5" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:34.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5990" for this suite. • [SLOW TEST:18.077 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":146,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:14.891: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-map-8d7a7f19-a7a8-458d-8b55-926253131b33 STEP: Creating a pod to test consume secrets Mar 2 00:17:14.927: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc" in namespace "projected-9641" to be "Succeeded or Failed" Mar 2 00:17:14.931: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661631ms Mar 2 00:17:16.935: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007366323s Mar 2 00:17:18.938: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011033184s Mar 2 00:17:20.943: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015972238s Mar 2 00:17:22.947: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019412756s Mar 2 00:17:24.950: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022184956s Mar 2 00:17:26.953: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.025449468s Mar 2 00:17:28.958: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030645226s Mar 2 00:17:30.962: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.034514595s Mar 2 00:17:32.966: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.03846686s Mar 2 00:17:34.972: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.044473514s STEP: Saw pod success Mar 2 00:17:34.972: INFO: Pod "pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc" satisfied condition "Succeeded or Failed" Mar 2 00:17:34.975: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc container projected-secret-volume-test: STEP: delete the pod Mar 2 00:17:35.007: INFO: Waiting for pod pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc to disappear Mar 2 00:17:35.013: INFO: Pod pod-projected-secrets-79d91a99-bdbc-4bce-ab04-2deaa348f0fc no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9641" for this suite. • [SLOW TEST:20.131 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:32.105: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create set of pods Mar 2 00:16:32.121: INFO: created test-pod-1 Mar 2 00:16:32.124: INFO: created test-pod-2 Mar 2 00:16:32.128: INFO: created test-pod-3 STEP: waiting for all 3 pods to be running Mar 2 00:16:32.128: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-5664' to be running and ready Mar 2 00:16:32.139: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:32.139: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 
00:16:32.139: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:32.139: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (0 seconds elapsed) Mar 2 00:16:32.139: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:32.139: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:32.139: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:32.139: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:32.139: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:32.139: INFO: Mar 2 00:16:34.152: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:34.152: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:34.152: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:34.152: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (2 seconds elapsed) Mar 2 00:16:34.152: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:34.152: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:34.152: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:34.152: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:34.152: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:34.152: INFO: Mar 2 00:16:36.150: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:36.150: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:36.150: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:36.150: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (4 seconds elapsed) Mar 2 00:16:36.150: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. 
Mar 2 00:16:36.150: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:36.150: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:36.150: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:36.150: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:36.150: INFO: Mar 2 00:16:38.156: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:38.156: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:38.156: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:38.156: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (6 seconds elapsed) Mar 2 00:16:38.156: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:38.156: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:38.156: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:38.156: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:38.156: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:38.156: INFO: Mar 2 00:16:40.155: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:40.155: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:40.155: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:40.155: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (8 seconds elapsed) Mar 2 00:16:40.155: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:40.155: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:40.155: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:40.155: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:40.155: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:40.155: INFO: Mar 2 00:16:42.149: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:42.149: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:42.149: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:42.149: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (10 seconds elapsed) Mar 2 00:16:42.149: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. 
Mar 2 00:16:42.149: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:42.149: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:42.149: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:42.149: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:42.149: INFO: Mar 2 00:16:44.149: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:44.149: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:44.149: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:44.149: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (12 seconds elapsed) Mar 2 00:16:44.149: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:44.149: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:44.149: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:44.149: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:44.149: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:44.149: INFO: Mar 2 00:16:46.149: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:46.150: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:46.150: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:46.150: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (14 seconds elapsed) Mar 2 00:16:46.150: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:46.150: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:46.150: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:46.150: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:46.150: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:46.150: INFO: Mar 2 00:16:48.150: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:48.150: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:48.150: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:48.150: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (16 seconds elapsed) Mar 2 00:16:48.150: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. 
Mar 2 00:16:48.150: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:48.150: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:48.150: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:48.150: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:48.150: INFO: Mar 2 00:16:50.150: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:50.150: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:50.150: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:50.150: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (18 seconds elapsed) Mar 2 00:16:50.150: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:50.150: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:50.150: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:50.150: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:50.150: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:50.150: INFO: Mar 2 00:16:52.152: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:52.152: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:52.152: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:52.152: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (20 seconds elapsed) Mar 2 00:16:52.152: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:52.152: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:52.152: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:52.152: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:52.152: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:52.152: INFO: Mar 2 00:16:54.151: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:54.151: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:54.151: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:54.151: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (22 seconds elapsed) Mar 2 00:16:54.151: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. 
Mar 2 00:16:54.151: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:54.151: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:54.151: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:54.151: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:54.151: INFO: Mar 2 00:16:56.193: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:56.193: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:56.193: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:56.193: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (24 seconds elapsed) Mar 2 00:16:56.193: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:56.193: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:56.193: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:56.193: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:56.193: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:56.193: INFO: Mar 2 00:16:58.151: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:58.151: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:58.151: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:16:58.151: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (26 seconds elapsed) Mar 2 00:16:58.151: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:16:58.151: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:58.151: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:58.151: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:58.151: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:16:58.151: INFO: Mar 2 00:17:00.150: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:00.150: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:00.150: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:00.150: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (28 seconds elapsed) Mar 2 00:17:00.150: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. 
Mar 2 00:17:00.150: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:00.150: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:00.150: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:00.150: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:00.150: INFO: Mar 2 00:17:02.159: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:02.159: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:02.159: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:02.159: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (30 seconds elapsed) Mar 2 00:17:02.159: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:17:02.159: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:02.159: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:02.159: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:02.159: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:02.159: INFO: Mar 2 00:17:04.151: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:04.151: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:04.151: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:04.151: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (32 seconds elapsed) Mar 2 00:17:04.151: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:17:04.151: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:04.151: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:04.151: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:04.151: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:04.151: INFO: Mar 2 00:17:06.151: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:06.152: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:06.152: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:06.152: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (34 seconds elapsed) Mar 2 00:17:06.152: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. 
Mar 2 00:17:06.152: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:06.152: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:06.152: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:06.152: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:06.152: INFO: Mar 2 00:17:08.156: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:08.156: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:08.156: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:08.156: INFO: 0 / 3 pods in namespace 'pods-5664' are running and ready (36 seconds elapsed) Mar 2 00:17:08.156: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:17:08.156: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:08.156: INFO: test-pod-1 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:08.156: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:08.156: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:08.156: INFO: Mar 2 00:17:10.151: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:10.151: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 2 00:17:10.151: INFO: 1 / 3 pods in namespace 'pods-5664' are running and ready (38 seconds elapsed) Mar 2 00:17:10.151: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. Mar 2 00:17:10.151: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:10.151: INFO: test-pod-2 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:10.151: INFO: test-pod-3 k3s-agent-1-mid5l7 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:32 +0000 UTC }] Mar 2 00:17:10.151: INFO: Mar 2 00:17:12.152: INFO: 3 / 3 pods in namespace 'pods-5664' are running and ready (40 seconds elapsed) Mar 2 00:17:12.152: INFO: expected 0 pod replicas in namespace 'pods-5664', 0 are Running and Ready. 
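The "waiting for all 3 pods to be running" step above is the framework polling the pod list roughly every two seconds and logging the PodScheduled condition until every pod reports Ready; on this run the three pods stayed Pending for about 40 seconds before all became ready. The snippet below is a hedged sketch of that kind of wait loop with client-go, not the framework's own WaitForPodsRunningReady implementation; the kubeconfig path is illustrative, while the namespace, interval and pod count mirror the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, want := "pods-5664", 3 // namespace and pod count from the log
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		ready := 0
		for i := range pods.Items {
			if pods.Items[i].Status.Phase == corev1.PodRunning && podReady(&pods.Items[i]) {
				ready++
			}
		}
		fmt.Printf("%d / %d pods running and ready\n", ready, want)
		return ready >= want, nil
	})
	if err != nil {
		panic(err)
	}
}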
STEP: waiting for all pods to be deleted Mar 2 00:17:12.170: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:13.175: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:14.175: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:15.173: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:16.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:17.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:18.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:19.175: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:20.177: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:21.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:22.175: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:23.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:24.175: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:25.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:26.175: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:27.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:28.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:29.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:30.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:31.174: INFO: Pod quantity 3 is different from expected quantity 0 Mar 2 00:17:32.174: INFO: Pod quantity 2 is different from expected quantity 0 Mar 2 00:17:33.174: INFO: Pod quantity 1 is different from expected quantity 0 Mar 2 00:17:34.174: INFO: Pod quantity 1 is different from expected quantity 0 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:35.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5664" for this suite. 
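Once the three pods are ready, the spec deletes them with a single collection call and then polls until the namespace reports zero pods, which is what the repeated "Pod quantity N is different from expected quantity 0" lines record. A minimal sketch of such a collection delete with client-go is below; it assumes the pods share a common label (shown here as type=Testing), since the exact selector the conformance test uses is not visible in this log, and the kubeconfig path is again illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "pods-5664"
	sel := "type=Testing" // assumed label shared by test-pod-1..3

	// Delete every matching pod in one API call instead of three singles.
	if err := cs.CoreV1().Pods(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel}); err != nil {
		panic(err)
	}

	// Poll until the pods are actually gone, mirroring the
	// "Pod quantity N is different from expected quantity 0" loop above.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		if n := len(pods.Items); n != 0 {
			fmt.Printf("Pod quantity %d is different from expected quantity 0\n", n)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}

DeleteCollection only issues the delete; the pods still go through graceful termination, which is why the count lingers at 3 for about twenty seconds in the log before dropping to 2, 1 and then 0.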
• [SLOW TEST:63.080 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:19.691: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-map-d4b3098c-c28c-43ab-942a-1fb51f9c8dfe STEP: Creating a pod to test consume configMaps Mar 2 00:17:19.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb" in namespace "configmap-8301" to be "Succeeded or Failed" Mar 2 00:17:19.722: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767483ms Mar 2 00:17:21.726: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006270766s Mar 2 00:17:23.729: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009160613s Mar 2 00:17:25.734: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014227932s Mar 2 00:17:27.738: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017933416s Mar 2 00:17:29.741: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021352323s Mar 2 00:17:31.758: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.038273902s Mar 2 00:17:33.762: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.042022444s Mar 2 00:17:35.765: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.045618144s Mar 2 00:17:37.773: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.053113146s STEP: Saw pod success Mar 2 00:17:37.773: INFO: Pod "pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb" satisfied condition "Succeeded or Failed" Mar 2 00:17:37.777: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb container agnhost-container: STEP: delete the pod Mar 2 00:17:37.796: INFO: Waiting for pod pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb to disappear Mar 2 00:17:37.800: INFO: Pod pod-configmaps-70fbe8c1-c262-4a09-a532-f6068eff89bb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:37.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8301" for this suite. • [SLOW TEST:18.117 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":59,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:32.254: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:32.269: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 2 00:17:34.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 create -f -' Mar 2 00:17:35.263: INFO: stderr: "" Mar 2 00:17:35.263: INFO: stdout: "e2e-test-crd-publish-openapi-9528-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 2 00:17:35.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 delete e2e-test-crd-publish-openapi-9528-crds test-cr' Mar 2 00:17:35.314: INFO: stderr: "" Mar 2 00:17:35.314: INFO: stdout: "e2e-test-crd-publish-openapi-9528-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 2 00:17:35.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 apply -f -' Mar 2 00:17:35.448: INFO: stderr: "" Mar 2 00:17:35.449: INFO: stdout: "e2e-test-crd-publish-openapi-9528-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 2 00:17:35.449: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-8541 --namespace=crd-publish-openapi-8541 delete e2e-test-crd-publish-openapi-9528-crds test-cr' Mar 2 00:17:35.506: INFO: stderr: "" Mar 2 00:17:35.506: INFO: stdout: "e2e-test-crd-publish-openapi-9528-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 2 00:17:35.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-8541 explain e2e-test-crd-publish-openapi-9528-crds' Mar 2 00:17:35.611: INFO: stderr: "" Mar 2 00:17:35.611: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9528-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:38.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8541" for this suite. • [SLOW TEST:6.089 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":6,"skipped":151,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:26.229: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating secret secrets-8985/secret-test-a3629af1-0029-4ed0-9451-eade7dba7d69 STEP: Creating a pod to test consume secrets Mar 2 00:17:26.262: INFO: Waiting up to 5m0s for pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c" in namespace "secrets-8985" to be "Succeeded or Failed" Mar 2 00:17:26.265: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896087ms Mar 2 00:17:28.268: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006195963s Mar 2 00:17:30.275: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013301832s Mar 2 00:17:32.281: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019334344s Mar 2 00:17:34.291: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.029270012s Mar 2 00:17:36.296: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034278767s Mar 2 00:17:38.300: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.038186061s STEP: Saw pod success Mar 2 00:17:38.300: INFO: Pod "pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c" satisfied condition "Succeeded or Failed" Mar 2 00:17:38.302: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c container env-test: STEP: delete the pod Mar 2 00:17:38.317: INFO: Waiting for pod pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c to disappear Mar 2 00:17:38.325: INFO: Pod pod-configmaps-714ac7ba-1bcb-4fe8-a5f7-d7d1e641e26c no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:38.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8985" for this suite. • [SLOW TEST:12.118 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:35.200: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:35.224: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:38.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7113" for this suite. 
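The "[sig-node] Secrets should be consumable via the environment" spec logged above exercises a pod whose environment is populated from a Secret. A minimal sketch of that pattern follows; the pod name, image, secret name, and key are illustrative, not taken from the run.

// Sketch: surface a Secret key as an environment variable via
// EnvVarSource.SecretKeyRef. All names and the image are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.35", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // hypothetical secret
							Key:                  "data-1",                                        // hypothetical key
						},
					},
				}},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", secretEnvPod())
}
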
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:14.940: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:17:15.377: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:17:15.393: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:17:17.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:19.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:21.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:23.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:25.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:27.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:29.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:31.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:33.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:17:36.456: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:36.460: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Registering the mutating webhook for custom resource e2e-test-webhook-260-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:39.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-604" for this suite. STEP: Destroying namespace "webhook-604-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.676 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":6,"skipped":148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:16.181: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name cm-test-opt-del-6a8fc72e-2154-4e97-a1e5-df2370127410 STEP: Creating configMap with name cm-test-opt-upd-620038ee-572e-48b3-a536-cc7586dda718 STEP: Creating the pod Mar 2 00:16:16.210: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:18.214: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:20.213: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:22.214: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:24.212: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:26.213: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:28.215: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:30.213: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:32.214: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:34.213: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:36.213: INFO: The status of Pod pod-projected-configmaps-a563ca20-529b-4b4c-90cc-fe14dd476acc is Running (Ready = true) STEP: Deleting configmap 
cm-test-opt-del-6a8fc72e-2154-4e97-a1e5-df2370127410 STEP: Updating configmap cm-test-opt-upd-620038ee-572e-48b3-a536-cc7586dda718 STEP: Creating configMap with name cm-test-opt-create-7151eb12-b406-48eb-89a4-7c1c01e7f535 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:40.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2596" for this suite. • [SLOW TEST:84.406 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":64,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:23.023: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 2 00:17:23.037: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 2 00:17:31.458: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 Mar 2 00:17:33.298: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:45.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9619" for this suite. 
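The "Projected configMap optional updates should be reflected in volume" spec above relies on projected volume sources that are marked optional, so the volume tolerates one ConfigMap being deleted while another is created. A minimal sketch of such a volume follows; the volume and ConfigMap names are illustrative.

// Sketch: a projected volume drawing from two optional ConfigMap sources.
// Names are illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func optionalProjectedConfigMapVolume() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-configmaps", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "cm-opt-del"}, // hypothetical
							Optional:             &optional,
						},
					},
					{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "cm-opt-upd"}, // hypothetical
							Optional:             &optional,
						},
					},
				},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", optionalProjectedConfigMapVolume())
}
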
• [SLOW TEST:22.601 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":4,"skipped":60,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:45.629: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:45.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-888" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":5,"skipped":65,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:38.375: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 2 00:17:44.454: INFO: 80 pods remaining Mar 2 00:17:44.454: INFO: 80 pods has nil DeletionTimestamp Mar 2 00:17:44.454: INFO: Mar 2 00:17:45.432: INFO: 71 pods remaining Mar 2 00:17:45.432: INFO: 71 pods has nil DeletionTimestamp Mar 2 00:17:45.432: INFO: Mar 2 00:17:46.429: INFO: 60 pods remaining Mar 2 00:17:46.429: INFO: 60 pods has nil DeletionTimestamp Mar 2 00:17:46.429: INFO: Mar 2 00:17:47.429: INFO: 40 pods remaining Mar 2 00:17:47.429: INFO: 40 pods has nil DeletionTimestamp Mar 2 00:17:47.429: INFO: Mar 2 00:17:48.432: INFO: 31 pods remaining Mar 2 00:17:48.432: INFO: 31 pods has nil DeletionTimestamp Mar 2 00:17:48.432: INFO: Mar 2 00:17:49.428: INFO: 20 pods remaining Mar 2 00:17:49.428: INFO: 20 pods has nil DeletionTimestamp Mar 2 00:17:49.428: INFO: STEP: Gathering metrics W0302 00:17:50.431573 210 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. 
Mar 2 00:17:50.431: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:50.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5484" for this suite. • [SLOW TEST:12.063 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":7,"skipped":216,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:50.444: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:50.463: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 2 00:17:52.490: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:53.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9200" for this suite. 
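The garbage-collector spec above, "keep the rc around until all its pods are deleted if the deleteOptions says so", matches foreground cascading deletion: the owner stays (with a deletionTimestamp and the foregroundDeletion finalizer) until its dependents are gone. A sketch of that delete call follows; the namespace and controller name are illustrative.

// Sketch: delete a ReplicationController with foreground propagation, so the
// RC object is removed only after the garbage collector deletes its pods.
// Namespace and name are illustrative.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	propagation := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("gc-demo").Delete( // hypothetical namespace
		context.TODO(), "simpletest-rc", // hypothetical RC name
		metav1.DeleteOptions{PropagationPolicy: &propagation})
	if err != nil {
		panic(err)
	}
}
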
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":8,"skipped":225,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:39.644: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:17:56.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8386" for this suite. • [SLOW TEST:17.076 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":7,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:05.039: INFO: >>> kubeConfig: /tmp/kubeconfig-178066528 STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test service account token: Mar 2 00:17:05.061: INFO: Waiting up to 5m0s for pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108" in namespace "svcaccounts-6118" to be "Succeeded or Failed" Mar 2 00:17:05.065: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 3.253305ms Mar 2 00:17:07.068: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006632144s Mar 2 00:17:09.072: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010130508s Mar 2 00:17:11.076: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014000578s Mar 2 00:17:13.081: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019593066s Mar 2 00:17:15.086: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024291005s Mar 2 00:17:17.089: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02765341s Mar 2 00:17:19.093: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031179132s Mar 2 00:17:21.097: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035211917s Mar 2 00:17:23.101: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 18.039219252s Mar 2 00:17:25.105: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 20.043264148s Mar 2 00:17:27.109: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 22.047614112s Mar 2 00:17:29.114: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 24.052839857s Mar 2 00:17:31.118: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 26.056543753s Mar 2 00:17:33.122: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 28.0599963s Mar 2 00:17:35.125: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 30.06363499s Mar 2 00:17:37.130: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 32.068568105s Mar 2 00:17:39.133: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 34.071765264s Mar 2 00:17:41.140: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 36.07846447s Mar 2 00:17:43.144: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 38.082819104s Mar 2 00:17:45.148: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 40.086971803s Mar 2 00:17:47.153: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 42.091913929s Mar 2 00:17:49.158: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 44.096167146s Mar 2 00:17:51.167: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 46.105406728s Mar 2 00:17:53.172: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 48.11018564s Mar 2 00:17:55.182: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 50.120022963s Mar 2 00:17:57.186: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.124675502s Mar 2 00:17:59.190: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Pending", Reason="", readiness=false. Elapsed: 54.128049574s Mar 2 00:18:01.195: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 56.133480578s STEP: Saw pod success Mar 2 00:18:01.195: INFO: Pod "test-pod-7b5fab90-ef68-4609-be60-3906e31a8108" satisfied condition "Succeeded or Failed" Mar 2 00:18:01.198: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod test-pod-7b5fab90-ef68-4609-be60-3906e31a8108 container agnhost-container: STEP: delete the pod Mar 2 00:18:01.216: INFO: Waiting for pod test-pod-7b5fab90-ef68-4609-be60-3906e31a8108 to disappear Mar 2 00:18:01.221: INFO: Pod test-pod-7b5fab90-ef68-4609-be60-3906e31a8108 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:01.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6118" for this suite. • [SLOW TEST:56.190 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:37.818: INFO: >>> kubeConfig: /tmp/kubeconfig-2095741199 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name s-test-opt-del-62ce5dd2-fa77-4a00-8d03-a91c5536fe3b STEP: Creating secret with name s-test-opt-upd-0e842d35-9bd8-4367-840e-95bec51cc216 STEP: Creating the pod Mar 2 00:16:37.860: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:39.866: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:41.865: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:43.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:45.865: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:47.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 
00:16:49.865: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:51.865: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:53.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:55.866: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:57.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:59.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:01.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:03.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:05.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:07.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:09.863: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:11.865: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:13.864: INFO: The status of Pod pod-projected-secrets-9379f1aa-bddc-44fc-af7a-01eaa1803148 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-62ce5dd2-fa77-4a00-8d03-a91c5536fe3b STEP: Updating secret s-test-opt-upd-0e842d35-9bd8-4367-840e-95bec51cc216 STEP: Creating secret with name s-test-opt-create-764e6a39-0bab-4b97-a082-51644d9e1dba STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:02.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8922" for this suite. 
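The "[sig-auth] ServiceAccounts should mount projected service account token" spec logged above mounts a projected service-account token into its test pod. The sketch below shows such a volume definition; the volume name, token path, and expiry are illustrative assumptions.

// Sketch: a projected service-account token volume source.
// Path, expiry, and names are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func projectedTokenVolume() corev1.Volume {
	expiry := int64(3600) // illustrative: one-hour token
	return corev1.Volume{
		Name: "sa-token", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					},
				}},
			},
		},
	}
}

func main() {
	// A container would mount this volume (for example under
	// /var/run/secrets/tokens) and read the projected token file from there.
	fmt.Printf("%+v\n", projectedTokenVolume())
}
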
• [SLOW TEST:84.335 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:02.166: INFO: >>> kubeConfig: /tmp/kubeconfig-2095741199 STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Request ServerVersion STEP: Confirm major version Mar 2 00:18:02.181: INFO: Major version: 1 STEP: Confirm minor version Mar 2 00:18:02.181: INFO: cleanMinorVersion: 23 Mar 2 00:18:02.181: INFO: Minor version: 23 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:02.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-3743" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":3,"skipped":57,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:15.871: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test override arguments Mar 2 00:17:15.896: INFO: Waiting up to 5m0s for pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c" in namespace "containers-4409" to be "Succeeded or Failed" Mar 2 00:17:15.900: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286546ms Mar 2 00:17:17.903: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007651298s Mar 2 00:17:19.907: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011471073s Mar 2 00:17:21.911: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.015777881s Mar 2 00:17:23.917: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021519172s Mar 2 00:17:25.921: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025069182s Mar 2 00:17:27.925: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029443966s Mar 2 00:17:29.929: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033172002s Mar 2 00:17:31.933: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.036951305s Mar 2 00:17:33.937: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.04095376s Mar 2 00:17:35.940: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.044779288s Mar 2 00:17:37.949: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.053917087s Mar 2 00:17:39.954: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.058566086s Mar 2 00:17:41.957: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.061741035s Mar 2 00:17:43.977: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.081336022s Mar 2 00:17:45.984: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.08860005s Mar 2 00:17:47.989: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.093832681s Mar 2 00:17:49.995: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.099390469s Mar 2 00:17:51.998: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 36.102926327s Mar 2 00:17:54.002: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.106906035s Mar 2 00:17:56.008: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.112137579s Mar 2 00:17:58.011: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.11501859s Mar 2 00:18:00.014: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.118710718s Mar 2 00:18:02.019: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.123495399s Mar 2 00:18:04.022: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 48.126402565s STEP: Saw pod success Mar 2 00:18:04.022: INFO: Pod "client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c" satisfied condition "Succeeded or Failed" Mar 2 00:18:04.024: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c container agnhost-container: STEP: delete the pod Mar 2 00:18:04.039: INFO: Waiting for pod client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c to disappear Mar 2 00:18:04.045: INFO: Pod client-containers-51e0e0bc-78a8-4843-a3bf-fc47b453ce0c no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:04.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4409" for this suite. • [SLOW TEST:48.182 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":102,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:23.774: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Mar 2 00:17:23.787: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the sample API server. 
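Relating to the "[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd)" spec logged above: setting Args on a container overrides the image's CMD while keeping its ENTRYPOINT, whereas setting Command would replace the ENTRYPOINT as well. A sketch follows; the pod name, image tag, and arguments are illustrative.

// Sketch: override the image's default arguments (CMD) via container Args.
// Names, image tag, and arguments are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func argsOverridePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative tag
				// Args replaces the image CMD; the image ENTRYPOINT is unchanged.
				Args: []string{"override", "arguments"},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", argsOverridePod())
}
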
Mar 2 00:17:24.070: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 2 00:17:26.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:28.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:30.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:32.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 2 00:17:34.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:36.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:38.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:40.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:42.109: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:44.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:46.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:48.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:50.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:52.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:54.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:56.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:58.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:00.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:02.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:04.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:06.331: INFO: Waited 217.37583ms for the sample-apiserver to be ready to handle requests. 
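(Aside, not part of the test log.) Once the sample apiserver reports ready, the test goes on to read and patch the v1alpha1.wardle.example.com APIService. For context, the registration that the polling above waits on boils down to an APIService object pointing at an in-cluster Service. A minimal client-go sketch, with an illustrative kubeconfig path and Service name that are assumptions rather than values from this run:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	// Illustrative only: load a kubeconfig the same way the e2e framework does.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	agg, err := aggregatorclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	apiService := &apiregv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Group:                "wardle.example.com",
			Version:              "v1alpha1",
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
			// Points at the sample-apiserver's Service; name is illustrative.
			Service: &apiregv1.ServiceReference{Namespace: "aggregator-8504", Name: "sample-api"},
			// A real setup would normally supply Spec.CABundle instead of skipping TLS verification.
			InsecureSkipTLSVerify: true,
		},
	}
	if _, err := agg.ApiregistrationV1().APIServices().Create(context.TODO(), apiService, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The `kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'` step that follows in the log is simply a patch against this object's spec.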
STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Mar 2 00:18:06.370: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:07.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8504" for this suite. • [SLOW TEST:43.350 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:01.247: INFO: >>> kubeConfig: /tmp/kubeconfig-178066528 STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:07.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-7017" for this suite. 
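(Aside, not part of the test log.) The DisruptionController conformance test above exercises the PodDisruptionBudget API itself: create, update, patch, then wait for deletion. A minimal client-go sketch of the same sequence; the name, selector, and minAvailable value are illustrative, not taken from the test:

package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createAndPatchPDB mirrors the create/patch/delete flow visible in the log above.
func createAndPatchPDB(ctx context.Context, client kubernetes.Interface, ns string) error {
	minAvailable := intstr.FromInt(2)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pdb"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "example"}},
		},
	}
	if _, err := client.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Patch the spec, analogous to the "patching the pdb" step.
	patch := []byte(`{"spec":{"minAvailable":1}}`)
	if _, err := client.PolicyV1().PodDisruptionBudgets(ns).Patch(ctx, "example-pdb", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	return client.PolicyV1().PodDisruptionBudgets(ns).Delete(ctx, "example-pdb", metav1.DeleteOptions{})
}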
• [SLOW TEST:6.083 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":5,"skipped":84,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:26.201: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name projected-secret-test-df4f2397-c7d1-4395-877c-38e0ab99aae1 STEP: Creating a pod to test consume secrets Mar 2 00:17:26.238: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04" in namespace "projected-2190" to be "Succeeded or Failed" Mar 2 00:17:26.242: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.641673ms Mar 2 00:17:28.245: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007422731s Mar 2 00:17:30.248: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010483562s Mar 2 00:17:32.252: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014521307s Mar 2 00:17:34.256: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018088928s Mar 2 00:17:36.261: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022984438s Mar 2 00:17:38.265: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026807942s Mar 2 00:17:40.279: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 14.040718552s Mar 2 00:17:42.284: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 16.046372709s Mar 2 00:17:44.290: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 18.052458313s Mar 2 00:17:46.294: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 20.056555683s Mar 2 00:17:48.303: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.064754646s Mar 2 00:17:50.307: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 24.068772101s Mar 2 00:17:52.311: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 26.072803731s Mar 2 00:17:54.314: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 28.075658886s Mar 2 00:17:56.321: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 30.082741024s Mar 2 00:17:58.329: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 32.091085457s Mar 2 00:18:00.334: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 34.095675652s Mar 2 00:18:02.338: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 36.099688338s Mar 2 00:18:04.340: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 38.10219722s Mar 2 00:18:06.344: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 40.106051189s Mar 2 00:18:08.348: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Pending", Reason="", readiness=false. Elapsed: 42.109948522s Mar 2 00:18:10.352: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.113867353s STEP: Saw pod success Mar 2 00:18:10.352: INFO: Pod "pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04" satisfied condition "Succeeded or Failed" Mar 2 00:18:10.354: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04 container secret-volume-test: STEP: delete the pod Mar 2 00:18:10.369: INFO: Waiting for pod pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04 to disappear Mar 2 00:18:10.373: INFO: Pod pod-projected-secrets-a8de04f4-660b-4dc8-93f1-44acfa1cfb04 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:10.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2190" for this suite. 
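(Aside, not part of the test log.) The projected-secret test above mounts one Secret into a pod through more than one volume and waits for the pod to complete. A rough sketch of the pod it builds, with hypothetical names and image; the real fixture differs in detail:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProjectedSecretPod mounts the same Secret at two paths via projected volumes.
func createProjectedSecretPod(ctx context.Context, client kubernetes.Interface, ns, secretName string) error {
	projectedVolume := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{projectedVolume("secret-volume-1"), projectedVolume("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}

The framework then polls the pod phase until it leaves Pending, which is the roughly 44-second wait visible in the entries above.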
• [SLOW TEST:44.179 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":75,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:12.621: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:12.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9454" for this suite. • [SLOW TEST:60.103 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":67,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:27.310: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:27.326: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 2 00:17:27.340: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:29.344: INFO: The 
status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:31.347: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:33.345: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:35.353: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:37.346: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:39.345: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:41.344: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:43.344: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:45.343: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:47.346: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:49.343: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:51.351: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:53.344: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:55.345: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:57.344: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:59.343: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:01.346: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:03.343: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:05.344: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:07.343: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:09.343: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:11.343: INFO: The status of Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:13.345: INFO: The status of 
Pod pod-logs-websocket-126aa381-4bae-4949-9bcc-8c2ab138ae7b is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:13.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4282" for this suite. • [SLOW TEST:46.054 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":56,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:28.198: INFO: >>> kubeConfig: /tmp/kubeconfig-3780238515 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1234 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1234 I0302 00:16:28.236675 199 runners.go:193] Created replication controller with name: externalname-service, namespace: services-1234, replica count: 2 I0302 00:16:31.289845 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:34.290923 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:37.293011 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:40.293841 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:43.294920 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:46.297171 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:49.298287 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:52.298892 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:55.299606 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:58.299812 199 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:16:58.299: INFO: Creating new exec pod Mar 2 00:18:13.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=services-1234 exec execpod9gcw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Mar 2 00:18:13.430: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Mar 2 00:18:13.430: INFO: stdout: "externalname-service-4b4qg" Mar 2 00:18:13.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3780238515 --namespace=services-1234 exec execpod9gcw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.233.225 80' Mar 2 00:18:13.519: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.233.225 80\nConnection to 10.43.233.225 80 port [tcp/http] succeeded!\n" Mar 2 00:18:13.519: INFO: stdout: "externalname-service-4b4qg" Mar 2 00:18:13.519: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:13.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1234" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:105.352 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:13.554: INFO: >>> kubeConfig: /tmp/kubeconfig-3780238515 STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Mar 2 00:18:13.578: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Mar 2 00:18:13.583: INFO: starting watch STEP: patching STEP: updating Mar 2 00:18:13.593: INFO: waiting for watch events with expected annotations Mar 2 00:18:13.593: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection 
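(Aside, not part of the test log.) The CronJob test above walks the API surface itself: create, get, list, watch, patch, update, the /status subresource, delete, and delete-collection. A condensed client-go sketch of the create/patch/delete portion, with an illustrative schedule and image:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// exerciseCronJobAPI covers a subset of the operations listed in the log above.
func exerciseCronJobAPI(ctx context.Context, client kubernetes.Interface, ns string) error {
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "api-demo"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/5 * * * *",
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "hello",
								Image:   "busybox",
								Command: []string{"sh", "-c", "date"},
							}},
						},
					},
				},
			},
		},
	}
	if _, err := client.BatchV1().CronJobs(ns).Create(ctx, cj, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Patch an annotation, analogous to the "patching" step in the log.
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := client.BatchV1().CronJobs(ns).Patch(ctx, "api-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	return client.BatchV1().CronJobs(ns).Delete(ctx, "api-demo", metav1.DeleteOptions{})
}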
[AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:13.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5282" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:04.067: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 2 00:18:04.084: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 Mar 2 00:18:07.106: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:14.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6672" for this suite. • [SLOW TEST:10.886 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":4,"skipped":132,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:02.780: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-151fbe49-cb98-4513-98c5-bea6e4bd29a5 STEP: Creating a pod to test consume configMaps Mar 2 00:17:02.809: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851" in namespace "configmap-3040" to be "Succeeded or Failed" Mar 2 00:17:02.811: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.46049ms Mar 2 00:17:04.814: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005611125s Mar 2 00:17:06.818: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008951029s Mar 2 00:17:08.821: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012116551s Mar 2 00:17:10.824: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015373956s Mar 2 00:17:12.834: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024908976s Mar 2 00:17:14.839: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029782778s Mar 2 00:17:16.841: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032605667s Mar 2 00:17:18.845: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 16.036639323s Mar 2 00:17:20.851: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 18.0418433s Mar 2 00:17:22.855: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 20.046415105s Mar 2 00:17:24.860: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 22.051236875s Mar 2 00:17:26.864: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 24.054687731s Mar 2 00:17:28.867: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 26.057907778s Mar 2 00:17:30.871: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 28.062132631s Mar 2 00:17:32.876: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 30.066931543s Mar 2 00:17:34.881: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 32.072279456s Mar 2 00:17:36.889: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 34.08010994s Mar 2 00:17:38.894: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 36.084732534s Mar 2 00:17:40.910: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 38.101317924s Mar 2 00:17:42.915: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 40.106588335s Mar 2 00:17:44.920: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 42.110743738s Mar 2 00:17:46.925: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 44.116097986s Mar 2 00:17:48.938: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. 
Elapsed: 46.128798937s Mar 2 00:17:50.942: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 48.132684805s Mar 2 00:17:52.945: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 50.135824405s Mar 2 00:17:54.948: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 52.13949991s Mar 2 00:17:56.952: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 54.143250748s Mar 2 00:17:58.956: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 56.14762722s Mar 2 00:18:00.961: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 58.152515093s Mar 2 00:18:02.965: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.156415841s Mar 2 00:18:04.969: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.159859738s Mar 2 00:18:06.972: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.163540021s Mar 2 00:18:08.976: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.167164639s Mar 2 00:18:10.980: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.1713244s Mar 2 00:18:12.983: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.174374514s Mar 2 00:18:14.987: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.1780648s STEP: Saw pod success Mar 2 00:18:14.987: INFO: Pod "pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851" satisfied condition "Succeeded or Failed" Mar 2 00:18:14.992: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851 container configmap-volume-test: STEP: delete the pod Mar 2 00:18:15.017: INFO: Waiting for pod pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851 to disappear Mar 2 00:18:15.022: INFO: Pod pod-configmaps-bdc50b63-74be-4e40-930c-8f18cc8b5851 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:15.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3040" for this suite. 
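(Aside, not part of the test log.) Most of the wall-clock time in the ConfigMap test above, as in the projected-secret test earlier, is the polling loop: the framework re-reads the pod roughly every two seconds until its phase satisfies "Succeeded or Failed". A minimal equivalent using client-go and apimachinery's wait helpers; the function name and timeout are illustrative, not the framework's own:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls the pod phase every 2s until it is Succeeded or Failed,
// mirroring the "Phase=Pending ... Elapsed: ..." entries printed above.
func waitForPodCompletion(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return true, nil
		}
		return false, nil // still Pending or Running; keep polling
	})
}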
• [SLOW TEST:72.250 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:15.044: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 2 00:18:15.070: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9098 c66c89ad-08d5-4066-87f4-ecfb8b5c0eb8 11086 0 2023-03-02 00:18:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-02 00:18:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:18:15.070: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9098 c66c89ad-08d5-4066-87f4-ecfb8b5c0eb8 11089 0 2023-03-02 00:18:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-02 00:18:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 2 00:18:15.082: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9098 c66c89ad-08d5-4066-87f4-ecfb8b5c0eb8 11090 0 2023-03-02 00:18:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-02 00:18:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:18:15.082: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9098 c66c89ad-08d5-4066-87f4-ecfb8b5c0eb8 11091 0 2023-03-02 00:18:15 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-03-02 00:18:15 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:15.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9098" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":3,"skipped":144,"failed":0} S ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:15.091: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 2 00:18:15.124: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Mar 2 00:18:15.128: INFO: starting watch STEP: patching STEP: updating Mar 2 00:18:15.138: INFO: waiting for watch events with expected annotations Mar 2 00:18:15.138: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:15.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-9421" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":4,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:16.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-4876 STEP: creating service affinity-nodeport in namespace services-4876 STEP: creating replication controller affinity-nodeport in namespace services-4876 I0302 00:16:16.078270 280 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-4876, replica count: 3 I0302 00:16:19.129302 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:22.129843 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:25.130901 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:28.132978 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:31.135235 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:34.135634 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:37.135886 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:40.137194 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:43.138880 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:46.139442 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:49.139684 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:52.141780 280 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:55.142812 280 
runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:16:55.150: INFO: Creating new exec pod Mar 2 00:17:16.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1252640407 --namespace=services-4876 exec execpod-affinity78rf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Mar 2 00:17:16.271: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Mar 2 00:17:16.271: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:17:16.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1252640407 --namespace=services-4876 exec execpod-affinity78rf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.2.233 80' Mar 2 00:17:16.419: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.2.233 80\nConnection to 10.43.2.233 80 port [tcp/http] succeeded!\n" Mar 2 00:17:16.419: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:17:16.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1252640407 --namespace=services-4876 exec execpod-affinity78rf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.12 30129' Mar 2 00:17:16.490: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.12 30129\nConnection to 172.17.0.12 30129 port [tcp/*] succeeded!\n" Mar 2 00:17:16.490: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:17:16.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1252640407 --namespace=services-4876 exec execpod-affinity78rf4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.8 30129' Mar 2 00:17:16.561: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.8 30129\nConnection to 172.17.0.8 30129 port [tcp/*] succeeded!\n" Mar 2 00:17:16.561: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:17:16.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1252640407 --namespace=services-4876 exec execpod-affinity78rf4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.12:30129/ ; done' Mar 2 00:17:16.678: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.12:30129/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30129/\n" Mar 2 00:17:16.678: INFO: stdout: "\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx\naffinity-nodeport-n64cx" Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Received response from host: affinity-nodeport-n64cx Mar 2 00:17:16.678: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-4876, will wait for the garbage collector to delete the pods Mar 2 00:17:16.757: INFO: Deleting ReplicationController affinity-nodeport took: 4.733916ms Mar 2 00:17:16.858: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.652591ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:21.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4876" for this suite. 
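For reference, the behaviour verified above (sixteen identical hostnames from repeated NodePort requests) comes from spec.sessionAffinity: ClientIP on the Service. A rough kubectl sketch, not taken from the test source; the deployment name, image tag, ports, and <node-ip> are placeholders:

  # Back the Service with pods that report their own hostname (agnhost's netexec
  # serves GET /hostname); the image reference here is an assumption.
  kubectl create deployment affinity-demo --replicas=3 \
    --image=registry.k8s.io/e2e-test-images/agnhost:2.39 \
    -- /agnhost netexec --http-port=8080
  # Expose it as a NodePort Service and switch session affinity to ClientIP.
  kubectl expose deployment affinity-demo --type=NodePort --port=80 --target-port=8080
  kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
  # Repeated requests from one client through any node's allocated port should
  # then keep landing on the same backend pod.
  NODE_PORT=$(kubectl get service affinity-demo -o jsonpath='{.spec.ports[0].nodePort}')
  for i in $(seq 0 15); do curl -s --connect-timeout 2 "http://<node-ip>:${NODE_PORT}/hostname"; echo; done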
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:125.441 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:29.237: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327 STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:29.249: INFO: Creating pod... Mar 2 00:17:29.263: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:30.266: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:31.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:32.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:33.266: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:34.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:35.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:36.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:37.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:38.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:39.269: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:40.278: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:41.269: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:42.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:43.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:44.275: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:45.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:46.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:47.271: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:48.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:49.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:50.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:51.271: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:52.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:53.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:54.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:55.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:56.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:57.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:58.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:17:59.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:00.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:01.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:02.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:03.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:04.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 
00:18:05.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:06.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:07.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:08.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:09.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:10.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:11.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:12.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:13.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:14.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:15.268: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:16.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:17.266: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:18.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:19.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:20.270: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:21.267: INFO: Pod Quantity: 1 Status: Pending Mar 2 00:18:22.268: INFO: Pod Status: Running Mar 2 00:18:22.268: INFO: Creating service... Mar 2 00:18:22.281: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/pods/agnhost/proxy/some/path/with/DELETE Mar 2 00:18:22.286: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Mar 2 00:18:22.286: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/pods/agnhost/proxy/some/path/with/GET Mar 2 00:18:22.289: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Mar 2 00:18:22.289: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/pods/agnhost/proxy/some/path/with/HEAD Mar 2 00:18:22.292: INFO: http.Client request:HEAD | StatusCode:200 Mar 2 00:18:22.292: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/pods/agnhost/proxy/some/path/with/OPTIONS Mar 2 00:18:22.295: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Mar 2 00:18:22.295: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/pods/agnhost/proxy/some/path/with/PATCH Mar 2 00:18:22.298: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Mar 2 00:18:22.298: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/pods/agnhost/proxy/some/path/with/POST Mar 2 00:18:22.301: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Mar 2 00:18:22.301: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/pods/agnhost/proxy/some/path/with/PUT Mar 2 00:18:22.303: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Mar 2 00:18:22.303: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/services/test-service/proxy/some/path/with/DELETE Mar 2 00:18:22.308: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Mar 2 00:18:22.308: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/services/test-service/proxy/some/path/with/GET Mar 2 00:18:22.312: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Mar 2 00:18:22.312: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/services/test-service/proxy/some/path/with/HEAD Mar 2 00:18:22.317: INFO: http.Client request:HEAD | StatusCode:200 Mar 2 00:18:22.317: INFO: Starting http.Client for 
https://10.43.0.1:443/api/v1/namespaces/proxy-2170/services/test-service/proxy/some/path/with/OPTIONS Mar 2 00:18:22.321: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Mar 2 00:18:22.321: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/services/test-service/proxy/some/path/with/PATCH Mar 2 00:18:22.325: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Mar 2 00:18:22.325: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/services/test-service/proxy/some/path/with/POST Mar 2 00:18:22.330: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Mar 2 00:18:22.330: INFO: Starting http.Client for https://10.43.0.1:443/api/v1/namespaces/proxy-2170/services/test-service/proxy/some/path/with/PUT Mar 2 00:18:22.334: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:22.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2170" for this suite. • [SLOW TEST:53.105 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.747: INFO: >>> kubeConfig: /tmp/kubeconfig-3654524011 STEP: Building a namespace api object, basename job W0302 00:15:02.038850 219 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.038: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8952, will wait for the garbage collector to delete the pods Mar 2 00:16:26.132: INFO: Deleting Job.batch foo took: 4.633124ms Mar 2 00:16:26.232: INFO: Terminating Job.batch foo pods took: 100.079642ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:22.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8952" for this suite. 
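The [sig-network] Proxy spec above exercises the API server's pod and service proxy subresources; the URLs it logs follow the patterns below. A hedged sketch with kubectl, where <ns>, <pod>, and <service> are placeholders; kubectl get --raw only issues GETs, so the other verbs are shown through an authenticated kubectl proxy:

  # GET via the pod proxy subresource:
  kubectl get --raw "/api/v1/namespaces/<ns>/pods/<pod>/proxy/some/path/with/GET"
  # GET via the service proxy subresource:
  kubectl get --raw "/api/v1/namespaces/<ns>/services/<service>/proxy/some/path/with/GET"
  # DELETE/HEAD/OPTIONS/PATCH/POST/PUT need a plain HTTP client; kubectl proxy
  # handles authentication against the API server:
  kubectl proxy --port=8001 &
  curl -s -X POST "http://127.0.0.1:8001/api/v1/namespaces/<ns>/pods/<pod>/proxy/some/path/with/POST"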
• [SLOW TEST:200.898 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":1,"skipped":49,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.802: INFO: >>> kubeConfig: /tmp/kubeconfig-2153458730 STEP: Building a namespace api object, basename services W0302 00:15:02.326378 217 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.326: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7722 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7722 STEP: creating replication controller externalsvc in namespace services-7722 I0302 00:15:02.450407 217 runners.go:193] Created replication controller with name: externalsvc, namespace: services-7722, replica count: 2 I0302 00:15:05.502129 217 runners.go:193] externalsvc Pods: 1 out of 2 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:08.504045 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:11.506345 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:14.506720 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:17.506937 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:20.508008 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:23.509855 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:26.510913 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:29.511077 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0302 00:15:32.513171 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:35.513843 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:38.514914 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:41.517173 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:44.517260 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:47.517866 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:50.519079 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:53.519568 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:56.521774 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:15:59.522841 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:02.524914 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:05.525843 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:08.526865 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:11.527697 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:14.527981 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:17.529845 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:20.531020 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:23.531272 217 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 2 00:16:23.626: INFO: Creating new exec pod Mar 2 00:17:21.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2153458730 --namespace=services-7722 exec execpodknj69 -- /bin/sh -x -c nslookup nodeport-service.services-7722.svc.cluster.local' Mar 2 00:17:21.735: INFO: stderr: "+ nslookup nodeport-service.services-7722.svc.cluster.local\n" Mar 2 00:17:21.735: INFO: stdout: 
"Server:\t\t10.43.0.10\nAddress:\t10.43.0.10#53\n\nnodeport-service.services-7722.svc.cluster.local\tcanonical name = externalsvc.services-7722.svc.cluster.local.\nName:\texternalsvc.services-7722.svc.cluster.local\nAddress: 10.43.19.71\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7722, will wait for the garbage collector to delete the pods Mar 2 00:17:21.794: INFO: Deleting ReplicationController externalsvc took: 4.386506ms Mar 2 00:17:21.895: INFO: Terminating ReplicationController externalsvc pods took: 101.12003ms Mar 2 00:18:26.212: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:26.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7722" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:204.437 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:50.391: INFO: >>> kubeConfig: /tmp/kubeconfig-2274336506 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7575 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7575 STEP: creating replication controller externalsvc in namespace services-7575 I0302 00:16:50.448153 517 runners.go:193] Created replication controller with name: externalsvc, namespace: services-7575, replica count: 2 I0302 00:16:53.499359 517 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:56.500102 517 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:59.500495 517 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:02.501598 517 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:05.502894 517 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:08.504976 517 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:11.506081 517 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 2 00:17:11.519: INFO: Creating new exec pod Mar 2 00:17:31.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2274336506 --namespace=services-7575 exec execpodt5p9h -- /bin/sh -x -c nslookup clusterip-service.services-7575.svc.cluster.local' Mar 2 00:17:31.669: INFO: stderr: "+ nslookup clusterip-service.services-7575.svc.cluster.local\n" Mar 2 00:17:31.670: INFO: stdout: "Server:\t\t10.43.0.10\nAddress:\t10.43.0.10#53\n\nclusterip-service.services-7575.svc.cluster.local\tcanonical name = externalsvc.services-7575.svc.cluster.local.\nName:\texternalsvc.services-7575.svc.cluster.local\nAddress: 10.43.124.211\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7575, will wait for the garbage collector to delete the pods Mar 2 00:17:31.729: INFO: Deleting ReplicationController externalsvc took: 5.869781ms Mar 2 00:17:31.830: INFO: Terminating ReplicationController externalsvc pods took: 100.424182ms Mar 2 00:18:27.650: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:27.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7575" for this suite. 
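Both type-conversion specs above (NodePort to ExternalName, and ClusterIP to ExternalName here) boil down to rewriting spec.type so that cluster DNS answers with a CNAME to the external name, which is what the nslookup output confirms. A loose kubectl sketch, not the test's own code; the service name, namespace, and target FQDN are placeholders, and depending on the cluster version the immutable clusterIP/clusterIPs fields may also need to be cleared in the same patch, as attempted below:

  kubectl patch service clusterip-service --type merge -p \
    '{"spec":{"type":"ExternalName","externalName":"externalsvc.<ns>.svc.cluster.local","clusterIP":null,"clusterIPs":null}}'
  # Check the CNAME from inside the cluster:
  kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
    nslookup clusterip-service.<ns>.svc.cluster.local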
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:97.318 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":2,"skipped":60,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:45.747: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars Mar 2 00:17:45.782: INFO: Waiting up to 5m0s for pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4" in namespace "downward-api-1025" to be "Succeeded or Failed" Mar 2 00:17:45.788: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.760094ms Mar 2 00:17:47.794: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012127679s Mar 2 00:17:49.800: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018024378s Mar 2 00:17:51.804: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022303827s Mar 2 00:17:53.808: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025869138s Mar 2 00:17:55.811: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029360508s Mar 2 00:17:57.816: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.034320985s Mar 2 00:17:59.820: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.03864408s Mar 2 00:18:01.825: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.043113632s Mar 2 00:18:03.829: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.046745142s Mar 2 00:18:05.833: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.050891856s Mar 2 00:18:07.836: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.054526067s Mar 2 00:18:09.841: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.059045867s Mar 2 00:18:11.845: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.062924087s Mar 2 00:18:13.848: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.065800445s Mar 2 00:18:15.852: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.070495893s Mar 2 00:18:17.859: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.077144807s Mar 2 00:18:19.863: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.081574107s Mar 2 00:18:21.867: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.085118646s Mar 2 00:18:23.871: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.088938165s Mar 2 00:18:25.875: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.092973371s Mar 2 00:18:27.881: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.09879205s Mar 2 00:18:29.884: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.102600402s STEP: Saw pod success Mar 2 00:18:29.884: INFO: Pod "downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4" satisfied condition "Succeeded or Failed" Mar 2 00:18:29.887: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4 container dapi-container: STEP: delete the pod Mar 2 00:18:29.904: INFO: Waiting for pod downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4 to disappear Mar 2 00:18:29.909: INFO: Pod downward-api-dc02f632-7a08-424b-b90d-bcc920766fd4 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:29.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1025" for this suite. 
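The Downward API spec above injects the container's CPU/memory requests and limits as environment variables through resourceFieldRef. A minimal, self-contained pod sketch; the pod name, image, and resource values are illustrative, not the test's manifest:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.36
      command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
      resources:
        requests:
          cpu: 250m
          memory: 32Mi
        limits:
          cpu: 500m
          memory: 64Mi
      env:
      - name: CPU_REQUEST
        valueFrom:
          resourceFieldRef:
            resource: requests.cpu
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef:
            resource: requests.memory
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF
  # Once the pod reaches Succeeded, its log should list the four variables:
  kubectl logs downward-api-demo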
• [SLOW TEST:44.170 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:08.466: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating replication controller my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b Mar 2 00:17:08.488: INFO: Pod name my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b: Found 0 pods out of 1 Mar 2 00:17:13.496: INFO: Pod name my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b: Found 1 pods out of 1 Mar 2 00:17:13.496: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b" are running Mar 2 00:18:25.507: INFO: Pod "my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b-cn6px" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-02 00:17:08 +0000 UTC Reason: Message:}]) Mar 2 00:18:25.507: INFO: Trying to dial the pod Mar 2 00:18:30.518: INFO: Controller my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b: Got expected result from replica 1 [my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b-cn6px]: "my-hostname-basic-c787d949-035b-4e4e-ba53-e126f2f6e08b-cn6px", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:30.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9114" for this suite. 
• [SLOW TEST:82.060 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:30.543: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Service STEP: watching for the Service to be added Mar 2 00:18:30.573: INFO: Found Service test-service-tzsqh in namespace services-2821 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Mar 2 00:18:30.573: INFO: Service test-service-tzsqh created STEP: Getting /status Mar 2 00:18:30.578: INFO: Service test-service-tzsqh has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Mar 2 00:18:30.583: INFO: observed Service test-service-tzsqh in namespace services-2821 with annotations: map[] & LoadBalancer: {[]} Mar 2 00:18:30.583: INFO: Found Service test-service-tzsqh in namespace services-2821 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Mar 2 00:18:30.583: INFO: Service test-service-tzsqh has service status patched STEP: updating the ServiceStatus Mar 2 00:18:30.591: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Mar 2 00:18:30.592: INFO: Observed Service test-service-tzsqh in namespace services-2821 with annotations: map[] & Conditions: {[]} Mar 2 00:18:30.592: INFO: Observed event: &Service{ObjectMeta:{test-service-tzsqh services-2821 9d79ac56-f900-49c7-9560-e562a3c088d9 11557 0 2023-03-02 00:18:30 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-03-02 00:18:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-03-02 00:18:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.38.75,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.38.75],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Mar 2 00:18:30.592: INFO: Found Service test-service-tzsqh in namespace services-2821 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Mar 2 00:18:30.592: INFO: Service test-service-tzsqh has service status updated STEP: patching the service STEP: watching for the Service to be patched Mar 2 00:18:30.606: INFO: observed Service test-service-tzsqh in namespace services-2821 with labels: map[test-service-static:true] Mar 2 00:18:30.606: INFO: observed Service test-service-tzsqh in namespace services-2821 with labels: map[test-service-static:true] Mar 2 00:18:30.606: INFO: observed Service test-service-tzsqh in namespace services-2821 with labels: map[test-service-static:true] Mar 2 00:18:30.606: INFO: Found Service test-service-tzsqh in namespace services-2821 with labels: map[test-service:patched test-service-static:true] Mar 2 00:18:30.606: INFO: Service test-service-tzsqh patched STEP: deleting the service STEP: watching for the Service to be deleted Mar 2 00:18:30.633: INFO: Observed event: ADDED Mar 2 00:18:30.633: INFO: Observed event: MODIFIED Mar 2 00:18:30.633: INFO: Observed event: MODIFIED Mar 2 00:18:30.633: INFO: Observed event: MODIFIED Mar 2 00:18:30.633: INFO: Found Service test-service-tzsqh in namespace services-2821 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Mar 2 00:18:30.633: INFO: Service test-service-tzsqh deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:30.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2821" for this suite. 
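The service status lifecycle spec above patches and updates the Service's status subresource directly (note the 203.0.113.1 ingress IP and the custom StatusUpdate condition it then observes through the watch). Outside the Go client, the same endpoint can be reached over plain HTTP; a sketch with placeholders for the namespace and service name (newer kubectl releases also expose a --subresource flag for this):

  kubectl proxy --port=8001 &
  curl -s -X PATCH -H 'Content-Type: application/merge-patch+json' \
    -d '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}' \
    "http://127.0.0.1:8001/api/v1/namespaces/<ns>/services/<service>/status"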
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":3,"skipped":76,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:23.878: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:30.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-102" for this suite. 
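The ReplicationController lifecycle spec above walks the RC through create, label patch, scale patch, status reads, and a delete-by-collection. A rough kubectl equivalent of the main steps; the manifest, image, and labels are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: test-rc
    labels:
      test-rc: initial
  spec:
    replicas: 1
    selector:
      app: test-rc
    template:
      metadata:
        labels:
          app: test-rc
      spec:
        containers:
        - name: httpd
          image: httpd:2.4
  EOF
  kubectl patch rc test-rc -p '{"metadata":{"labels":{"test-rc":"patched"}}}'
  kubectl scale rc test-rc --replicas=2            # the scale-patch step
  kubectl get rc test-rc -o jsonpath='{.status.readyReplicas}'
  kubectl delete rc -l test-rc=patched             # the delete-by-collection step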
• [SLOW TEST:66.811 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":6,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:30.698: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name secret-emptykey-test-82419d6c-f864-4135-89b9-aeced0d1fb93 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:30.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3457" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":7,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:30.759: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:18:30.773: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:31.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3662" for this suite. 
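The [sig-node] Secrets spec above only has to show that a Secret whose data map contains an empty key is rejected by the API server. A quick way to reproduce that rejection; the secret name is a placeholder and dmFsdWU= is base64 for "value":

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-emptykey-demo
  data:
    "": dmFsdWU=
  EOF
  # Expected outcome: the request fails validation with an error along the lines
  # of 'a valid config key must consist of alphanumeric characters, ...'.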
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":8,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:27.719: INFO: >>> kubeConfig: /tmp/kubeconfig-2274336506 STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:18:27.771: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4ac17f7c-b132-49ef-b313-2337498e856c", Controller:(*bool)(0xc00055f66e), BlockOwnerDeletion:(*bool)(0xc00055f66f)}} Mar 2 00:18:27.778: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b249a517-63d3-446e-b1fc-68dfca08693f", Controller:(*bool)(0xc0045297fe), BlockOwnerDeletion:(*bool)(0xc0045297ff)}} Mar 2 00:18:27.783: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8dafd101-8244-4d54-8411-1d5619cd99c1", Controller:(*bool)(0xc004bb2996), BlockOwnerDeletion:(*bool)(0xc004bb2997)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:32.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8016" for this suite. 
• [SLOW TEST:5.087 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":3,"skipped":77,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:07.128: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 2 00:18:07.151: INFO: Waiting up to 5m0s for pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b" in namespace "emptydir-184" to be "Succeeded or Failed" Mar 2 00:18:07.155: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158774ms Mar 2 00:18:09.159: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008070909s Mar 2 00:18:11.164: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012933124s Mar 2 00:18:13.168: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017468728s Mar 2 00:18:15.172: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021620311s Mar 2 00:18:17.175: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02484875s Mar 2 00:18:19.180: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029108685s Mar 2 00:18:21.184: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033554622s Mar 2 00:18:23.189: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.037947316s Mar 2 00:18:25.193: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.04191411s Mar 2 00:18:27.196: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.045670912s Mar 2 00:18:29.200: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.049576456s Mar 2 00:18:31.203: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.052402888s Mar 2 00:18:33.207: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.056716683s STEP: Saw pod success Mar 2 00:18:33.207: INFO: Pod "pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b" satisfied condition "Succeeded or Failed" Mar 2 00:18:33.210: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b container test-container: STEP: delete the pod Mar 2 00:18:33.226: INFO: Waiting for pod pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b to disappear Mar 2 00:18:33.230: INFO: Pod pod-bc87f5ea-56c4-4e8d-807e-76fb07e27f7b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:33.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-184" for this suite. • [SLOW TEST:26.109 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:35.033: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:17:35.369: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:17:37.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:39.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:41.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:43.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:45.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:47.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:49.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:51.385: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:53.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:55.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:57.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:59.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:01.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:03.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:05.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:07.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:09.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:11.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:13.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:15.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:17.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:19.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:21.385: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:23.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:25.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:27.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:29.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:31.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:18:34.397: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:34.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-734" for this suite. STEP: Destroying namespace "webhook-734-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:59.439 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":7,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:32.790: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:32.804: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Creating first CR Mar 2 00:17:35.330: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-02T00:17:35Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-02T00:17:35Z]] name:name1 resourceVersion:9559 uid:5d32a1a4-58d7-4782-939c-632251bd67f7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 2 00:17:45.335: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-02T00:17:45Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-02T00:17:45Z]] name:name2 resourceVersion:10237 uid:2db4c806-f5c5-4f99-b985-fb8cff84aaed] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 2 00:17:55.341: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-02T00:17:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-02T00:17:55Z]] name:name1 resourceVersion:10575 uid:5d32a1a4-58d7-4782-939c-632251bd67f7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 2 00:18:05.347: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2023-03-02T00:17:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-02T00:18:05Z]] name:name2 resourceVersion:10825 uid:2db4c806-f5c5-4f99-b985-fb8cff84aaed] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 2 00:18:15.354: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-02T00:17:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-02T00:17:55Z]] name:name1 resourceVersion:11115 uid:5d32a1a4-58d7-4782-939c-632251bd67f7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 2 00:18:25.362: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-03-02T00:17:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-03-02T00:18:05Z]] name:name2 resourceVersion:11364 uid:2db4c806-f5c5-4f99-b985-fb8cff84aaed] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:35.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9077" for this suite. 
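For reference, the ADDED/MODIFIED/DELETED events above are what a watch on the custom resource produces; a minimal sketch using the client-go dynamic client is shown below. It is not the suite's code: the plural resource name "noxus" and the cluster-scoped access are assumptions, while the group/version (mygroup.example.com/v1beta1) and the "Got : <TYPE> <object>" print shape follow the log.

// Minimal sketch (assumed, not the e2e suite's code): watch the custom
// resources from the log above with the client-go dynamic client.
package crdwatchexample

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func watchNoxuObjects(ctx context.Context, client dynamic.Interface) error {
	// Group and version come from the log; the plural resource name is assumed.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}

	// Treating the CRD as cluster-scoped (the objects in the log carry no namespace).
	w, err := client.Resource(gvr).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()

	// Each create, update, and delete of a CR arrives as an ADDED, MODIFIED,
	// or DELETED event, matching the sequence recorded above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}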
• [SLOW TEST:63.092 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":6,"skipped":141,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:34.267: INFO: >>> kubeConfig: /tmp/kubeconfig-2645107774 STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-downwardapi-2spw STEP: Creating a pod to test atomic-volume-subpath Mar 2 00:17:34.315: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2spw" in namespace "subpath-9388" to be "Succeeded or Failed" Mar 2 00:17:34.319: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028517ms Mar 2 00:17:36.341: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025939014s Mar 2 00:17:38.345: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030303195s Mar 2 00:17:40.352: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037399631s Mar 2 00:17:42.357: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042390589s Mar 2 00:17:44.363: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048140486s Mar 2 00:17:46.367: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.052092279s Mar 2 00:17:48.372: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.05696524s Mar 2 00:17:50.377: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.062195407s Mar 2 00:17:52.381: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.065803744s Mar 2 00:17:54.384: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 20.069112691s Mar 2 00:17:56.390: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.074720501s Mar 2 00:17:58.394: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 24.078966374s Mar 2 00:18:00.398: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 26.083228105s Mar 2 00:18:02.402: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 28.087068836s Mar 2 00:18:04.406: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 30.091379578s Mar 2 00:18:06.413: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 32.098165293s Mar 2 00:18:08.419: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 34.103517167s Mar 2 00:18:10.426: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.110545501s Mar 2 00:18:12.430: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 38.114852127s Mar 2 00:18:14.434: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 40.118784s Mar 2 00:18:16.438: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 42.122722645s Mar 2 00:18:18.443: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 44.127810739s Mar 2 00:18:20.449: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 46.133555568s Mar 2 00:18:22.454: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 48.138886039s Mar 2 00:18:24.458: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 50.142668821s Mar 2 00:18:26.462: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 52.146481306s Mar 2 00:18:28.466: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 54.150589832s Mar 2 00:18:30.469: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 56.154282426s Mar 2 00:18:32.473: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 58.158285017s Mar 2 00:18:34.478: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.162934934s Mar 2 00:18:36.482: INFO: Pod "pod-subpath-test-downwardapi-2spw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m2.166868178s STEP: Saw pod success Mar 2 00:18:36.482: INFO: Pod "pod-subpath-test-downwardapi-2spw" satisfied condition "Succeeded or Failed" Mar 2 00:18:36.485: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-subpath-test-downwardapi-2spw container test-container-subpath-downwardapi-2spw: STEP: delete the pod Mar 2 00:18:36.501: INFO: Waiting for pod pod-subpath-test-downwardapi-2spw to disappear Mar 2 00:18:36.506: INFO: Pod pod-subpath-test-downwardapi-2spw no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-2spw Mar 2 00:18:36.506: INFO: Deleting pod "pod-subpath-test-downwardapi-2spw" in namespace "subpath-9388" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:36.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9388" for this suite. • [SLOW TEST:62.250 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:03.009: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:17:03.405: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:17:03.418: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:17:05.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:07.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:09.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:11.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:13.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:15.429: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:17.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:19.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:21.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:23.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:25.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:27.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:29.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:31.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:33.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:35.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:37.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:39.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:41.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:43.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:45.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:47.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:49.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:51.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:53.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:55.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:57.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:17:59.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:01.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:03.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 
3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:05.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:07.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:09.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:11.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:13.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:15.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:17.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:19.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 2 00:18:21.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:18:24.440: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:36.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3175" for this suite. STEP: Destroying namespace "webhook-3175-markers" for this suite. 
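The timeout behaviour exercised above comes from the TimeoutSeconds and FailurePolicy fields on the webhook registration: a 1s timeout against a webhook that sleeps 5s rejects the request under failurePolicy Fail, is tolerated under Ignore, and an unset timeout defaults to 10s in v1. A minimal sketch of such a registration object follows; the webhook name, rule, and CA bundle are illustrative, while the service name and namespace are the ones seen in the log:

```go
package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	timeout := int32(1)                                        // shorter than the webhook's 5s sleep
	ignore := admissionregistrationv1.Ignore                   // admit the request if the webhook times out
	sideEffects := admissionregistrationv1.SideEffectClassNone

	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-slow-webhook"}, // hypothetical name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "slow.webhook.example.com", // hypothetical webhook name
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-3175",     // namespace used by the test above
					Name:      "e2e-test-webhook", // service name seen in the log
				},
				CABundle: []byte("<ca bundle>"), // placeholder
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"}, // illustrative resource
				},
			}},
			FailurePolicy:           &ignore,
			TimeoutSeconds:          &timeout, // leaving this nil defaults to 10s in v1
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	fmt.Println("would register webhook configuration:", cfg.Name)
}
```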
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:93.613 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":3,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:22.661: INFO: >>> kubeConfig: /tmp/kubeconfig-3654524011 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:18:22.881: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:18:24.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:26.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:28.896: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:30.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:32.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:34.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:36.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 22, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:18:39.904: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:39.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9434" for this suite. STEP: Destroying namespace "webhook-9434-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.336 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":2,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:38.352: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 2 00:17:38.384: INFO: Waiting up to 5m0s for pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23" in namespace "emptydir-1559" to be "Succeeded or Failed" Mar 2 00:17:38.387: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.726965ms Mar 2 00:17:40.392: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007959043s Mar 2 00:17:42.396: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012019205s Mar 2 00:17:44.400: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016667541s Mar 2 00:17:46.405: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021456574s Mar 2 00:17:48.412: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028223169s Mar 2 00:17:50.416: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032688745s Mar 2 00:17:52.420: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 14.036096732s Mar 2 00:17:54.423: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 16.039314296s Mar 2 00:17:56.428: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 18.044442184s Mar 2 00:17:58.434: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 20.050366654s Mar 2 00:18:00.439: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 22.054906945s Mar 2 00:18:02.444: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 24.05996728s Mar 2 00:18:04.447: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 26.062990789s Mar 2 00:18:06.456: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 28.072063555s Mar 2 00:18:08.461: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 30.077220019s Mar 2 00:18:10.464: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 32.080340415s Mar 2 00:18:12.470: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 34.086712933s Mar 2 00:18:14.474: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 36.090719287s Mar 2 00:18:16.478: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 38.094373271s Mar 2 00:18:18.482: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 40.09869955s Mar 2 00:18:20.490: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 42.105918097s Mar 2 00:18:22.493: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 44.109041368s Mar 2 00:18:24.499: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 46.114798748s Mar 2 00:18:26.504: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 48.119959062s Mar 2 00:18:28.508: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. 
Elapsed: 50.123819948s Mar 2 00:18:30.512: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 52.128276122s Mar 2 00:18:32.517: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 54.132980805s Mar 2 00:18:34.523: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 56.139490653s Mar 2 00:18:36.530: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 58.146107903s Mar 2 00:18:38.535: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.150869258s Mar 2 00:18:40.539: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m2.155645759s STEP: Saw pod success Mar 2 00:18:40.539: INFO: Pod "pod-382277a5-698b-4fa1-9e6a-2d123f01fe23" satisfied condition "Succeeded or Failed" Mar 2 00:18:40.543: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-382277a5-698b-4fa1-9e6a-2d123f01fe23 container test-container: STEP: delete the pod Mar 2 00:18:40.568: INFO: Waiting for pod pod-382277a5-698b-4fa1-9e6a-2d123f01fe23 to disappear Mar 2 00:18:40.573: INFO: Pod pod-382277a5-698b-4fa1-9e6a-2d123f01fe23 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:40.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1559" for this suite. • [SLOW TEST:62.232 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":126,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:38.472: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating Pod STEP: Reading file content from the nginx-container Mar 2 00:18:40.701: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7072 PodName:pod-sharedvolume-919aeec2-894c-492f-a518-9ff6fd2db8b8 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:40.701: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 Mar 2 00:18:40.702: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:40.702: INFO: ExecWithOptions: execute(POST 
https://10.43.0.1:443/api/v1/namespaces/emptydir-7072/pods/pod-sharedvolume-919aeec2-894c-492f-a518-9ff6fd2db8b8/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:18:40.729: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:40.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7072" for this suite. • [SLOW TEST:62.266 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":4,"skipped":94,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:18.432: INFO: >>> kubeConfig: /tmp/kubeconfig-2889160273 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service nodeport-test with type=NodePort in namespace services-7817 STEP: creating replication controller nodeport-test in namespace services-7817 I0302 00:17:18.469434 359 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-7817, replica count: 2 I0302 00:17:21.521828 359 runners.go:193] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:24.522899 359 runners.go:193] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:27.524991 359 runners.go:193] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:30.525997 359 runners.go:193] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:33.526941 359 runners.go:193] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:36.529289 359 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:17:36.529: INFO: Creating new exec pod Mar 2 00:18:41.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2889160273 --namespace=services-7817 
exec execpodlztzx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Mar 2 00:18:41.676: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Mar 2 00:18:41.676: INFO: stdout: "nodeport-test-gw8fl" Mar 2 00:18:41.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2889160273 --namespace=services-7817 exec execpodlztzx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.50.85 80' Mar 2 00:18:41.771: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.50.85 80\nConnection to 10.43.50.85 80 port [tcp/http] succeeded!\n" Mar 2 00:18:41.771: INFO: stdout: "nodeport-test-dgqwf" Mar 2 00:18:41.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2889160273 --namespace=services-7817 exec execpodlztzx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.8 32398' Mar 2 00:18:41.863: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.8 32398\nConnection to 172.17.0.8 32398 port [tcp/*] succeeded!\n" Mar 2 00:18:41.863: INFO: stdout: "nodeport-test-dgqwf" Mar 2 00:18:41.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2889160273 --namespace=services-7817 exec execpodlztzx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.12 32398' Mar 2 00:18:41.942: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.12 32398\nConnection to 172.17.0.12 32398 port [tcp/*] succeeded!\n" Mar 2 00:18:41.942: INFO: stdout: "nodeport-test-dgqwf" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:41.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7817" for this suite. 
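The checks above reach the service three ways: by DNS name (nodeport-test:80), by ClusterIP (10.43.50.85:80), and by each node IP on the allocated NodePort (32398). A minimal sketch of the NodePort Service those checks depend on; the selector labels are illustrative, not the ones the e2e replication controller actually sets:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test", Namespace: "services-7817"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"}, // illustrative selector
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				// NodePort left unset so the API server allocates one (32398 in the run above)
			}},
		},
	}
	fmt.Println(svc.Name, "type:", svc.Spec.Type)
}
```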
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:83.522 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":2,"skipped":128,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:40.604: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-49425f04-2cf2-4142-a464-2f5c8f29e3db STEP: Creating a pod to test consume configMaps Mar 2 00:17:40.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a" in namespace "configmap-5812" to be "Succeeded or Failed" Mar 2 00:17:40.727: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.592999ms Mar 2 00:17:42.731: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019569743s Mar 2 00:17:44.736: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024715295s Mar 2 00:17:46.741: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028968702s Mar 2 00:17:48.747: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034782209s Mar 2 00:17:50.757: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.045217118s Mar 2 00:17:52.760: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.048574961s Mar 2 00:17:54.764: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.051981755s Mar 2 00:17:56.768: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.056534193s Mar 2 00:17:58.772: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.060349601s Mar 2 00:18:00.776: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.064542123s Mar 2 00:18:02.780: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.068446015s Mar 2 00:18:04.785: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.073427912s Mar 2 00:18:06.789: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.077296582s Mar 2 00:18:08.792: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.080598856s Mar 2 00:18:10.796: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.084621036s Mar 2 00:18:12.801: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.088801755s Mar 2 00:18:14.804: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.09224463s Mar 2 00:18:16.807: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.09565295s Mar 2 00:18:18.812: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.100510929s Mar 2 00:18:20.816: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.104447587s Mar 2 00:18:22.819: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.107615685s Mar 2 00:18:24.823: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.11115748s Mar 2 00:18:26.827: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.114820484s Mar 2 00:18:28.829: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 48.117382075s Mar 2 00:18:30.832: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.120157819s Mar 2 00:18:32.838: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.125918409s Mar 2 00:18:34.841: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 54.129740886s Mar 2 00:18:36.845: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.133203589s Mar 2 00:18:38.848: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 58.135945532s Mar 2 00:18:40.852: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.140474484s Mar 2 00:18:42.856: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m2.144732991s STEP: Saw pod success Mar 2 00:18:42.857: INFO: Pod "pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a" satisfied condition "Succeeded or Failed" Mar 2 00:18:42.859: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a container agnhost-container: STEP: delete the pod Mar 2 00:18:42.875: INFO: Waiting for pod pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a to disappear Mar 2 00:18:42.883: INFO: Pod pod-configmaps-3c8c3a4b-cef9-4f8c-87d3-4aa16716a50a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:42.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5812" for this suite. • [SLOW TEST:62.297 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:53.510: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating Agnhost RC Mar 2 00:17:53.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=kubectl-8898 create -f -' Mar 2 00:17:53.934: INFO: stderr: "" Mar 2 00:17:53.934: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
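The "Found 0 / 1" lines that follow are a wait loop listing pods matching the app=agnhost selector until one reports Running. A minimal client-go sketch of the same pattern, assuming an illustrative kubeconfig path and polling interval:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll once a second, as the log does, until every pod labelled app=agnhost is Running.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubectl-8898").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=agnhost"})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("Found %d / %d\n", running, len(pods.Items))
		return len(pods.Items) > 0 && running == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}
```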
Mar 2 00:17:54.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:17:54.937: INFO: Found 0 / 1 Mar 2 00:17:55.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:17:55.937: INFO: Found 0 / 1 Mar 2 00:17:56.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:17:56.937: INFO: Found 0 / 1 Mar 2 00:17:57.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:17:57.938: INFO: Found 0 / 1 Mar 2 00:17:58.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:17:58.937: INFO: Found 0 / 1 Mar 2 00:17:59.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:17:59.938: INFO: Found 0 / 1 Mar 2 00:18:00.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:00.938: INFO: Found 0 / 1 Mar 2 00:18:01.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:01.937: INFO: Found 0 / 1 Mar 2 00:18:02.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:02.937: INFO: Found 0 / 1 Mar 2 00:18:03.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:03.937: INFO: Found 0 / 1 Mar 2 00:18:04.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:04.937: INFO: Found 0 / 1 Mar 2 00:18:05.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:05.937: INFO: Found 0 / 1 Mar 2 00:18:06.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:06.937: INFO: Found 0 / 1 Mar 2 00:18:07.936: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:07.936: INFO: Found 0 / 1 Mar 2 00:18:08.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:08.937: INFO: Found 0 / 1 Mar 2 00:18:09.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:09.938: INFO: Found 0 / 1 Mar 2 00:18:10.939: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:10.939: INFO: Found 0 / 1 Mar 2 00:18:11.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:11.937: INFO: Found 0 / 1 Mar 2 00:18:12.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:12.939: INFO: Found 0 / 1 Mar 2 00:18:13.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:13.937: INFO: Found 0 / 1 Mar 2 00:18:14.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:14.938: INFO: Found 0 / 1 Mar 2 00:18:15.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:15.937: INFO: Found 0 / 1 Mar 2 00:18:16.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:16.937: INFO: Found 0 / 1 Mar 2 00:18:17.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:17.937: INFO: Found 0 / 1 Mar 2 00:18:18.941: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:18.941: INFO: Found 0 / 1 Mar 2 00:18:19.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:19.938: INFO: Found 0 / 1 Mar 2 00:18:20.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:20.938: INFO: Found 0 / 1 Mar 2 00:18:21.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:21.937: INFO: Found 0 / 1 Mar 2 00:18:22.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:22.938: INFO: Found 0 / 1 Mar 2 00:18:23.951: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:23.951: INFO: Found 0 / 1 Mar 2 00:18:24.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:24.938: INFO: Found 0 / 1 Mar 2 00:18:25.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:25.937: INFO: Found 0 / 1 Mar 2 00:18:26.938: INFO: Selector matched 1 pods for 
map[app:agnhost] Mar 2 00:18:26.938: INFO: Found 0 / 1 Mar 2 00:18:27.939: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:27.939: INFO: Found 0 / 1 Mar 2 00:18:28.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:28.938: INFO: Found 0 / 1 Mar 2 00:18:29.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:29.938: INFO: Found 0 / 1 Mar 2 00:18:30.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:30.937: INFO: Found 0 / 1 Mar 2 00:18:31.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:31.937: INFO: Found 0 / 1 Mar 2 00:18:32.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:32.938: INFO: Found 0 / 1 Mar 2 00:18:33.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:33.937: INFO: Found 0 / 1 Mar 2 00:18:34.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:34.938: INFO: Found 0 / 1 Mar 2 00:18:35.949: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:35.949: INFO: Found 0 / 1 Mar 2 00:18:36.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:36.938: INFO: Found 0 / 1 Mar 2 00:18:37.939: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:37.939: INFO: Found 0 / 1 Mar 2 00:18:38.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:38.937: INFO: Found 0 / 1 Mar 2 00:18:39.940: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:39.940: INFO: Found 0 / 1 Mar 2 00:18:40.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:40.938: INFO: Found 0 / 1 Mar 2 00:18:41.939: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:41.939: INFO: Found 0 / 1 Mar 2 00:18:42.942: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:42.942: INFO: Found 0 / 1 Mar 2 00:18:43.937: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:43.937: INFO: Found 0 / 1 Mar 2 00:18:44.938: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:44.938: INFO: Found 1 / 1 Mar 2 00:18:44.938: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 2 00:18:44.941: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:44.941: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 2 00:18:44.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=kubectl-8898 patch pod agnhost-primary-cpff4 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 2 00:18:44.990: INFO: stderr: "" Mar 2 00:18:44.990: INFO: stdout: "pod/agnhost-primary-cpff4 patched\n" STEP: checking annotations Mar 2 00:18:44.993: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:18:44.993: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:44.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8898" for this suite. 
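The patch step above adds the annotation x=y with a strategic-merge patch. An equivalent client-go call, assuming an illustrative kubeconfig path; the pod name and namespace are the ones from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same effect as: kubectl patch pod agnhost-primary-cpff4 -p '{"metadata":{"annotations":{"x":"y"}}}'
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := cs.CoreV1().Pods("kubectl-8898").Patch(context.TODO(),
		"agnhost-primary-cpff4", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotation x =", pod.Annotations["x"])
}
```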
• [SLOW TEST:51.493 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1485 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":9,"skipped":232,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:58.711: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Performing setup for networking test in namespace pod-network-test-4057 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 2 00:16:58.738: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 2 00:16:58.763: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:00.766: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:02.767: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:04.766: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:06.766: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:08.765: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:10.767: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:12.767: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:14.766: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:16.769: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 2 00:17:16.776: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:18.781: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:20.780: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:22.779: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:24.786: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:26.780: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:28.780: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:30.781: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:32.780: INFO: The status of Pod netserver-1 is Pending, 
waiting for it to be Running (with Ready = true) Mar 2 00:17:34.781: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:36.784: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:38.780: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 2 00:18:42.821: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 2 00:18:42.821: INFO: Going to poll 10.42.0.158 on port 8081 at least 0 times, with a maximum of 34 tries before failing Mar 2 00:18:42.824: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.42.0.158 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4057 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:42.824: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 Mar 2 00:18:42.824: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:42.825: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-4057/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.42.0.158+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:18:43.921: INFO: Found all 1 expected endpoints: [netserver-0] Mar 2 00:18:43.921: INFO: Going to poll 10.42.1.147 on port 8081 at least 0 times, with a maximum of 34 tries before failing Mar 2 00:18:43.923: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.42.1.147 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4057 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:43.923: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 Mar 2 00:18:43.924: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:43.924: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-4057/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.42.1.147+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:18:45.012: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:45.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4057" for this suite. 
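The UDP checks above exec `echo hostName | nc -w 1 -u <podIP> 8081` from a host-network test pod and expect the target netserver's pod name back. A standalone sketch of the same probe; the addresses are the ones from this run and are only reachable from inside that cluster:

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// probe sends "hostName" as a UDP datagram and returns the trimmed reply,
// mirroring `echo hostName | nc -w 1 -u <podIP> 8081` from the test above.
func probe(addr string) (string, error) {
	conn, err := net.DialTimeout("udp", addr, time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	conn.SetReadDeadline(time.Now().Add(time.Second))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(buf[:n])), nil
}

func main() {
	// Pod IPs taken from the run above; they have no meaning outside that cluster.
	for _, addr := range []string{"10.42.0.158:8081", "10.42.1.147:8081"} {
		host, err := probe(addr)
		if err != nil {
			fmt.Println(addr, "error:", err)
			continue
		}
		fmt.Println(addr, "->", host)
	}
}
```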
• [SLOW TEST:106.314 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":90,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:31.829: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-projected-mjm7 STEP: Creating a pod to test atomic-volume-subpath Mar 2 00:17:31.861: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mjm7" in namespace "subpath-3727" to be "Succeeded or Failed" Mar 2 00:17:31.865: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.544638ms Mar 2 00:17:33.869: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008195713s Mar 2 00:17:35.873: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011748974s Mar 2 00:17:37.877: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016070963s Mar 2 00:17:39.881: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019348428s Mar 2 00:17:41.884: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02287821s Mar 2 00:17:43.899: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.037364519s Mar 2 00:17:45.905: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043404383s Mar 2 00:17:47.909: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.047715165s Mar 2 00:17:49.914: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.052993922s Mar 2 00:17:51.918: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.056942122s Mar 2 00:17:53.922: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.061013925s Mar 2 00:17:55.928: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.067270935s Mar 2 00:17:57.933: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.071943235s Mar 2 00:17:59.936: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.074996521s Mar 2 00:18:01.941: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.079382786s Mar 2 00:18:03.943: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.082145798s Mar 2 00:18:05.947: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.086056032s Mar 2 00:18:07.951: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.089544127s Mar 2 00:18:09.955: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.093745323s Mar 2 00:18:11.959: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.09811678s Mar 2 00:18:13.962: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.100912363s Mar 2 00:18:15.966: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.104844193s Mar 2 00:18:17.970: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.108979867s Mar 2 00:18:19.976: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.114965201s Mar 2 00:18:21.980: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 50.119252691s Mar 2 00:18:23.984: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.12285604s Mar 2 00:18:25.988: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 54.126911665s Mar 2 00:18:27.993: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 56.131441599s Mar 2 00:18:30.001: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 58.140024975s Mar 2 00:18:32.004: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.14329334s Mar 2 00:18:34.007: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.146323771s Mar 2 00:18:36.017: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.155489456s Mar 2 00:18:38.021: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.159722837s Mar 2 00:18:40.026: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.16481238s Mar 2 00:18:42.030: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.168353542s Mar 2 00:18:44.032: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.171269746s Mar 2 00:18:46.038: INFO: Pod "pod-subpath-test-projected-mjm7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m14.176558721s STEP: Saw pod success Mar 2 00:18:46.038: INFO: Pod "pod-subpath-test-projected-mjm7" satisfied condition "Succeeded or Failed" Mar 2 00:18:46.040: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-subpath-test-projected-mjm7 container test-container-subpath-projected-mjm7: STEP: delete the pod Mar 2 00:18:46.058: INFO: Waiting for pod pod-subpath-test-projected-mjm7 to disappear Mar 2 00:18:46.064: INFO: Pod pod-subpath-test-projected-mjm7 no longer exists STEP: Deleting pod pod-subpath-test-projected-mjm7 Mar 2 00:18:46.064: INFO: Deleting pod "pod-subpath-test-projected-mjm7" in namespace "subpath-3727" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:46.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3727" for this suite. • [SLOW TEST:74.246 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":4,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:46.087: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:46.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-609" for this suite. 
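The PodTemplates spec just above exercises create/read/delete on the PodTemplate API, which is why it completes almost instantly. A rough client-go sketch of that lifecycle follows, assuming a kubeconfig in KUBECONFIG; the "default" namespace and nginx image are placeholders (the suite uses its generated podtemplate-* namespace and its own test image).

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // placeholder namespace
	ctx := context.Background()

	pt := &corev1.PodTemplate{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pod-template"},
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.23"}},
			},
		},
	}

	// Create, list, then delete – the same coarse lifecycle the conformance test walks through.
	if _, err := client.CoreV1().PodTemplates(ns).Create(ctx, pt, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	list, err := client.CoreV1().PodTemplates(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pod templates\n", len(list.Items))
	if err := client.CoreV1().PodTemplates(ns).Delete(ctx, pt.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}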
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":5,"skipped":103,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:56.728: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:17:56.747: INFO: Creating ReplicaSet my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20 Mar 2 00:17:56.758: INFO: Pod name my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20: Found 0 pods out of 1 Mar 2 00:18:01.762: INFO: Pod name my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20: Found 1 pods out of 1 Mar 2 00:18:01.762: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20" is running Mar 2 00:18:41.772: INFO: Pod "my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20-qlmg9" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-02 00:17:56 +0000 UTC Reason: Message:}]) Mar 2 00:18:41.772: INFO: Trying to dial the pod Mar 2 00:18:46.790: INFO: Controller my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20: Got expected result from replica 1 [my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20-qlmg9]: "my-hostname-basic-faa823ae-2f0f-48e2-bce6-752ce4ceef20-qlmg9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:46.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3674" for this suite. 
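The ReplicaSet the test above creates is conceptually a one-replica controller whose pods serve their own hostname; the check at 00:18:46.790 compares the HTTP response against the pod name. A sketch of such an object follows; the name, image and tag are placeholders, not the generated values from the log.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	name := "my-hostname-basic-example" // the suite uses a UUID-suffixed name
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": name}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
				Spec: corev1.PodSpec{
					// serve-hostname replies with the pod's name, which is what the
					// "Got expected result from replica 1" check compares against.
					Containers: []corev1.Container{{
						Name:  name,
						Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // image/tag are assumptions
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}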
• [SLOW TEST:50.070 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":8,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":55,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:34.987: INFO: >>> kubeConfig: /tmp/kubeconfig-446130155 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-2970 STEP: creating service affinity-clusterip in namespace services-2970 STEP: creating replication controller affinity-clusterip in namespace services-2970 I0302 00:16:35.021832 29 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-2970, replica count: 3 I0302 00:16:38.073383 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:41.073841 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:44.074589 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:47.076659 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:50.077839 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:53.078785 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:56.079558 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:16:59.080439 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:02.080857 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:05.081653 29 
runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:08.081841 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:11.082924 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:14.083086 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:17.084436 29 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:17:17.090: INFO: Creating new exec pod Mar 2 00:17:28.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=services-2970 exec execpod-affinitypfv87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Mar 2 00:17:28.249: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Mar 2 00:17:28.249: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:17:28.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=services-2970 exec execpod-affinitypfv87 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.44.241 80' Mar 2 00:17:28.414: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.44.241 80\nConnection to 10.43.44.241 80 port [tcp/http] succeeded!\n" Mar 2 00:17:28.414: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:17:28.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=services-2970 exec execpod-affinitypfv87 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.43.44.241:80/ ; done' Mar 2 00:17:28.556: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.44.241:80/\n" Mar 2 00:17:28.556: INFO: stdout: 
"\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp\naffinity-clusterip-hlddp" Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Received response from host: affinity-clusterip-hlddp Mar 2 00:17:28.556: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2970, will wait for the garbage collector to delete the pods Mar 2 00:17:28.624: INFO: Deleting ReplicationController affinity-clusterip took: 5.161818ms Mar 2 00:17:28.724: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.823017ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:47.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2970" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:132.076 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":55,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:45.349: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 STEP: create the container to handle the HTTPGet hook request. Mar 2 00:16:45.377: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:47.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:49.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:51.382: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:53.382: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:55.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:57.382: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:59.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:01.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:03.382: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:05.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:07.385: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:09.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:11.382: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:13.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:15.381: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 
00:17:17.382: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:19.383: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:21.381: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the pod with lifecycle hook Mar 2 00:17:21.396: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:23.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:25.401: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:27.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:29.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:31.401: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:33.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:35.408: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:37.401: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:39.399: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:41.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:43.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:45.402: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:47.401: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:49.401: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:51.401: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:53.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:55.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:57.402: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:59.402: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:01.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:03.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:05.402: INFO: The status of Pod pod-with-prestop-http-hook 
is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:07.405: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:09.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:11.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:13.402: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:15.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:17.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:19.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:21.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:23.400: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:25.400: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Mar 2 00:18:25.409: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:25.414: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:27.414: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:27.417: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:29.414: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:29.417: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:31.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:31.420: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:33.416: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:33.419: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:35.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:35.419: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:37.416: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:37.419: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:39.414: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:39.418: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:41.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:41.418: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:43.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:43.418: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:45.414: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:45.419: INFO: Pod pod-with-prestop-http-hook still exists Mar 2 00:18:47.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 2 00:18:47.420: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:47.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4847" for this suite. 
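The pod-with-prestop-http-hook pod above carries a preStop HTTP hook pointed at the pod-handle-http-request helper; deleting the pod is what fires the hook, and the "check prestop hook" step then inspects the helper's request log. A sketch of a pod carrying such a hook follows; the image, hook path and host IP are placeholders, not values taken from the suite.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "registry.k8s.io/pause:3.6", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					// The kubelet issues this GET before stopping the container;
					// the test's handler pod records the request.
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.42.0.1", // would be the handler pod's IP; placeholder here
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}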
• [SLOW TEST:122.114 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":72,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:47.492: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:47.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3954" for this suite. 
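The sysctl spec above only needs to submit a pod whose securityContext lists one valid and two invalid sysctls and confirm the API rejects it, which is why it finishes in well under a second. A sketch of such a pod spec follows; the particular invalid names and the busybox image are illustrative assumptions, not the suite's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-reject-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				// One safe sysctl plus two malformed names; API validation is expected
				// to reject the pod before it ever reaches a node.
				Sysctls: []corev1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "0"},
					{Name: "foo-", Value: "bar"},
					{Name: " ", Value: "1"},
				},
			},
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "busybox:1.36",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}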
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":4,"skipped":113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:10.384: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 207.242.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.242.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.242.43.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.43.242.207_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7175.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7175.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7175.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7175.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 207.242.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.242.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.242.43.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.43.242.207_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:18:44.460: INFO: Unable to read wheezy_udp@dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.463: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.468: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.481: INFO: Unable to read jessie_udp@dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.487: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.490: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local from pod dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262: the server could not find the requested resource (get pods dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262) Mar 2 00:18:44.503: INFO: Lookups using dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262 failed for: [wheezy_udp@dns-test-service.dns-7175.svc.cluster.local wheezy_tcp@dns-test-service.dns-7175.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local jessie_udp@dns-test-service.dns-7175.svc.cluster.local jessie_tcp@dns-test-service.dns-7175.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7175.svc.cluster.local] Mar 2 00:18:49.556: INFO: DNS probes using dns-7175/dns-test-bb41598e-5e40-4515-a6f1-b18abf27f262 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:49.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7175" for this suite. • [SLOW TEST:39.255 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":4,"skipped":81,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:13.652: INFO: >>> kubeConfig: /tmp/kubeconfig-3780238515 STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:18:13.680: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802" in namespace "security-context-test-9972" to be "Succeeded or Failed" Mar 2 00:18:13.686: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 5.295091ms Mar 2 00:18:15.692: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012198289s Mar 2 00:18:17.697: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016549461s Mar 2 00:18:19.701: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020235439s Mar 2 00:18:21.706: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025480325s Mar 2 00:18:23.710: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029490826s Mar 2 00:18:25.716: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 12.035466184s Mar 2 00:18:27.721: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 14.04052435s Mar 2 00:18:29.725: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 16.044398736s Mar 2 00:18:31.730: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 18.050055734s Mar 2 00:18:33.735: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 20.054427299s Mar 2 00:18:35.743: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 22.062964792s Mar 2 00:18:37.748: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 24.067395726s Mar 2 00:18:39.755: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 26.074800774s Mar 2 00:18:41.759: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 28.079068133s Mar 2 00:18:43.764: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 30.08369347s Mar 2 00:18:45.769: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 32.088845425s Mar 2 00:18:47.780: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 34.099749766s Mar 2 00:18:49.786: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Pending", Reason="", readiness=false. Elapsed: 36.105798078s Mar 2 00:18:51.791: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.110591906s Mar 2 00:18:51.791: INFO: Pod "busybox-user-65534-4641a973-af6f-40be-9a3e-345e4eb99802" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:51.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9972" for this suite. 
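The busybox-user-65534 pod above runs its container with runAsUser: 65534 and is expected to reach Succeeded, which is the condition polled for roughly 38 seconds in the log. A sketch of an equivalent pod spec follows; the image tag and command are placeholders for whatever the suite actually pins.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(65534) // the "nobody" UID the container must run as
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-user-65534",
				Image:   "busybox:1.36",              // placeholder image
				Command: []string{"sh", "-c", "id -u"}, // placeholder command
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &uid,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}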
• [SLOW TEST:38.149 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:46.811: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars Mar 2 00:18:46.834: INFO: Waiting up to 5m0s for pod "downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c" in namespace "downward-api-6313" to be "Succeeded or Failed" Mar 2 00:18:46.837: INFO: Pod "downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746443ms Mar 2 00:18:48.841: INFO: Pod "downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007027727s Mar 2 00:18:50.845: INFO: Pod "downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010760998s Mar 2 00:18:52.850: INFO: Pod "downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015614513s STEP: Saw pod success Mar 2 00:18:52.850: INFO: Pod "downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c" satisfied condition "Succeeded or Failed" Mar 2 00:18:52.853: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c container dapi-container: STEP: delete the pod Mar 2 00:18:52.879: INFO: Waiting for pod downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c to disappear Mar 2 00:18:52.888: INFO: Pod downward-api-c536598f-2e30-4bba-8b0e-5e4e6e0d6e6c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:52.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6313" for this suite. 
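The downward-api pod above asserts that the node's status.hostIP is injected into the container as an environment variable. A sketch of a pod wired that way follows; the container name dapi-container comes from the log, while the env var name, command and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.36", // placeholder image
				Command: []string{"sh", "-c", "env | grep HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					// The kubelet resolves this fieldRef to the node IP at container start.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}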
• [SLOW TEST:6.086 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":243,"failed":0} S ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:02.852: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Mar 2 00:15:02.881: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:04.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:06.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:08.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:10.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:12.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:14.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:16.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:18.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:20.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:22.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:24.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:26.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:28.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:30.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:32.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:34.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:36.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:38.883: INFO: The status of Pod pod1 is Pending, 
waiting for it to be Running (with Ready = true) Mar 2 00:15:40.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:42.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:44.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:46.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:48.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:50.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:52.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:54.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:56.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:15:58.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:00.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:02.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:04.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:06.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:08.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:10.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:12.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:14.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:16.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:18.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:20.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:22.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:24.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:26.887: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:28.889: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:30.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:32.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:34.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:36.886: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:38.884: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:40.885: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:42.888: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to 
create another pod(pod2) with hostport 54323 but hostIP 172.17.0.8 on the node which pod1 resides and expect scheduled Mar 2 00:16:42.911: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:44.916: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:46.915: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:48.914: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:50.917: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:52.916: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:54.915: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:56.916: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:58.916: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:00.915: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:02.915: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:04.914: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:06.914: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:08.914: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:10.914: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:12.917: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:14.916: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.17.0.8 but use UDP protocol on the node which pod2 resides Mar 2 00:17:14.934: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:16.938: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:18.937: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:20.939: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:22.938: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:24.937: INFO: The status of Pod pod3 is Running (Ready = true) Mar 2 00:17:24.945: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:26.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:28.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:30.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:32.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:34.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:36.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 
00:17:38.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:40.955: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:42.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:44.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:46.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:48.951: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:50.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:52.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:54.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:56.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:58.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:00.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:02.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:04.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:06.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:08.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:10.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:12.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:14.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:16.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:18.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:20.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:22.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:24.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:26.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:28.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:30.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:32.951: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:34.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:36.948: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with 
Ready = true) Mar 2 00:18:38.950: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:40.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:42.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:44.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:46.949: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:48.949: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Mar 2 00:18:48.952: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.17.0.8 http://127.0.0.1:54323/hostname] Namespace:hostport-4826 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:48.952: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 Mar 2 00:18:48.953: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:48.953: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/hostport-4826/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.17.0.8+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.17.0.8, port: 54323 Mar 2 00:18:48.985: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.17.0.8:54323/hostname] Namespace:hostport-4826 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:48.985: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 Mar 2 00:18:48.986: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:48.986: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/hostport-4826/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.17.0.8%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.17.0.8, port: 54323 UDP Mar 2 00:18:49.015: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.17.0.8 54323] Namespace:hostport-4826 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:49.015: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 Mar 2 00:18:49.016: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:49.016: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/hostport-4826/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.17.0.8+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:54.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-4826" for this suite. 
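The hostPort checks above come down to three pods sharing hostPort 54323 while differing in hostIP or protocol, which are then probed with curl and nc from a host-network pod. A minimal client-go sketch of that pod shape follows; the kubeconfig path, namespace, and image tag are illustrative placeholders, and the node pinning the conformance test does (scheduling pod2/pod3 onto pod1's node) is omitted here.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// makePod returns a pod that binds the given hostPort/hostIP/protocol combination,
// mirroring the pod1/pod2/pod3 combinations exercised above.
func makePod(name, hostIP string, proto v1.Protocol) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative image tag
				Ports: []v1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54323,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "hostport-demo" // hypothetical namespace

	// Same hostPort on every pod, but a different hostIP or protocol each time:
	// no port conflict is expected, so all three should schedule and run.
	pods := []*v1.Pod{
		makePod("pod1", "127.0.0.1", v1.ProtocolTCP),
		makePod("pod2", "172.17.0.8", v1.ProtocolTCP),
		makePod("pod3", "172.17.0.8", v1.ProtocolUDP),
	}
	for _, p := range pods {
		if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created", p.Name)
	}
}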
• [SLOW TEST:231.203 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:30.650: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:18:54.715: INFO: DNS probes using dns-7891/dns-test-1ef5a4c5-79d3-46d4-9b2c-05d2fb5ae726 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:54.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7891" for this suite. 
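The DNS probe above loops dig over both UDP (+notcp) and TCP (+tcp) against kubernetes.default.svc.cluster.local and records an OK marker per transport. The same pair of lookups can be sketched in Go with net.Resolver, forcing TCP through a custom Dial; this is a substitute for the dig-based prober, not the test's own code, and it only resolves from inside the cluster's DNS domain.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	const name = "kubernetes.default.svc.cluster.local"
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// UDP (default) lookup, analogous to `dig +notcp ... A`.
	udpAddrs, err := net.DefaultResolver.LookupHost(ctx, name)
	fmt.Println("udp:", udpAddrs, err)

	// Force the lookup over TCP, analogous to `dig +tcp ... A`.
	tcpResolver := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{}
			return d.DialContext(ctx, "tcp", address) // always dial the DNS server over TCP
		},
	}
	tcpAddrs, err := tcpResolver.LookupHost(ctx, name)
	fmt.Println("tcp:", tcpAddrs, err)
}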
• [SLOW TEST:24.084 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":4,"skipped":90,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:49.649: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:49.670: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-1277 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:55.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-9171" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:55.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-1277" for this suite. 
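The DisruptionController case lists PodDisruptionBudgets across all namespaces and then deletes them as a collection within one namespace. With client-go's policy/v1 client that is roughly the following sketch; the kubeconfig path and namespace are placeholders, not values from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Listing with an empty namespace returns PDBs across all namespaces.
	all, err := cs.PolicyV1().PodDisruptionBudgets("").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cluster-wide PDBs:", len(all.Items))

	// Delete the whole collection in a single namespace, as the test does for its "disruption" namespace.
	ns := "disruption-demo" // hypothetical namespace
	if err := cs.PolicyV1().PodDisruptionBudgets(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{}); err != nil {
		panic(err)
	}
}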
• [SLOW TEST:6.132 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":5,"skipped":96,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:07.871: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Performing setup for networking test in namespace pod-network-test-7678 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 2 00:17:07.890: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 2 00:17:07.914: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:09.919: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:11.919: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:13.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:15.919: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:17.918: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:19.920: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:21.918: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:23.919: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:25.919: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:27.919: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:29.918: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:31.919: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:33.918: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:35.919: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:37.920: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:39.921: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:41.919: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 2 00:17:43.940: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 2 00:17:43.992: INFO: The status of Pod netserver-1 is Running (Ready = 
false) Mar 2 00:17:45.998: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:17:47.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:17:50.001: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:17:51.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:17:53.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:17:55.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:17:57.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:17:59.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:01.999: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:03.995: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:05.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:07.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:09.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:11.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:13.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:15.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:17.998: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:19.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:21.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:23.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:25.997: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:27.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:30.001: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:31.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:33.995: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:36.000: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:37.996: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 2 00:18:40.003: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 2 00:18:56.038: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 2 00:18:56.038: INFO: Breadth first check of 10.42.0.167 on host 172.17.0.12... 
Mar 2 00:18:56.040: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.0.247:9080/dial?request=hostname&protocol=http&host=10.42.0.167&port=8083&tries=1'] Namespace:pod-network-test-7678 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:56.040: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 Mar 2 00:18:56.041: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:56.041: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-7678/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.42.0.247%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.42.0.167%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:18:56.114: INFO: Waiting for responses: map[] Mar 2 00:18:56.114: INFO: reached 10.42.0.167 after 0/1 tries Mar 2 00:18:56.114: INFO: Breadth first check of 10.42.1.154 on host 172.17.0.8... Mar 2 00:18:56.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.42.0.247:9080/dial?request=hostname&protocol=http&host=10.42.1.154&port=8083&tries=1'] Namespace:pod-network-test-7678 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:56.117: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 Mar 2 00:18:56.117: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:56.117: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/pod-network-test-7678/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.42.0.247%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.42.1.154%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:18:56.146: INFO: Waiting for responses: map[] Mar 2 00:18:56.146: INFO: reached 10.42.1.154 after 0/1 tries Mar 2 00:18:56.146: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:56.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7678" for this suite. 
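Each "Breadth first check" above is an HTTP GET against the agnhost /dial endpoint of the test-container-pod, asking it to reach a netserver pod on port 8083 and echo back its hostname. A standalone Go sketch of the same request follows; the pod IPs are the ones observed in this run and are only reachable from inside the cluster network.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// dialViaProxyPod builds the same /dial request the test issues through the
// test-container-pod: ask the agnhost webserver at proxyAddr to contact
// targetHost:targetPort over HTTP and report the hostname it sees.
func dialViaProxyPod(proxyAddr, targetHost string, targetPort int) (string, error) {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", targetHost)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", "1")
	u := fmt.Sprintf("http://%s/dial?%s", proxyAddr, q.Encode())

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Mirrors the first check: proxy pod 10.42.0.247:9080 dials netserver 10.42.0.167:8083.
	out, err := dialViaProxyPod("10.42.0.247:9080", "10.42.0.167", 8083)
	fmt.Println(out, err)
}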
• [SLOW TEST:108.284 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:21.504: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:18:21.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd" in namespace "downward-api-2784" to be "Succeeded or Failed" Mar 2 00:18:21.531: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025814ms Mar 2 00:18:23.534: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006238429s Mar 2 00:18:25.538: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010040191s Mar 2 00:18:27.543: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015188827s Mar 2 00:18:29.546: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018513349s Mar 2 00:18:31.550: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02242332s Mar 2 00:18:33.555: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026872322s Mar 2 00:18:35.559: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031149596s Mar 2 00:18:37.563: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035525535s Mar 2 00:18:39.569: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.041005518s Mar 2 00:18:41.572: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.044483288s Mar 2 00:18:43.576: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.047856207s Mar 2 00:18:45.580: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.051787172s Mar 2 00:18:47.594: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.066412918s Mar 2 00:18:49.603: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.075081423s Mar 2 00:18:51.608: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.079570952s Mar 2 00:18:53.611: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.083092141s Mar 2 00:18:55.616: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.08784208s Mar 2 00:18:57.621: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.09327331s STEP: Saw pod success Mar 2 00:18:57.621: INFO: Pod "downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd" satisfied condition "Succeeded or Failed" Mar 2 00:18:57.624: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd container client-container: STEP: delete the pod Mar 2 00:18:57.641: INFO: Waiting for pod downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd to disappear Mar 2 00:18:57.645: INFO: Pod downwardapi-volume-58266703-4ff4-4e89-b3a4-ed62ca619edd no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:57.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2784" for this suite. 
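The downward API volume case mounts a volume whose file is populated from the container's own memory request (requests.memory) and expects the pod to succeed after reading it. A sketch of that pod spec with the core/v1 types follows; the busybox image, 32Mi request, and mount path are illustrative assumptions rather than the test's exact values.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The downward API volume projects the container's memory request into
	// /etc/podinfo/memory_request; the container cats the file and exits.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.36", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.DownwardAPI.Items[0])
}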
• [SLOW TEST:36.149 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:33.245: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars Mar 2 00:18:33.271: INFO: Waiting up to 5m0s for pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3" in namespace "downward-api-3030" to be "Succeeded or Failed" Mar 2 00:18:33.275: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541481ms Mar 2 00:18:35.280: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008923293s Mar 2 00:18:37.287: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015635756s Mar 2 00:18:39.289: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018436803s Mar 2 00:18:41.296: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025090761s Mar 2 00:18:43.300: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.029173726s Mar 2 00:18:45.307: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.036492753s Mar 2 00:18:47.314: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043043058s Mar 2 00:18:49.317: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.046234704s Mar 2 00:18:51.321: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.049853249s Mar 2 00:18:53.325: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.054375107s Mar 2 00:18:55.329: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.058570862s Mar 2 00:18:57.334: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.062751257s Mar 2 00:18:59.339: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.068048358s STEP: Saw pod success Mar 2 00:18:59.339: INFO: Pod "downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3" satisfied condition "Succeeded or Failed" Mar 2 00:18:59.343: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3 container dapi-container: STEP: delete the pod Mar 2 00:18:59.365: INFO: Waiting for pod downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3 to disappear Mar 2 00:18:59.370: INFO: Pod downward-api-15f6cb8d-96f6-4b1c-8e4c-5cda6140d6a3 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:18:59.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3030" for this suite. • [SLOW TEST:26.137 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:37.815: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Setting up the test STEP: Creating hostNetwork=false pod Mar 2 00:17:37.850: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:39.856: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:41.856: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:43.872: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:45.857: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:47.857: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:49.858: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:51.855: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:53.853: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:55.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:57.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = 
true) Mar 2 00:17:59.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:01.856: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:03.853: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:05.855: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:07.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:09.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:11.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:13.853: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:15.855: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:17.856: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:19.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:21.853: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:23.855: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:25.855: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:27.856: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:29.854: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:31.857: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Mar 2 00:18:31.879: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:33.882: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:35.889: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:37.884: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:39.883: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:41.886: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:43.882: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:45.886: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:47.884: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:49.883: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:51.884: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:53.883: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:55.884: INFO: The status of Pod test-host-network-pod is Pending, waiting 
for it to be Running (with Ready = true) Mar 2 00:18:57.883: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:59.882: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 2 00:18:59.885: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:59.885: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:18:59.885: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:59.885: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:18:59.949: INFO: Exec stderr: "" Mar 2 00:18:59.949: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:59.949: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:18:59.950: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:59.950: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:18:59.980: INFO: Exec stderr: "" Mar 2 00:18:59.980: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:18:59.980: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:18:59.981: INFO: ExecWithOptions: Clientset creation Mar 2 00:18:59.981: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.013: INFO: Exec stderr: "" Mar 2 00:19:00.014: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:00.014: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:19:00.014: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:00.014: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.044: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 2 00:19:00.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:00.044: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:19:00.044: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:00.044: INFO: ExecWithOptions: execute(POST 
https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.073: INFO: Exec stderr: "" Mar 2 00:19:00.073: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:00.073: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:19:00.074: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:00.074: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.104: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 2 00:19:00.104: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:00.104: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:19:00.105: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:00.105: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.135: INFO: Exec stderr: "" Mar 2 00:19:00.135: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:00.135: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:19:00.135: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:00.135: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.163: INFO: Exec stderr: "" Mar 2 00:19:00.163: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:00.163: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:19:00.164: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:00.164: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.193: INFO: Exec stderr: "" Mar 2 00:19:00.193: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7030 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:00.193: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 Mar 2 00:19:00.193: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:00.193: INFO: ExecWithOptions: execute(POST 
https://10.43.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-7030/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:00.223: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:00.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7030" for this suite. • [SLOW TEST:82.417 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:47.616: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:18:47.676: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:49.680: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:51.681: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:53.680: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:55.684: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:57.681: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:59.681: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:01.682: INFO: The status of Pod busybox-readonly-fs6f393271-7303-4386-89b9-0e6f31b5a4ee is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:01.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6121" for this 
suite. • [SLOW TEST:14.085 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:35.886: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:18:35.936: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51" in namespace "projected-1314" to be "Succeeded or Failed" Mar 2 00:18:35.953: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 16.537303ms Mar 2 00:18:37.956: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019506193s Mar 2 00:18:39.963: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026729536s Mar 2 00:18:41.967: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030454207s Mar 2 00:18:43.969: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033366052s Mar 2 00:18:45.973: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 10.037336013s Mar 2 00:18:47.978: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 12.042066436s Mar 2 00:18:49.984: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 14.047390062s Mar 2 00:18:51.989: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 16.052522823s Mar 2 00:18:53.993: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 18.056684661s Mar 2 00:18:55.997: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 20.060693103s Mar 2 00:18:58.001: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.064685733s Mar 2 00:19:00.006: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 24.070187145s Mar 2 00:19:02.011: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.074809115s STEP: Saw pod success Mar 2 00:19:02.011: INFO: Pod "downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51" satisfied condition "Succeeded or Failed" Mar 2 00:19:02.014: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51 container client-container: STEP: delete the pod Mar 2 00:19:02.038: INFO: Waiting for pod downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51 to disappear Mar 2 00:19:02.043: INFO: Pod downwardapi-volume-04f85fd0-1aca-4e17-83f5-23bd77b2ab51 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:02.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1314" for this suite. • [SLOW TEST:26.166 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":147,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:42.910: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name projected-configmap-test-volume-map-5f383923-1083-471b-b4f8-6b4c9f53ce0d STEP: Creating a pod to test consume configMaps Mar 2 00:18:42.951: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50" in namespace "projected-1223" to be "Succeeded or Failed" Mar 2 00:18:42.953: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.764618ms Mar 2 00:18:44.957: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006205311s Mar 2 00:18:46.961: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010042514s Mar 2 00:18:48.964: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.013744047s Mar 2 00:18:50.969: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018588648s Mar 2 00:18:52.973: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022718961s Mar 2 00:18:54.978: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027570961s Mar 2 00:18:56.985: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 14.034539587s Mar 2 00:18:58.988: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 16.037527604s Mar 2 00:19:00.994: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043845068s Mar 2 00:19:02.999: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.048895145s STEP: Saw pod success Mar 2 00:19:03.000: INFO: Pod "pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50" satisfied condition "Succeeded or Failed" Mar 2 00:19:03.003: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50 container agnhost-container: STEP: delete the pod Mar 2 00:19:03.035: INFO: Waiting for pod pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50 to disappear Mar 2 00:19:03.040: INFO: Pod pod-projected-configmaps-9ca049d7-fb06-47e6-8e1e-4c36388b1b50 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:03.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1223" for this suite. 
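The projected configMap case consumes a ConfigMap through a projected volume with a key-to-path mapping and an explicit per-item mode. A sketch of just the volume definition follows; the ConfigMap name, key, path, and the 0400 mode are illustrative stand-ins for the generated names seen in the log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item file mode, as in the "Item mode set" case above

	// Projected volume with one ConfigMap source whose key "data-1" is remapped
	// to "path/to/data-2" and written with mode 0400.
	vol := v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map", // illustrative name
						},
						Items: []v1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}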
• [SLOW TEST:20.141 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:34.948: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 2 00:17:34.984: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:36.990: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:38.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:40.998: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:42.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:44.987: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:46.990: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:48.987: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:50.991: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:52.987: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:54.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:56.988: INFO: The status of Pod 
pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:58.987: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:00.989: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:02.989: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:04.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:06.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:08.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:10.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:12.989: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:14.990: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:16.989: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:18.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:21.019: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:22.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:24.988: INFO: The status of Pod pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 2 00:18:25.501: INFO: Successfully updated pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528" Mar 2 00:18:25.501: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528" in namespace "pods-6757" to be "terminated due to deadline exceeded" Mar 2 00:18:25.505: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 3.246493ms Mar 2 00:18:27.510: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 2.008852347s Mar 2 00:18:29.514: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.012516924s Mar 2 00:18:31.518: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 6.016874996s Mar 2 00:18:33.523: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 8.021342393s Mar 2 00:18:35.527: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 10.025982225s Mar 2 00:18:37.531: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 12.030002088s Mar 2 00:18:39.535: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 14.033241598s Mar 2 00:18:41.540: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 16.038286232s Mar 2 00:18:43.544: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 18.042943588s Mar 2 00:18:45.548: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 20.047028476s Mar 2 00:18:47.556: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 22.054417754s Mar 2 00:18:49.560: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 24.058860266s Mar 2 00:18:51.566: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 26.064214788s Mar 2 00:18:53.570: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 28.068563708s Mar 2 00:18:55.575: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 30.07335753s Mar 2 00:18:57.582: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 32.081010737s Mar 2 00:18:59.586: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 34.084639163s Mar 2 00:19:01.590: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Running", Reason="", readiness=true. Elapsed: 36.088716457s Mar 2 00:19:03.595: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 38.093961856s Mar 2 00:19:03.595: INFO: Pod "pod-update-activedeadlineseconds-25a940d1-2139-4633-9b22-316b6c858528" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:03.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6757" for this suite. 
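A note on what this spec relies on: spec.activeDeadlineSeconds is one of the few pod fields that may be changed on a running pod (it can be added or lowered), and once the deadline passes the kubelet fails the pod with reason DeadlineExceeded, which is the "terminated due to deadline exceeded" condition polled above. A minimal sketch of a pod carrying the field; the name, image, and the 30-second value are illustrative, not the values used by this test.

  apiVersion: v1
  kind: Pod
  metadata:
    name: active-deadline-demo        # illustrative
  spec:
    activeDeadlineSeconds: 30         # after roughly 30s of running time the pod is failed with reason DeadlineExceeded
    restartPolicy: Never
    containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
      command: ["sleep", "3600"]      # outlives the deadline on purpose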
• [SLOW TEST:88.656 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:57.685: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:18:57.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:03.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8797" for this suite. • [SLOW TEST:6.183 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":4,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:34.494: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:18:34.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58" in namespace "projected-5196" to be "Succeeded or Failed" Mar 2 00:18:34.525: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": 
Phase="Pending", Reason="", readiness=false. Elapsed: 6.909656ms Mar 2 00:18:36.529: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011323844s Mar 2 00:18:38.534: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016588223s Mar 2 00:18:40.538: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020774715s Mar 2 00:18:42.544: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026526585s Mar 2 00:18:44.548: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03071708s Mar 2 00:18:46.553: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 12.034961064s Mar 2 00:18:48.558: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 14.04019811s Mar 2 00:18:50.563: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 16.045727389s Mar 2 00:18:52.568: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 18.050793868s Mar 2 00:18:54.573: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 20.054935107s Mar 2 00:18:56.576: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 22.058757108s Mar 2 00:18:58.582: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 24.064238407s Mar 2 00:19:00.586: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 26.068130385s Mar 2 00:19:02.589: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Pending", Reason="", readiness=false. Elapsed: 28.071440166s Mar 2 00:19:04.595: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.076891811s STEP: Saw pod success Mar 2 00:19:04.595: INFO: Pod "downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58" satisfied condition "Succeeded or Failed" Mar 2 00:19:04.598: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58 container client-container: STEP: delete the pod Mar 2 00:19:04.620: INFO: Waiting for pod downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58 to disappear Mar 2 00:19:04.626: INFO: Pod downwardapi-volume-c5620512-c923-4db1-b05d-44a16c346e58 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:04.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5196" for this suite. 
• [SLOW TEST:30.142 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":170,"failed":0} S ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:15.181: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating server pod server in namespace prestop-7055 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7055 STEP: Deleting pre-stop pod Mar 2 00:19:06.236: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:06.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7055" for this suite. 
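The PreStop spec works by giving the tester pod a preStop lifecycle hook that calls back to the server pod, whose JSON status (the "prestop": 1 counter above) proves the hook ran before the container was killed. A minimal sketch of a pod with such a hook; the image and the callback URL are illustrative, not the addresses used in this run.

  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo                # illustrative
  spec:
    containers:
    - name: tester
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
      command: ["sleep", "3600"]
      lifecycle:
        preStop:
          exec:
            command: ["wget", "-qO-", "http://10.0.0.10:8080/prestop"]   # illustrative server address

When the pod is deleted, the kubelet runs the hook first and only then sends the termination signal, all within the pod's grace period.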
• [SLOW TEST:51.075 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":5,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:07.098: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8248 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8248 I0302 00:17:07.153678 26 runners.go:193] Created replication controller with name: externalname-service, namespace: services-8248, replica count: 2 I0302 00:17:10.205199 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:13.205830 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:16.206896 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:19.206979 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:22.209059 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:25.209534 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:28.209844 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:31.210897 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:34.211064 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:37.213866 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 
00:17:40.215225 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:43.215461 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:46.217681 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:49.217968 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:52.218965 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:55.219392 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:17:58.221500 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:01.221857 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:04.222901 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:07.223356 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:10.224994 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:13.227092 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:16.229284 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:19.229536 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:22.230240 26 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:18:22.230: INFO: Creating new exec pod Mar 2 00:19:07.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=services-8248 exec execpodvrlk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Mar 2 00:19:07.331: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Mar 2 00:19:07.331: INFO: stdout: "externalname-service-42494" Mar 2 00:19:07.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=services-8248 exec execpodvrlk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.218.13 80' Mar 2 00:19:07.451: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.218.13 80\nConnection to 10.43.218.13 80 port [tcp/http] succeeded!\n" Mar 2 00:19:07.451: 
INFO: stdout: "externalname-service-rjwtb" Mar 2 00:19:07.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=services-8248 exec execpodvrlk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.12 31978' Mar 2 00:19:07.586: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.12 31978\nConnection to 172.17.0.12 31978 port [tcp/*] succeeded!\n" Mar 2 00:19:07.586: INFO: stdout: "externalname-service-42494" Mar 2 00:19:07.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=services-8248 exec execpodvrlk8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.8 31978' Mar 2 00:19:07.686: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.8 31978\nConnection to 172.17.0.8 31978 port [tcp/*] succeeded!\n" Mar 2 00:19:07.686: INFO: stdout: "externalname-service-42494" Mar 2 00:19:07.686: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:07.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8248" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:120.621 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":6,"skipped":168,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:07.725: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:07.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-607" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":7,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:41.963: INFO: >>> kubeConfig: /tmp/kubeconfig-2889160273 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-map-f72e71e7-c076-4a63-9f56-6d8f05eeb3e6 STEP: Creating a pod to test consume secrets Mar 2 00:18:41.994: INFO: Waiting up to 5m0s for pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204" in namespace "secrets-7131" to be "Succeeded or Failed" Mar 2 00:18:41.997: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842076ms Mar 2 00:18:44.000: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005673829s Mar 2 00:18:46.004: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009654891s Mar 2 00:18:48.009: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014467099s Mar 2 00:18:50.013: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018715224s Mar 2 00:18:52.017: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022973386s Mar 2 00:18:54.020: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02595924s Mar 2 00:18:56.024: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 14.029815033s Mar 2 00:18:58.028: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 16.034407542s Mar 2 00:19:00.040: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 18.045680841s Mar 2 00:19:02.044: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 20.049800277s Mar 2 00:19:04.047: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 22.053300255s Mar 2 00:19:06.053: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Pending", Reason="", readiness=false. Elapsed: 24.058703002s Mar 2 00:19:08.057: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.062638401s STEP: Saw pod success Mar 2 00:19:08.057: INFO: Pod "pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204" satisfied condition "Succeeded or Failed" Mar 2 00:19:08.060: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204 container secret-volume-test: STEP: delete the pod Mar 2 00:19:08.081: INFO: Waiting for pod pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204 to disappear Mar 2 00:19:08.087: INFO: Pod pod-secrets-bdd5c91f-5558-4ef8-a4c5-5d5154afd204 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:08.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7131" for this suite. • [SLOW TEST:26.133 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":143,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:31.332: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Mar 2 00:18:31.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2625432154 --namespace=kubectl-6835 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Mar 2 00:18:31.399: INFO: stderr: "" Mar 2 00:18:31.399: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Mar 2 00:18:31.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2625432154 --namespace=kubectl-6835 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' Mar 2 00:18:31.809: INFO: stderr: "" Mar 2 00:18:31.809: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Mar 2 00:18:31.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2625432154 --namespace=kubectl-6835 delete pods e2e-test-httpd-pod' Mar 2 00:19:10.695: INFO: stderr: "" Mar 2 00:19:10.695: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:10.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6835" for this suite. • [SLOW TEST:39.374 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:926 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":9,"skipped":218,"failed":0} SSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:03.884: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 2 00:19:03.907: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 2 00:19:03.913: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 2 00:19:03.913: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 2 00:19:03.921: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 2 00:19:03.921: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 2 00:19:03.930: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 2 00:19:03.930: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 2 00:19:10.958: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:10.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-9926" for this suite. • [SLOW TEST:7.092 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":-1,"completed":5,"skipped":158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:47.069: INFO: >>> kubeConfig: /tmp/kubeconfig-446130155 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating all guestbook components Mar 2 00:18:47.092: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Mar 2 00:18:47.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 create -f -' Mar 2 00:18:47.202: INFO: stderr: "" Mar 2 00:18:47.202: INFO: stdout: "service/agnhost-replica created\n" Mar 2 00:18:47.202: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Mar 2 00:18:47.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 create -f -' Mar 2 00:18:47.312: INFO: stderr: "" Mar 2 00:18:47.312: INFO: stdout: "service/agnhost-primary created\n" Mar 2 00:18:47.312: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 2 00:18:47.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 create -f -' Mar 2 00:18:47.423: INFO: stderr: "" Mar 2 00:18:47.423: INFO: stdout: "service/frontend created\n" Mar 2 00:18:47.424: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 2 00:18:47.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 create -f -' Mar 2 00:18:47.541: INFO: stderr: "" Mar 2 00:18:47.541: INFO: stdout: "deployment.apps/frontend created\n" Mar 2 00:18:47.541: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 2 00:18:47.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 create -f -' Mar 2 00:18:47.650: INFO: stderr: "" Mar 2 00:18:47.650: INFO: stdout: "deployment.apps/agnhost-primary created\n" Mar 2 00:18:47.650: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 2 00:18:47.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 create -f -' Mar 2 00:18:47.764: INFO: stderr: "" Mar 2 00:18:47.764: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Mar 2 00:18:47.764: INFO: Waiting for all frontend pods to be Running. Mar 2 00:19:12.818: INFO: Waiting for frontend to serve content. Mar 2 00:19:12.830: INFO: Trying to add a new entry to the guestbook. Mar 2 00:19:12.844: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 2 00:19:12.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 delete --grace-period=0 --force -f -' Mar 2 00:19:12.939: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:19:12.939: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Mar 2 00:19:12.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 delete --grace-period=0 --force -f -' Mar 2 00:19:13.025: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:19:13.025: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Mar 2 00:19:13.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 delete --grace-period=0 --force -f -' Mar 2 00:19:13.097: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:19:13.097: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 2 00:19:13.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 delete --grace-period=0 --force -f -' Mar 2 00:19:13.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:19:13.147: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 2 00:19:13.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 delete --grace-period=0 --force -f -' Mar 2 00:19:13.208: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:19:13.208: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Mar 2 00:19:13.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-446130155 --namespace=kubectl-3997 delete --grace-period=0 --force -f -' Mar 2 00:19:13.264: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:19:13.264: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:13.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3997" for this suite. 
• [SLOW TEST:26.215 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":3,"skipped":62,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":164,"failed":0} [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:03.605: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0302 00:19:13.652806 259 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. Mar 2 00:19:13.652: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:13.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2536" for this suite. 
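The garbage collector links dependent objects to their owners through metadata.ownerReferences; deleting the replication controller without orphaning lets the GC remove every pod that carries such a reference, which is what the "wait for all pods to be garbage collected" step checks. A sketch of the reference as it appears on a dependent pod; the names, UID, and image are illustrative.

  apiVersion: v1
  kind: Pod
  metadata:
    name: simpletest-rc-abcde                        # illustrative pod created by the RC
    ownerReferences:
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc                            # illustrative owner name
      uid: 3c2d9f6e-1111-2222-3333-444455556666      # illustrative UID of the owner
      controller: true
      blockOwnerDeletion: true
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2

Had the RC been deleted with an Orphan propagation policy instead, the GC would have stripped the reference and left the pods running.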
• [SLOW TEST:10.057 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":267,"failed":0} [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:01.702: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-map-00698b22-2342-43c5-b42e-1bfb3eb0ec07 STEP: Creating a pod to test consume secrets Mar 2 00:19:01.732: INFO: Waiting up to 5m0s for pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b" in namespace "secrets-9207" to be "Succeeded or Failed" Mar 2 00:19:01.735: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.499202ms Mar 2 00:19:03.740: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007935817s Mar 2 00:19:05.745: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013459452s Mar 2 00:19:07.749: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017285553s Mar 2 00:19:09.755: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023001338s Mar 2 00:19:11.759: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026834276s Mar 2 00:19:13.763: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.030590636s STEP: Saw pod success Mar 2 00:19:13.763: INFO: Pod "pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b" satisfied condition "Succeeded or Failed" Mar 2 00:19:13.765: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b container secret-volume-test: STEP: delete the pod Mar 2 00:19:13.788: INFO: Waiting for pod pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b to disappear Mar 2 00:19:13.794: INFO: Pod pod-secrets-b47a1a6f-1b19-4f53-a813-0e2b300c868b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:13.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9207" for this suite. 
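This secret-volume spec mirrors the projected ConfigMap one earlier in the log: a key is remapped to a new path and given an explicit per-item file mode. A minimal sketch; the secret name, key, path, and mode are illustrative.

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-volume-demo          # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map-demo   # illustrative Secret holding a key "data-1"
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400                       # "Item Mode set": per-item octal file mode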
• [SLOW TEST:12.101 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:36.635: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod Mar 2 00:18:36.652: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:15.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8831" for this suite. 
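The init-container spec depends on restartPolicy: Never; with that policy a failing init container puts the whole pod into Failed and the app containers are never started, which is the condition the framework waited for above. A minimal sketch; names and images are illustrative.

  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo              # illustrative
  spec:
    restartPolicy: Never              # init failure means the pod goes Failed and app containers never start
    initContainers:
    - name: init1
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
      command: ["sh", "-c", "exit 1"] # deliberately failing init container
    containers:
    - name: app
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
      command: ["sleep", "3600"]      # never runs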
• [SLOW TEST:38.466 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":4,"skipped":151,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:03.064: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 2 00:19:03.093: INFO: Waiting up to 5m0s for pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905" in namespace "emptydir-2851" to be "Succeeded or Failed" Mar 2 00:19:03.097: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33531ms Mar 2 00:19:05.101: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007878263s Mar 2 00:19:07.109: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01562325s Mar 2 00:19:09.115: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021919856s Mar 2 00:19:11.119: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026421411s Mar 2 00:19:13.124: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905": Phase="Pending", Reason="", readiness=false. Elapsed: 10.031086153s Mar 2 00:19:15.131: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.037952411s STEP: Saw pod success Mar 2 00:19:15.131: INFO: Pod "pod-15984117-67ca-4f2c-b53d-f722b4a4c905" satisfied condition "Succeeded or Failed" Mar 2 00:19:15.135: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-15984117-67ca-4f2c-b53d-f722b4a4c905 container test-container: STEP: delete the pod Mar 2 00:19:15.166: INFO: Waiting for pod pod-15984117-67ca-4f2c-b53d-f722b4a4c905 to disappear Mar 2 00:19:15.173: INFO: Pod pod-15984117-67ca-4f2c-b53d-f722b4a4c905 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:15.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2851" for this suite. 
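The emptyDir spec name encodes its parameters: the file is written as root, with mode 0644, on the default medium (node-local disk rather than tmpfs). A minimal sketch of a pod exercising the same combination; the name and image are illustrative.

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo          # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
      securityContext:
        runAsUser: 0                  # the "root" part of (root,0644,default)
      command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                    # "default" medium: omit medium for node disk; medium: Memory would use tmpfs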
• [SLOW TEST:12.118 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":135,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:15.193: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:15.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4632" for this suite. 
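The Events API spec above walks the usual verb set (create, list, field-selected list, get, patch, update, delete) against the API directly. Roughly the same operations can be driven from kubectl; the field selector and the way an event is picked below are assumptions for illustration, not taken from the test:

kubectl get events -A                                                     # list in all namespaces
kubectl get events -n default --field-selector involvedObject.kind=Pod   # field-selection filtering
EVENT=$(kubectl get events -n default -o name | head -n1)                 # pick one event to inspect
kubectl get "$EVENT" -n default -o yaml
kubectl delete "$EVENT" -n default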
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":6,"skipped":155,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:40.012: INFO: >>> kubeConfig: /tmp/kubeconfig-3654524011 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a replication controller Mar 2 00:18:40.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 create -f -' Mar 2 00:18:40.612: INFO: stderr: "" Mar 2 00:18:40.612: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 2 00:18:40.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:18:40.662: INFO: stderr: "" Mar 2 00:18:40.662: INFO: stdout: "update-demo-nautilus-qbc7w update-demo-nautilus-ptnt2 " Mar 2 00:18:40.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:18:40.704: INFO: stderr: "" Mar 2 00:18:40.704: INFO: stdout: "" Mar 2 00:18:40.704: INFO: update-demo-nautilus-qbc7w is created but not running Mar 2 00:18:45.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:18:45.748: INFO: stderr: "" Mar 2 00:18:45.748: INFO: stdout: "update-demo-nautilus-qbc7w update-demo-nautilus-ptnt2 " Mar 2 00:18:45.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:18:45.791: INFO: stderr: "" Mar 2 00:18:45.791: INFO: stdout: "" Mar 2 00:18:45.791: INFO: update-demo-nautilus-qbc7w is created but not running Mar 2 00:18:50.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:18:50.836: INFO: stderr: "" Mar 2 00:18:50.836: INFO: stdout: "update-demo-nautilus-qbc7w update-demo-nautilus-ptnt2 " Mar 2 00:18:50.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:18:50.878: INFO: stderr: "" Mar 2 00:18:50.878: INFO: stdout: "" Mar 2 00:18:50.878: INFO: update-demo-nautilus-qbc7w is created but not running Mar 2 00:18:55.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:18:55.922: INFO: stderr: "" Mar 2 00:18:55.922: INFO: stdout: "update-demo-nautilus-qbc7w update-demo-nautilus-ptnt2 " Mar 2 00:18:55.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:18:55.966: INFO: stderr: "" Mar 2 00:18:55.966: INFO: stdout: "" Mar 2 00:18:55.966: INFO: update-demo-nautilus-qbc7w is created but not running Mar 2 00:19:00.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:01.013: INFO: stderr: "" Mar 2 00:19:01.013: INFO: stdout: "update-demo-nautilus-qbc7w update-demo-nautilus-ptnt2 " Mar 2 00:19:01.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:01.056: INFO: stderr: "" Mar 2 00:19:01.056: INFO: stdout: "" Mar 2 00:19:01.056: INFO: update-demo-nautilus-qbc7w is created but not running Mar 2 00:19:06.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:06.101: INFO: stderr: "" Mar 2 00:19:06.101: INFO: stdout: "update-demo-nautilus-qbc7w update-demo-nautilus-ptnt2 " Mar 2 00:19:06.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:06.145: INFO: stderr: "" Mar 2 00:19:06.145: INFO: stdout: "" Mar 2 00:19:06.145: INFO: update-demo-nautilus-qbc7w is created but not running Mar 2 00:19:11.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:11.190: INFO: stderr: "" Mar 2 00:19:11.190: INFO: stdout: "update-demo-nautilus-qbc7w update-demo-nautilus-ptnt2 " Mar 2 00:19:11.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:11.234: INFO: stderr: "" Mar 2 00:19:11.234: INFO: stdout: "" Mar 2 00:19:11.234: INFO: update-demo-nautilus-qbc7w is created but not running Mar 2 00:19:16.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:16.284: INFO: stderr: "" Mar 2 00:19:16.284: INFO: stdout: "update-demo-nautilus-ptnt2 update-demo-nautilus-qbc7w " Mar 2 00:19:16.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-ptnt2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:16.330: INFO: stderr: "" Mar 2 00:19:16.330: INFO: stdout: "true" Mar 2 00:19:16.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-ptnt2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:19:16.379: INFO: stderr: "" Mar 2 00:19:16.379: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:19:16.379: INFO: validating pod update-demo-nautilus-ptnt2 Mar 2 00:19:16.386: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:19:16.386: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 2 00:19:16.386: INFO: update-demo-nautilus-ptnt2 is verified up and running Mar 2 00:19:16.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:16.429: INFO: stderr: "" Mar 2 00:19:16.429: INFO: stdout: "true" Mar 2 00:19:16.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods update-demo-nautilus-qbc7w -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:19:16.470: INFO: stderr: "" Mar 2 00:19:16.470: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:19:16.470: INFO: validating pod update-demo-nautilus-qbc7w Mar 2 00:19:16.477: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:19:16.477: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 2 00:19:16.477: INFO: update-demo-nautilus-qbc7w is verified up and running STEP: using delete to clean up resources Mar 2 00:19:16.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 delete --grace-period=0 --force -f -' Mar 2 00:19:16.524: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:19:16.525: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 2 00:19:16.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get rc,svc -l name=update-demo --no-headers' Mar 2 00:19:16.586: INFO: stderr: "No resources found in kubectl-4390 namespace.\n" Mar 2 00:19:16.586: INFO: stdout: "" Mar 2 00:19:16.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3654524011 --namespace=kubectl-4390 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 2 00:19:16.635: INFO: stderr: "" Mar 2 00:19:16.635: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:16.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4390" for this suite. 
• [SLOW TEST:36.633 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":3,"skipped":110,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:15.295: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:17.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-5943" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":7,"skipped":159,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:17.396: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:17.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1384" for this suite. 
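The ResourceQuota spec above only needs create, get, update and delete; the same round trip by hand, with an assumed quota name and hard limits:

kubectl create quota quota-demo -n default --hard=pods=2,services=1      # create
kubectl get resourcequota quota-demo -n default                          # get
kubectl patch resourcequota quota-demo -n default --type=merge \
  -p '{"spec":{"hard":{"pods":"3"}}}'                                    # update the spec
kubectl delete resourcequota quota-demo -n default
kubectl get resourcequota quota-demo -n default                          # now expected: NotFound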
• ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:45.006: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 2 00:18:45.055: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:47.063: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:49.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:51.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:53.061: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:55.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:57.062: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:59.059: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:01.063: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:03.059: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:05.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:07.059: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:09.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:11.061: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:13.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:15.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:17.060: INFO: The status of Pod pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 2 00:19:17.574: INFO: Successfully updated pod 
"pod-update-7357c76e-8bab-44e8-aed9-c6d56ee067e0" STEP: verifying the updated pod is in kubernetes Mar 2 00:19:17.581: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:17.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4096" for this suite. • [SLOW TEST:32.590 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":234,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:00.244: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 2 00:19:18.377: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:18.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2450" for this suite. 
• [SLOW TEST:18.168 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":90,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:18.414: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:18.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7320" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":8,"skipped":93,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:10.709: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 2 00:19:18.782: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:18.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2278" for this suite. • [SLOW TEST:8.121 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":221,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:29.935: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-4632 [It] should list, patch and delete a collection of StatefulSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:18:29.974: INFO: Found 0 stateful pods, waiting for 1 Mar 2 00:18:39.982: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:18:49.979: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:18:59.982: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: patching the StatefulSet Mar 2 00:19:00.008: INFO: Found 1 stateful pods, waiting for 2 Mar 2 00:19:10.013: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:10.013: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:20.014: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:20.014: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true STEP: Listing all StatefulSets STEP: Delete all of the StatefulSets STEP: Verify that StatefulSets have been deleted [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Mar 2 00:19:20.037: INFO: Deleting all statefulset in ns statefulset-4632 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:20.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4632" for this suite. • [SLOW TEST:50.134 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 should list, patch and delete a collection of StatefulSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":7,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:12.731: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 STEP: create the container to handle the HTTPGet hook request. 
Mar 2 00:18:12.792: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:14.795: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:16.795: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:18.795: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:20.796: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:22.795: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:24.796: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:26.797: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:28.795: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:30.797: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:32.799: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:34.796: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:36.796: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:38.796: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:40.798: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the pod with lifecycle hook Mar 2 00:18:40.812: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:42.816: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:44.815: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:46.817: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:48.816: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:50.816: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:52.815: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:54.816: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:56.816: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:58.815: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:00.826: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to 
be Running (with Ready = true) Mar 2 00:19:02.817: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Mar 2 00:19:02.827: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:02.830: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:04.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:04.835: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:06.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:06.835: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:08.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:08.835: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:10.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:10.835: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:12.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:12.837: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:14.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:14.838: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:16.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:16.836: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:18.831: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:18.837: INFO: Pod pod-with-prestop-exec-hook still exists Mar 2 00:19:20.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 2 00:19:20.842: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:20.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9123" for this suite. 
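The lifecycle-hook spec above registers a preStop exec handler and then deletes the pod, asserting that the handler ran before the container was torn down. A stripped-down sketch with assumed names, image and hook command; the suite verifies the hook by having it call back into the separate handler pod created earlier, whereas this version only shows where the hook hangs off the spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran > /tmp/prestop.log; sleep 2"]
EOF
kubectl delete pod prestop-demo   # deletion triggers the preStop exec hook before SIGTERM is delivered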
• [SLOW TEST:68.135 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:40.747: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-8003 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating statefulset ss in namespace statefulset-8003 Mar 2 00:18:40.783: INFO: Found 0 stateful pods, waiting for 1 Mar 2 00:18:50.786: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:00.788: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Mar 2 00:19:00.826: INFO: Deleting all statefulset in ns statefulset-8003 Mar 2 00:19:00.829: INFO: Scaling statefulset ss to 0 Mar 2 00:19:20.856: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:19:20.860: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:20.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8003" for this suite. 
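The scale-subresource spec above reads and writes /scale rather than the StatefulSet object itself. The same endpoint can be poked with kubectl; the namespace and name below follow the test (ss in statefulset-8003), but any StatefulSet would do:

kubectl get --raw /apis/apps/v1/namespaces/statefulset-8003/statefulsets/ss/scale   # read the Scale object
kubectl -n statefulset-8003 scale statefulset ss --replicas=2                       # write via the scale subresource
kubectl -n statefulset-8003 get statefulset ss -o jsonpath='{.spec.replicas}'       # observe the change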
• [SLOW TEST:40.165 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":5,"skipped":110,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:20.872: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:20.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1948352521 --namespace=kubectl-1655 version' Mar 2 00:19:20.945: INFO: stderr: "" Mar 2 00:19:20.945: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.17\", GitCommit:\"953be8927218ec8067e1af2641e540238ffd7576\", GitTreeState:\"clean\", BuildDate:\"2023-02-22T13:34:27Z\", GoVersion:\"go1.19.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.16+k3s1\", GitCommit:\"64b0feeb36c2a26976a364a110f23ebcf971f976\", GitTreeState:\"clean\", BuildDate:\"2023-01-26T00:09:39Z\", GoVersion:\"go1.19.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:20.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1655" for this suite. 
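The Kubectl version check above only parses the client and server blocks printed by kubectl version; structured output makes the same data easier to consume by hand:

kubectl version --output=json    # client and server version.Info as JSON (yaml is also accepted)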
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":7,"skipped":93,"failed":0} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:36.520: INFO: >>> kubeConfig: /tmp/kubeconfig-2645107774 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 STEP: creating an pod Mar 2 00:18:36.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 2 00:18:36.604: INFO: stderr: "" Mar 2 00:18:36.604: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Waiting for log generator to start. Mar 2 00:18:36.604: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 2 00:18:36.604: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3769" to be "running and ready, or succeeded" Mar 2 00:18:36.612: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563236ms Mar 2 00:18:38.617: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012997901s Mar 2 00:18:40.625: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021058667s Mar 2 00:18:42.629: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025701843s Mar 2 00:18:44.634: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030381196s Mar 2 00:18:46.638: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034320571s Mar 2 00:18:48.642: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03844171s Mar 2 00:18:50.647: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043411728s Mar 2 00:18:52.653: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 16.049121889s Mar 2 00:18:54.657: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.053652147s Mar 2 00:18:56.662: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.058569257s Mar 2 00:18:58.665: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 22.061708319s Mar 2 00:19:00.669: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 24.065723282s Mar 2 00:19:02.674: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 26.070230426s Mar 2 00:19:04.681: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.077500564s Mar 2 00:19:06.688: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 30.084665859s Mar 2 00:19:08.693: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 32.089556343s Mar 2 00:19:10.698: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 34.09421228s Mar 2 00:19:12.703: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 36.099296958s Mar 2 00:19:14.708: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 38.104735785s Mar 2 00:19:16.721: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 40.117743338s Mar 2 00:19:18.725: INFO: Pod "logs-generator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.12128575s Mar 2 00:19:18.725: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 2 00:19:18.725: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Mar 2 00:19:18.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 logs logs-generator logs-generator' Mar 2 00:19:18.776: INFO: stderr: "" Mar 2 00:19:18.776: INFO: stdout: "I0302 00:18:38.407660 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/z5s2 566\nI0302 00:18:38.608001 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/kjmf 252\nI0302 00:18:38.808311 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ghm4 250\nI0302 00:18:39.008516 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/vkwp 456\nI0302 00:18:39.207762 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/5fv 594\nI0302 00:18:39.408054 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/bcwx 224\nI0302 00:18:39.608358 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/5kd 298\nI0302 00:18:39.808658 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/2qxr 588\nI0302 00:18:40.007862 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/fqc 359\nI0302 00:18:40.208169 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/g9m 379\nI0302 00:18:40.408476 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/z77 510\nI0302 00:18:40.607728 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/jk5 562\nI0302 00:18:40.808034 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/fd7 241\nI0302 00:18:41.008328 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/j4m 339\nI0302 00:18:41.208646 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/ghkp 445\nI0302 00:18:41.407901 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/wdk 290\nI0302 00:18:41.608204 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/srk 276\nI0302 00:18:41.808513 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/6zd 431\nI0302 00:18:42.007758 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/vj8h 484\nI0302 00:18:42.208057 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/29f 578\nI0302 00:18:42.408363 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/5b6 205\nI0302 00:18:42.608665 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/rx6p 545\nI0302 00:18:42.807970 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/cct 313\nI0302 00:18:43.008275 1 logs_generator.go:76] 23 POST 
/api/v1/namespaces/ns/pods/qjc 253\nI0302 00:18:43.208572 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/pb6t 554\nI0302 00:18:43.407819 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/chxv 266\nI0302 00:18:43.608143 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/vq8 305\nI0302 00:18:43.808436 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/4f2v 296\nI0302 00:18:44.008729 1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/nh9 304\nI0302 00:18:44.208025 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/vw6 358\nI0302 00:18:44.408342 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/fxfq 523\nI0302 00:18:44.608637 1 logs_generator.go:76] 31 POST /api/v1/namespaces/ns/pods/nqb 313\nI0302 00:18:44.807873 1 logs_generator.go:76] 32 POST /api/v1/namespaces/kube-system/pods/8xjp 258\nI0302 00:18:45.008166 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/knbx 439\nI0302 00:18:45.208474 1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/pvg 477\nI0302 00:18:45.407700 1 logs_generator.go:76] 35 POST /api/v1/namespaces/default/pods/jgd 433\nI0302 00:18:45.607988 1 logs_generator.go:76] 36 PUT /api/v1/namespaces/ns/pods/jmlb 405\nI0302 00:18:45.808282 1 logs_generator.go:76] 37 POST /api/v1/namespaces/kube-system/pods/snhl 406\nI0302 00:18:46.008590 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/default/pods/gv4f 450\nI0302 00:18:46.207823 1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/dhj 475\nI0302 00:18:46.408125 1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/ffc6 518\nI0302 00:18:46.608409 1 logs_generator.go:76] 41 GET /api/v1/namespaces/kube-system/pods/f2h 552\nI0302 00:18:46.808718 1 logs_generator.go:76] 42 POST /api/v1/namespaces/default/pods/t8m 336\nI0302 00:18:47.008015 1 logs_generator.go:76] 43 POST /api/v1/namespaces/default/pods/r7cc 531\nI0302 00:18:47.208306 1 logs_generator.go:76] 44 GET /api/v1/namespaces/ns/pods/htt 518\nI0302 00:18:47.408598 1 logs_generator.go:76] 45 POST /api/v1/namespaces/default/pods/fmw 537\nI0302 00:18:47.607842 1 logs_generator.go:76] 46 GET /api/v1/namespaces/default/pods/g6g 386\nI0302 00:18:47.808130 1 logs_generator.go:76] 47 PUT /api/v1/namespaces/default/pods/lnb 529\nI0302 00:18:48.008432 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/default/pods/n8d8 306\nI0302 00:18:48.208728 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/kube-system/pods/wp4h 341\nI0302 00:18:48.408030 1 logs_generator.go:76] 50 PUT /api/v1/namespaces/ns/pods/rhlj 382\nI0302 00:18:48.608340 1 logs_generator.go:76] 51 GET /api/v1/namespaces/default/pods/v5l 474\nI0302 00:18:48.808634 1 logs_generator.go:76] 52 POST /api/v1/namespaces/kube-system/pods/q2j6 457\nI0302 00:18:49.007870 1 logs_generator.go:76] 53 POST /api/v1/namespaces/default/pods/zjp 473\nI0302 00:18:49.208170 1 logs_generator.go:76] 54 PUT /api/v1/namespaces/ns/pods/bzmv 411\nI0302 00:18:49.408473 1 logs_generator.go:76] 55 PUT /api/v1/namespaces/ns/pods/6v4m 568\nI0302 00:18:49.607714 1 logs_generator.go:76] 56 PUT /api/v1/namespaces/kube-system/pods/v258 497\nI0302 00:18:49.808138 1 logs_generator.go:76] 57 POST /api/v1/namespaces/default/pods/xlp6 434\nI0302 00:18:50.008446 1 logs_generator.go:76] 58 PUT /api/v1/namespaces/kube-system/pods/lz2h 449\nI0302 00:18:50.207690 1 logs_generator.go:76] 59 PUT /api/v1/namespaces/default/pods/hsk 228\nI0302 00:18:50.407997 1 logs_generator.go:76] 60 PUT /api/v1/namespaces/default/pods/2ksn 267\nI0302 
00:18:50.608299 1 logs_generator.go:76] 61 POST /api/v1/namespaces/default/pods/q494 278\nI0302 00:18:50.808600 1 logs_generator.go:76] 62 GET /api/v1/namespaces/kube-system/pods/lph 295\nI0302 00:18:51.007850 1 logs_generator.go:76] 63 PUT /api/v1/namespaces/kube-system/pods/slv 278\nI0302 00:18:51.208150 1 logs_generator.go:76] 64 GET /api/v1/namespaces/ns/pods/xtl 551\nI0302 00:18:51.408454 1 logs_generator.go:76] 65 GET /api/v1/namespaces/default/pods/8b8g 469\nI0302 00:18:51.607689 1 logs_generator.go:76] 66 POST /api/v1/namespaces/ns/pods/b6cw 583\nI0302 00:18:51.807992 1 logs_generator.go:76] 67 GET /api/v1/namespaces/default/pods/xks 298\nI0302 00:18:52.008282 1 logs_generator.go:76] 68 POST /api/v1/namespaces/ns/pods/qhv 507\nI0302 00:18:52.208553 1 logs_generator.go:76] 69 GET /api/v1/namespaces/kube-system/pods/62n7 377\nI0302 00:18:52.407773 1 logs_generator.go:76] 70 PUT /api/v1/namespaces/ns/pods/mrvt 578\nI0302 00:18:52.608075 1 logs_generator.go:76] 71 PUT /api/v1/namespaces/kube-system/pods/9jg 524\nI0302 00:18:52.808359 1 logs_generator.go:76] 72 PUT /api/v1/namespaces/ns/pods/c7bj 300\nI0302 00:18:53.008660 1 logs_generator.go:76] 73 PUT /api/v1/namespaces/kube-system/pods/45c6 491\nI0302 00:18:53.207892 1 logs_generator.go:76] 74 POST /api/v1/namespaces/default/pods/2kc 413\nI0302 00:18:53.408173 1 logs_generator.go:76] 75 POST /api/v1/namespaces/default/pods/2bl 527\nI0302 00:18:53.608479 1 logs_generator.go:76] 76 GET /api/v1/namespaces/ns/pods/x6w9 337\nI0302 00:18:53.807714 1 logs_generator.go:76] 77 PUT /api/v1/namespaces/default/pods/bmqw 447\nI0302 00:18:54.008005 1 logs_generator.go:76] 78 POST /api/v1/namespaces/kube-system/pods/lwt 327\nI0302 00:18:54.208295 1 logs_generator.go:76] 79 GET /api/v1/namespaces/kube-system/pods/f4t 254\nI0302 00:18:54.408588 1 logs_generator.go:76] 80 GET /api/v1/namespaces/kube-system/pods/tw29 381\nI0302 00:18:54.607821 1 logs_generator.go:76] 81 PUT /api/v1/namespaces/default/pods/7kk 313\nI0302 00:18:54.808111 1 logs_generator.go:76] 82 GET /api/v1/namespaces/ns/pods/hr7m 519\nI0302 00:18:55.008412 1 logs_generator.go:76] 83 GET /api/v1/namespaces/kube-system/pods/h2z 467\nI0302 00:18:55.208711 1 logs_generator.go:76] 84 GET /api/v1/namespaces/kube-system/pods/68z2 223\nI0302 00:18:55.407997 1 logs_generator.go:76] 85 PUT /api/v1/namespaces/ns/pods/nlv 448\nI0302 00:18:55.608292 1 logs_generator.go:76] 86 PUT /api/v1/namespaces/ns/pods/g4q 213\nI0302 00:18:55.808597 1 logs_generator.go:76] 87 GET /api/v1/namespaces/ns/pods/m26 251\nI0302 00:18:56.007826 1 logs_generator.go:76] 88 POST /api/v1/namespaces/kube-system/pods/dmg 440\nI0302 00:18:56.208134 1 logs_generator.go:76] 89 POST /api/v1/namespaces/default/pods/qg5 560\nI0302 00:18:56.408419 1 logs_generator.go:76] 90 GET /api/v1/namespaces/kube-system/pods/92z 316\nI0302 00:18:56.608715 1 logs_generator.go:76] 91 GET /api/v1/namespaces/kube-system/pods/xkz 520\nI0302 00:18:56.808020 1 logs_generator.go:76] 92 POST /api/v1/namespaces/ns/pods/b67h 584\nI0302 00:18:57.008337 1 logs_generator.go:76] 93 PUT /api/v1/namespaces/default/pods/8lfl 341\nI0302 00:18:57.208581 1 logs_generator.go:76] 94 POST /api/v1/namespaces/kube-system/pods/dhrk 369\nI0302 00:18:57.407821 1 logs_generator.go:76] 95 POST /api/v1/namespaces/default/pods/cj6l 360\nI0302 00:18:57.608110 1 logs_generator.go:76] 96 POST /api/v1/namespaces/kube-system/pods/p8c2 549\nI0302 00:18:57.808405 1 logs_generator.go:76] 97 POST /api/v1/namespaces/kube-system/pods/6j5 381\nI0302 00:18:58.008699 1 
logs_generator.go:76] 98 POST /api/v1/namespaces/ns/pods/79p 431\nI0302 00:18:58.207995 1 logs_generator.go:76] 99 GET /api/v1/namespaces/kube-system/pods/745 443\n" STEP: limiting log lines Mar 2 00:19:18.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 logs logs-generator logs-generator --tail=1' Mar 2 00:19:18.839: INFO: stderr: "" Mar 2 00:19:18.839: INFO: stdout: "I0302 00:18:58.207995 1 logs_generator.go:76] 99 GET /api/v1/namespaces/kube-system/pods/745 443\n" Mar 2 00:19:18.839: INFO: got output "I0302 00:18:58.207995 1 logs_generator.go:76] 99 GET /api/v1/namespaces/kube-system/pods/745 443\n" STEP: limiting log bytes Mar 2 00:19:18.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 logs logs-generator logs-generator --limit-bytes=1' Mar 2 00:19:18.910: INFO: stderr: "" Mar 2 00:19:18.910: INFO: stdout: "I" Mar 2 00:19:18.910: INFO: got output "I" STEP: exposing timestamps Mar 2 00:19:18.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 logs logs-generator logs-generator --tail=1 --timestamps' Mar 2 00:19:18.978: INFO: stderr: "" Mar 2 00:19:18.978: INFO: stdout: "2023-03-02T00:18:58.208060995Z I0302 00:18:58.207995 1 logs_generator.go:76] 99 GET /api/v1/namespaces/kube-system/pods/745 443\n" Mar 2 00:19:18.978: INFO: got output "2023-03-02T00:18:58.208060995Z I0302 00:18:58.207995 1 logs_generator.go:76] 99 GET /api/v1/namespaces/kube-system/pods/745 443\n" STEP: restricting to a time range Mar 2 00:19:21.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 logs logs-generator logs-generator --since=1s' Mar 2 00:19:21.535: INFO: stderr: "" Mar 2 00:19:21.535: INFO: stdout: "" Mar 2 00:19:21.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 logs logs-generator logs-generator --since=24h' Mar 2 00:19:21.587: INFO: stderr: "" Mar 2 00:19:21.587: INFO: stdout: "I0302 00:18:38.407660 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/z5s2 566\nI0302 00:18:38.608001 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/kjmf 252\nI0302 00:18:38.808311 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/ghm4 250\nI0302 00:18:39.008516 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/vkwp 456\nI0302 00:18:39.207762 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/5fv 594\nI0302 00:18:39.408054 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/bcwx 224\nI0302 00:18:39.608358 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/5kd 298\nI0302 00:18:39.808658 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/2qxr 588\nI0302 00:18:40.007862 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/fqc 359\nI0302 00:18:40.208169 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/g9m 379\nI0302 00:18:40.408476 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/z77 510\nI0302 00:18:40.607728 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/jk5 562\nI0302 00:18:40.808034 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/fd7 241\nI0302 00:18:41.008328 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/j4m 339\nI0302 00:18:41.208646 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/ghkp 445\nI0302 00:18:41.407901 1 logs_generator.go:76] 15 
GET /api/v1/namespaces/kube-system/pods/wdk 290\nI0302 00:18:41.608204 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/srk 276\nI0302 00:18:41.808513 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/6zd 431\nI0302 00:18:42.007758 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/vj8h 484\nI0302 00:18:42.208057 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/29f 578\nI0302 00:18:42.408363 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/5b6 205\nI0302 00:18:42.608665 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/rx6p 545\nI0302 00:18:42.807970 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/cct 313\nI0302 00:18:43.008275 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/qjc 253\nI0302 00:18:43.208572 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/pb6t 554\nI0302 00:18:43.407819 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/chxv 266\nI0302 00:18:43.608143 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/vq8 305\nI0302 00:18:43.808436 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/4f2v 296\nI0302 00:18:44.008729 1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/nh9 304\nI0302 00:18:44.208025 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/vw6 358\nI0302 00:18:44.408342 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/fxfq 523\nI0302 00:18:44.608637 1 logs_generator.go:76] 31 POST /api/v1/namespaces/ns/pods/nqb 313\nI0302 00:18:44.807873 1 logs_generator.go:76] 32 POST /api/v1/namespaces/kube-system/pods/8xjp 258\nI0302 00:18:45.008166 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/knbx 439\nI0302 00:18:45.208474 1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/pvg 477\nI0302 00:18:45.407700 1 logs_generator.go:76] 35 POST /api/v1/namespaces/default/pods/jgd 433\nI0302 00:18:45.607988 1 logs_generator.go:76] 36 PUT /api/v1/namespaces/ns/pods/jmlb 405\nI0302 00:18:45.808282 1 logs_generator.go:76] 37 POST /api/v1/namespaces/kube-system/pods/snhl 406\nI0302 00:18:46.008590 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/default/pods/gv4f 450\nI0302 00:18:46.207823 1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/dhj 475\nI0302 00:18:46.408125 1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/ffc6 518\nI0302 00:18:46.608409 1 logs_generator.go:76] 41 GET /api/v1/namespaces/kube-system/pods/f2h 552\nI0302 00:18:46.808718 1 logs_generator.go:76] 42 POST /api/v1/namespaces/default/pods/t8m 336\nI0302 00:18:47.008015 1 logs_generator.go:76] 43 POST /api/v1/namespaces/default/pods/r7cc 531\nI0302 00:18:47.208306 1 logs_generator.go:76] 44 GET /api/v1/namespaces/ns/pods/htt 518\nI0302 00:18:47.408598 1 logs_generator.go:76] 45 POST /api/v1/namespaces/default/pods/fmw 537\nI0302 00:18:47.607842 1 logs_generator.go:76] 46 GET /api/v1/namespaces/default/pods/g6g 386\nI0302 00:18:47.808130 1 logs_generator.go:76] 47 PUT /api/v1/namespaces/default/pods/lnb 529\nI0302 00:18:48.008432 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/default/pods/n8d8 306\nI0302 00:18:48.208728 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/kube-system/pods/wp4h 341\nI0302 00:18:48.408030 1 logs_generator.go:76] 50 PUT /api/v1/namespaces/ns/pods/rhlj 382\nI0302 00:18:48.608340 1 logs_generator.go:76] 51 GET /api/v1/namespaces/default/pods/v5l 474\nI0302 00:18:48.808634 1 logs_generator.go:76] 52 POST /api/v1/namespaces/kube-system/pods/q2j6 457\nI0302 00:18:49.007870 1 
logs_generator.go:76] 53 POST /api/v1/namespaces/default/pods/zjp 473\nI0302 00:18:49.208170 1 logs_generator.go:76] 54 PUT /api/v1/namespaces/ns/pods/bzmv 411\nI0302 00:18:49.408473 1 logs_generator.go:76] 55 PUT /api/v1/namespaces/ns/pods/6v4m 568\nI0302 00:18:49.607714 1 logs_generator.go:76] 56 PUT /api/v1/namespaces/kube-system/pods/v258 497\nI0302 00:18:49.808138 1 logs_generator.go:76] 57 POST /api/v1/namespaces/default/pods/xlp6 434\nI0302 00:18:50.008446 1 logs_generator.go:76] 58 PUT /api/v1/namespaces/kube-system/pods/lz2h 449\nI0302 00:18:50.207690 1 logs_generator.go:76] 59 PUT /api/v1/namespaces/default/pods/hsk 228\nI0302 00:18:50.407997 1 logs_generator.go:76] 60 PUT /api/v1/namespaces/default/pods/2ksn 267\nI0302 00:18:50.608299 1 logs_generator.go:76] 61 POST /api/v1/namespaces/default/pods/q494 278\nI0302 00:18:50.808600 1 logs_generator.go:76] 62 GET /api/v1/namespaces/kube-system/pods/lph 295\nI0302 00:18:51.007850 1 logs_generator.go:76] 63 PUT /api/v1/namespaces/kube-system/pods/slv 278\nI0302 00:18:51.208150 1 logs_generator.go:76] 64 GET /api/v1/namespaces/ns/pods/xtl 551\nI0302 00:18:51.408454 1 logs_generator.go:76] 65 GET /api/v1/namespaces/default/pods/8b8g 469\nI0302 00:18:51.607689 1 logs_generator.go:76] 66 POST /api/v1/namespaces/ns/pods/b6cw 583\nI0302 00:18:51.807992 1 logs_generator.go:76] 67 GET /api/v1/namespaces/default/pods/xks 298\nI0302 00:18:52.008282 1 logs_generator.go:76] 68 POST /api/v1/namespaces/ns/pods/qhv 507\nI0302 00:18:52.208553 1 logs_generator.go:76] 69 GET /api/v1/namespaces/kube-system/pods/62n7 377\nI0302 00:18:52.407773 1 logs_generator.go:76] 70 PUT /api/v1/namespaces/ns/pods/mrvt 578\nI0302 00:18:52.608075 1 logs_generator.go:76] 71 PUT /api/v1/namespaces/kube-system/pods/9jg 524\nI0302 00:18:52.808359 1 logs_generator.go:76] 72 PUT /api/v1/namespaces/ns/pods/c7bj 300\nI0302 00:18:53.008660 1 logs_generator.go:76] 73 PUT /api/v1/namespaces/kube-system/pods/45c6 491\nI0302 00:18:53.207892 1 logs_generator.go:76] 74 POST /api/v1/namespaces/default/pods/2kc 413\nI0302 00:18:53.408173 1 logs_generator.go:76] 75 POST /api/v1/namespaces/default/pods/2bl 527\nI0302 00:18:53.608479 1 logs_generator.go:76] 76 GET /api/v1/namespaces/ns/pods/x6w9 337\nI0302 00:18:53.807714 1 logs_generator.go:76] 77 PUT /api/v1/namespaces/default/pods/bmqw 447\nI0302 00:18:54.008005 1 logs_generator.go:76] 78 POST /api/v1/namespaces/kube-system/pods/lwt 327\nI0302 00:18:54.208295 1 logs_generator.go:76] 79 GET /api/v1/namespaces/kube-system/pods/f4t 254\nI0302 00:18:54.408588 1 logs_generator.go:76] 80 GET /api/v1/namespaces/kube-system/pods/tw29 381\nI0302 00:18:54.607821 1 logs_generator.go:76] 81 PUT /api/v1/namespaces/default/pods/7kk 313\nI0302 00:18:54.808111 1 logs_generator.go:76] 82 GET /api/v1/namespaces/ns/pods/hr7m 519\nI0302 00:18:55.008412 1 logs_generator.go:76] 83 GET /api/v1/namespaces/kube-system/pods/h2z 467\nI0302 00:18:55.208711 1 logs_generator.go:76] 84 GET /api/v1/namespaces/kube-system/pods/68z2 223\nI0302 00:18:55.407997 1 logs_generator.go:76] 85 PUT /api/v1/namespaces/ns/pods/nlv 448\nI0302 00:18:55.608292 1 logs_generator.go:76] 86 PUT /api/v1/namespaces/ns/pods/g4q 213\nI0302 00:18:55.808597 1 logs_generator.go:76] 87 GET /api/v1/namespaces/ns/pods/m26 251\nI0302 00:18:56.007826 1 logs_generator.go:76] 88 POST /api/v1/namespaces/kube-system/pods/dmg 440\nI0302 00:18:56.208134 1 logs_generator.go:76] 89 POST /api/v1/namespaces/default/pods/qg5 560\nI0302 00:18:56.408419 1 logs_generator.go:76] 90 GET 
/api/v1/namespaces/kube-system/pods/92z 316\nI0302 00:18:56.608715 1 logs_generator.go:76] 91 GET /api/v1/namespaces/kube-system/pods/xkz 520\nI0302 00:18:56.808020 1 logs_generator.go:76] 92 POST /api/v1/namespaces/ns/pods/b67h 584\nI0302 00:18:57.008337 1 logs_generator.go:76] 93 PUT /api/v1/namespaces/default/pods/8lfl 341\nI0302 00:18:57.208581 1 logs_generator.go:76] 94 POST /api/v1/namespaces/kube-system/pods/dhrk 369\nI0302 00:18:57.407821 1 logs_generator.go:76] 95 POST /api/v1/namespaces/default/pods/cj6l 360\nI0302 00:18:57.608110 1 logs_generator.go:76] 96 POST /api/v1/namespaces/kube-system/pods/p8c2 549\nI0302 00:18:57.808405 1 logs_generator.go:76] 97 POST /api/v1/namespaces/kube-system/pods/6j5 381\nI0302 00:18:58.008699 1 logs_generator.go:76] 98 POST /api/v1/namespaces/ns/pods/79p 431\nI0302 00:18:58.207995 1 logs_generator.go:76] 99 GET /api/v1/namespaces/kube-system/pods/745 443\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Mar 2 00:19:21.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2645107774 --namespace=kubectl-3769 delete pod logs-generator' Mar 2 00:19:21.648: INFO: stderr: "" Mar 2 00:19:21.648: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:21.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3769" for this suite. • [SLOW TEST:45.140 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1408 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:17.597: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:17.613: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 2 00:19:19.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-1303 --namespace=crd-publish-openapi-1303 create -f -' Mar 2 00:19:19.485: INFO: stderr: "" Mar 2 00:19:19.485: INFO: stdout: "e2e-test-crd-publish-openapi-4341-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 
2 00:19:19.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-1303 --namespace=crd-publish-openapi-1303 delete e2e-test-crd-publish-openapi-4341-crds test-cr' Mar 2 00:19:19.534: INFO: stderr: "" Mar 2 00:19:19.534: INFO: stdout: "e2e-test-crd-publish-openapi-4341-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 2 00:19:19.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-1303 --namespace=crd-publish-openapi-1303 apply -f -' Mar 2 00:19:19.650: INFO: stderr: "" Mar 2 00:19:19.650: INFO: stdout: "e2e-test-crd-publish-openapi-4341-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 2 00:19:19.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-1303 --namespace=crd-publish-openapi-1303 delete e2e-test-crd-publish-openapi-4341-crds test-cr' Mar 2 00:19:19.698: INFO: stderr: "" Mar 2 00:19:19.698: INFO: stdout: "e2e-test-crd-publish-openapi-4341-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 2 00:19:19.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4203063242 --namespace=crd-publish-openapi-1303 explain e2e-test-crd-publish-openapi-4341-crds' Mar 2 00:19:19.802: INFO: stderr: "" Mar 2 00:19:19.802: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4341-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:21.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1303" for this suite. 
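Note on the Kubectl logs spec earlier in this output: it exercises kubectl's log-filtering flags (--tail, --limit-bytes, --timestamps, --since). Client-go exposes the same filters through corev1.PodLogOptions. A minimal sketch, with the kubeconfig path and pod coordinates used only as placeholders (the suite generates its own temporary /tmp/kubeconfig-* files and namespaces):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        tail, limit, since := int64(1), int64(1), int64(86400) // --tail=1, --limit-bytes=1, --since=24h
        opts := &corev1.PodLogOptions{
            Container:    "logs-generator",
            TailLines:    &tail,
            LimitBytes:   &limit,
            SinceSeconds: &since,
            Timestamps:   true, // --timestamps
        }
        // Roughly: kubectl logs logs-generator logs-generator --tail=1 --limit-bytes=1 --since=24h --timestamps
        raw, err := client.CoreV1().Pods("kubectl-3769").GetLogs("logs-generator", opts).Do(context.TODO()).Raw()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(raw))
    }

The spec applies each flag in a separate kubectl invocation; the sketch combines them only to show the field mapping in one place.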
•S ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":11,"skipped":236,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:53.824: INFO: >>> kubeConfig: /tmp/kubeconfig-3970272666 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-upd-549933a6-514e-4f6c-90cb-c9601ec76f66 STEP: Creating the pod Mar 2 00:16:53.852: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:55.859: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:57.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:16:59.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:01.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:03.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:05.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:07.858: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:09.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:11.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:13.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:15.860: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:17.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:19.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:21.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:23.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 
00:17:25.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:27.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:29.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:31.858: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:33.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:35.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:37.859: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:39.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:41.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:43.866: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:45.859: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:47.858: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:49.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:51.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:53.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:55.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:57.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:59.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:01.858: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:03.855: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:05.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:07.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:09.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is 
Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:11.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:13.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:15.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:17.858: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:19.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:21.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:23.856: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:25.857: INFO: The status of Pod pod-configmaps-4cf9468b-7ece-4cc9-8704-0e08b38426d4 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-549933a6-514e-4f6c-90cb-c9601ec76f66 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:22.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-733" for this suite. • [SLOW TEST:148.312 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":78,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":8,"skipped":175,"failed":0} [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:17.458: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: mirroring a new custom Endpoint Mar 2 00:19:17.502: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Mar 2 00:19:19.512: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Mar 2 00:19:21.527: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:23.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-7655" for this suite. • [SLOW TEST:6.085 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":9,"skipped":175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:46.172: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod Mar 2 00:18:46.189: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:23.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6989" for this suite. 
• [SLOW TEST:37.526 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":6,"skipped":124,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:55.784: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-ecdea4d1-fab4-4d3b-99f0-a7b2f2ab3a69 STEP: Creating a pod to test consume secrets Mar 2 00:18:55.833: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201" in namespace "projected-4401" to be "Succeeded or Failed" Mar 2 00:18:55.839: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 5.517674ms Mar 2 00:18:57.843: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00989834s Mar 2 00:18:59.848: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014928427s Mar 2 00:19:01.853: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019649495s Mar 2 00:19:03.857: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023332238s Mar 2 00:19:05.861: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027711081s Mar 2 00:19:07.866: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032682015s Mar 2 00:19:09.871: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 14.03709087s Mar 2 00:19:11.877: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 16.043439414s Mar 2 00:19:13.893: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 18.059159105s Mar 2 00:19:15.898: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 20.064997375s Mar 2 00:19:17.902: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.068990229s Mar 2 00:19:19.907: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 24.07318838s Mar 2 00:19:21.911: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Pending", Reason="", readiness=false. Elapsed: 26.077761841s Mar 2 00:19:23.919: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.085578556s STEP: Saw pod success Mar 2 00:19:23.919: INFO: Pod "pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201" satisfied condition "Succeeded or Failed" Mar 2 00:19:23.923: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201 container projected-secret-volume-test: STEP: delete the pod Mar 2 00:19:23.955: INFO: Waiting for pod pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201 to disappear Mar 2 00:19:23.960: INFO: Pod pod-projected-secrets-e84d83ea-e55e-4de6-8321-ba0d56857201 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:23.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4401" for this suite. • [SLOW TEST:28.186 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":8,"skipped":164,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:13.663: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: set up a multi version CRD Mar 2 00:19:13.679: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:25.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9044" for this suite. 
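The CustomResourcePublishOpenAPI spec above marks one CRD version as not served and then verifies its definition is dropped from the published OpenAPI spec while the other version is unchanged. A sketch of a two-version CRD in that shape, using the apiextensions v1 types; the group, kind, and field values are placeholders rather than the generated e2e-test names:

    package main

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
        schema := &apiextensionsv1.CustomResourceValidation{
            OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                Type:                   "object",
                XPreserveUnknownFields: boolPtr(true),
            },
        }
        return &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
                    // v1 is no longer served by the API server, so its definition is
                    // removed from the published OpenAPI document.
                    {Name: "v1", Served: false, Storage: false, Schema: schema},
                    // v2 remains served and is the storage version.
                    {Name: "v2", Served: true, Storage: true, Schema: schema},
                },
            },
        }
    }

    func main() { _ = multiVersionCRD() }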
• [SLOW TEST:12.208 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":9,"skipped":164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:02.348: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:26.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7859" for this suite. 
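The Container Runtime blackbox spec below checks each container's RestartCount, Phase, Ready condition, and terminal State after it exits. The same fields can be read from PodStatus with client-go; a sketch, where the kubeconfig path, namespace, and pod name are placeholders rather than values from the test:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Placeholder namespace and pod name.
        pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "terminate-cmd-demo", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("phase:", pod.Status.Phase)
        for _, cs := range pod.Status.ContainerStatuses {
            fmt.Printf("container %s: restarts=%d ready=%v\n", cs.Name, cs.RestartCount, cs.Ready)
            if term := cs.State.Terminated; term != nil {
                fmt.Printf("  terminated: exitCode=%d reason=%s\n", term.ExitCode, term.Reason)
            }
        }
    }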
• [SLOW TEST:143.804 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:24.805: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod liveness-5eb30b52-1026-42cd-9f18-5869407cb659 in namespace container-probe-4531 Mar 2 00:16:40.827: INFO: Started pod liveness-5eb30b52-1026-42cd-9f18-5869407cb659 in namespace container-probe-4531 STEP: checking the pod's current state and verifying that restartCount is present Mar 2 00:16:40.829: INFO: Initial restart count of pod liveness-5eb30b52-1026-42cd-9f18-5869407cb659 is 0 Mar 2 00:17:02.877: INFO: Restart count of pod container-probe-4531/liveness-5eb30b52-1026-42cd-9f18-5869407cb659 is now 1 (22.047343455s elapsed) Mar 2 00:17:28.937: INFO: Restart count of pod container-probe-4531/liveness-5eb30b52-1026-42cd-9f18-5869407cb659 is now 2 (48.107771301s elapsed) Mar 2 00:17:34.950: INFO: Restart count of pod container-probe-4531/liveness-5eb30b52-1026-42cd-9f18-5869407cb659 is now 3 (54.121080323s elapsed) Mar 2 00:18:49.118: INFO: Restart count of pod container-probe-4531/liveness-5eb30b52-1026-42cd-9f18-5869407cb659 is now 4 (2m8.289103915s elapsed) Mar 2 00:19:27.219: INFO: Restart count of pod container-probe-4531/liveness-5eb30b52-1026-42cd-9f18-5869407cb659 is now 5 (2m46.389341598s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:27.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4531" for this suite. 
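The Probing container spec below watches a pod whose liveness probe keeps failing, so the kubelet repeatedly kills and restarts the container and the observed restartCount only ever grows. A sketch of a container with such a probe, written against the v1.23-era k8s.io/api types (where the embedded handler struct is ProbeHandler); the image and failing command are illustrative, not the test's own:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func livenessPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "liveness",
                    Image:   "busybox:1.35",
                    Command: []string{"sh", "-c", "sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        // The probe command always fails, so the kubelet restarts the
                        // container and status.restartCount increases monotonically.
                        ProbeHandler: corev1.ProbeHandler{
                            Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
    }

    func main() { _ = livenessPod() }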
• [SLOW TEST:182.507 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:59.387: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-87f9c85a-6480-4fed-b1dc-c865c841684d STEP: Creating a pod to test consume configMaps Mar 2 00:18:59.425: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465" in namespace "configmap-8344" to be "Succeeded or Failed" Mar 2 00:18:59.430: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90981ms Mar 2 00:19:01.434: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009242249s Mar 2 00:19:03.439: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014143564s Mar 2 00:19:05.443: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018035982s Mar 2 00:19:07.447: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022132735s Mar 2 00:19:09.450: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025442622s Mar 2 00:19:11.455: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029647315s Mar 2 00:19:13.460: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 14.035432114s Mar 2 00:19:15.465: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 16.039598879s Mar 2 00:19:17.470: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 18.044940773s Mar 2 00:19:19.475: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 20.049561936s Mar 2 00:19:21.480: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 22.054572414s Mar 2 00:19:23.492: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. Elapsed: 24.067048844s Mar 2 00:19:25.498: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.072818528s Mar 2 00:19:27.503: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.078013418s STEP: Saw pod success Mar 2 00:19:27.503: INFO: Pod "pod-configmaps-0a49364d-7dab-4051-8203-317b235df465" satisfied condition "Succeeded or Failed" Mar 2 00:19:27.511: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-configmaps-0a49364d-7dab-4051-8203-317b235df465 container agnhost-container: STEP: delete the pod Mar 2 00:19:27.554: INFO: Waiting for pod pod-configmaps-0a49364d-7dab-4051-8203-317b235df465 to disappear Mar 2 00:19:27.576: INFO: Pod pod-configmaps-0a49364d-7dab-4051-8203-317b235df465 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:27.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8344" for this suite. • [SLOW TEST:28.214 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:27.618: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:27.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6755" for this suite. 
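The Secrets spec above patches a secret and then lists secrets across namespaces by the label added in the patch. A client-go sketch of the same patch-then-list-by-label flow; the kubeconfig path, secret name, label, and data value are placeholders:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Strategic-merge patch adding a label and overwriting one data key
        // ("dmFsdWU=" is base64 for "value").
        patch := []byte(`{"metadata":{"labels":{"patched":"true"}},"data":{"key":"dmFsdWU="}}`)
        if _, err := client.CoreV1().Secrets("default").Patch(
            context.TODO(), "demo-secret", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }

        // List in all namespaces, filtering on the label set by the patch.
        list, err := client.CoreV1().Secrets("").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "patched=true"})
        if err != nil {
            panic(err)
        }
        fmt.Println("matching secrets:", len(list.Items))
    }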
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":8,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:08.102: INFO: >>> kubeConfig: /tmp/kubeconfig-2889160273 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 2 00:19:08.133: INFO: Waiting up to 5m0s for pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467" in namespace "emptydir-7050" to be "Succeeded or Failed" Mar 2 00:19:08.139: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 5.871687ms Mar 2 00:19:10.145: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011697823s Mar 2 00:19:12.149: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015273173s Mar 2 00:19:14.152: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018612593s Mar 2 00:19:16.158: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024085952s Mar 2 00:19:18.162: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028074619s Mar 2 00:19:20.166: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032336101s Mar 2 00:19:22.171: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 14.03764022s Mar 2 00:19:24.177: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 16.043376506s Mar 2 00:19:26.187: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Pending", Reason="", readiness=false. Elapsed: 18.053553229s Mar 2 00:19:28.193: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.059472197s STEP: Saw pod success Mar 2 00:19:28.193: INFO: Pod "pod-52f51db2-e1f5-4041-8f8c-152a27eb7467" satisfied condition "Succeeded or Failed" Mar 2 00:19:28.197: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-52f51db2-e1f5-4041-8f8c-152a27eb7467 container test-container: STEP: delete the pod Mar 2 00:19:28.220: INFO: Waiting for pod pod-52f51db2-e1f5-4041-8f8c-152a27eb7467 to disappear Mar 2 00:19:28.226: INFO: Pod pod-52f51db2-e1f5-4041-8f8c-152a27eb7467 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:28.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7050" for this suite. 
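The EmptyDir spec above mounts a memory-backed emptyDir (tmpfs) and checks the 0777 file mode from a root container. A pod-spec sketch with an emptyDir of medium Memory mounted into one container; the image, paths, and command are illustrative rather than the test's own:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func tmpfsEmptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    // Medium: Memory backs the volume with tmpfs instead of node disk.
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox:1.35",
                    Command: []string{"sh", "-c", "touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && ls -l /mnt/scratch"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "scratch",
                        MountPath: "/mnt/scratch",
                    }},
                }},
            },
        }
    }

    func main() { _ = tmpfsEmptyDirPod() }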
• [SLOW TEST:20.136 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":154,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:54.738: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:18:55.148: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:18:55.164: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:18:57.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:59.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:01.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:03.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:05.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:07.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:09.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:11.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:13.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:15.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:17.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:19.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:21.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:23.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:25.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 55, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:19:28.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a validating webhook configuration Mar 2 00:19:28.301: INFO: Waiting for webhook configuration to be ready... STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:28.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8881" for this suite. STEP: Destroying namespace "webhook-8881-markers" for this suite. 
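The steps above first update the ValidatingWebhookConfiguration's rules so CREATE is excluded, then patch them to re-include it. A minimal client-go sketch of the patch half, assuming a hypothetical configuration name and kubeconfig path, and assuming the configuration has a single webhook with a single rule:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the suite uses a per-worker temp file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// JSON patch that rewrites the operations of the first rule of the first
	// webhook, e.g. re-including CREATE after an update removed it.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)

	vwc, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Patch(
		context.TODO(), "e2e-test-webhook-config", // hypothetical configuration name
		types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("operations now:", vwc.Webhooks[0].Rules[0].Operations)
}

After each edit the spec tries to create a non-compliant configMap, which should be rejected only while CREATE is among the rule's operations.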
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:33.901 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":5,"skipped":96,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:04.638: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap configmap-2703/configmap-test-65b5cd35-80bf-46f1-a8dc-f57618990a0b STEP: Creating a pod to test consume configMaps Mar 2 00:19:04.705: INFO: Waiting up to 5m0s for pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12" in namespace "configmap-2703" to be "Succeeded or Failed" Mar 2 00:19:04.711: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138072ms Mar 2 00:19:06.724: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019072109s Mar 2 00:19:08.728: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0230336s Mar 2 00:19:10.734: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029490936s Mar 2 00:19:12.739: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034361328s Mar 2 00:19:14.746: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 10.04108269s Mar 2 00:19:16.749: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 12.044731476s Mar 2 00:19:18.754: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 14.049421197s Mar 2 00:19:20.759: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 16.054023358s Mar 2 00:19:22.765: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 18.060671921s Mar 2 00:19:24.770: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.065258206s Mar 2 00:19:26.775: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 22.070655839s Mar 2 00:19:28.793: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 24.088316058s Mar 2 00:19:30.797: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.092043635s STEP: Saw pod success Mar 2 00:19:30.797: INFO: Pod "pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12" satisfied condition "Succeeded or Failed" Mar 2 00:19:30.800: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12 container env-test: STEP: delete the pod Mar 2 00:19:30.822: INFO: Waiting for pod pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12 to disappear Mar 2 00:19:30.827: INFO: Pod pod-configmaps-04cdeacb-78a1-41f8-b446-efa80026bc12 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:30.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2703" for this suite. • [SLOW TEST:26.201 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":171,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:30.845: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: validating cluster-info Mar 2 00:19:30.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3274437701 --namespace=kubectl-7712 cluster-info' Mar 2 00:19:30.918: INFO: stderr: "" Mar 2 00:19:30.918: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.43.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:30.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7712" for this suite. 
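The cluster-info check above is just a kubectl invocation whose stdout must mention the control plane endpoint. A small Go sketch of the same call; the kubeconfig path is a placeholder for the per-worker temp file seen in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same shape of invocation as the log's "Running '/usr/local/bin/kubectl ... cluster-info'".
	out, err := exec.Command("kubectl", "--kubeconfig", "/tmp/kubeconfig", "cluster-info").CombinedOutput()
	if err != nil {
		log.Fatalf("cluster-info failed: %v\n%s", err, out)
	}
	// The spec only asserts that the control-plane line shows up in stdout.
	fmt.Printf("%s", out)
}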
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":10,"skipped":181,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:52.899: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars Mar 2 00:18:52.930: INFO: Waiting up to 5m0s for pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7" in namespace "downward-api-507" to be "Succeeded or Failed" Mar 2 00:18:52.934: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228247ms Mar 2 00:18:54.938: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008324021s Mar 2 00:18:56.942: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012529602s Mar 2 00:18:58.948: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018016892s Mar 2 00:19:00.953: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023487369s Mar 2 00:19:02.959: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028902638s Mar 2 00:19:04.962: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032827667s Mar 2 00:19:06.967: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.037363481s Mar 2 00:19:08.971: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.041444851s Mar 2 00:19:10.977: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.047447204s Mar 2 00:19:12.985: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.054967367s Mar 2 00:19:14.989: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.059109093s Mar 2 00:19:16.993: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.063232381s Mar 2 00:19:19.005: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.075458589s Mar 2 00:19:21.015: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.084973267s Mar 2 00:19:23.019: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.089278493s Mar 2 00:19:25.023: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.093255171s Mar 2 00:19:27.030: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.100343855s Mar 2 00:19:29.048: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.118094918s Mar 2 00:19:31.053: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.123512784s STEP: Saw pod success Mar 2 00:19:31.053: INFO: Pod "downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7" satisfied condition "Succeeded or Failed" Mar 2 00:19:31.056: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7 container dapi-container: STEP: delete the pod Mar 2 00:19:31.075: INFO: Waiting for pod downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7 to disappear Mar 2 00:19:31.080: INFO: Pod downward-api-3d6892f7-70b6-4ff2-b050-4eb7421434d7 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:31.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-507" for this suite. • [SLOW TEST:38.195 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":244,"failed":0} S ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:31.096: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap that has name configmap-test-emptyKey-83536bb9-bee7-4930-9dec-674b98ae39fc [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:31.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8074" for this suite. 
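The empty-key spec relies on API-server validation rejecting the object, so no pod is ever created. A short client-go sketch of the create call that is expected to fail; namespace, object name and kubeconfig path are placeholders:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // placeholder name
		Data:       map[string]string{"": "value"},                    // empty key fails validation
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Println("expected validation error:", err)
}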
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":11,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:07.761: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating pod Mar 2 00:19:07.790: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:09.794: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:11.796: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:13.795: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:15.794: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:17.794: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:19.793: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:21.807: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:23.797: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:25.794: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:27.803: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:29.805: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:31.793: INFO: The status of Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 is Running (Ready = true) Mar 2 00:19:31.800: INFO: Pod pod-hostip-39624d5e-da8c-4d60-8f6c-f8f1583621c2 has hostIP: 172.17.0.12 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:31.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6795" for this suite. 
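The host-IP spec creates a pod and polls until the kubelet reports status.hostIP (172.17.0.12 in the log above). A polling sketch with client-go; pod name, namespace, poll interval and kubeconfig path are placeholders:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the kubelet has reported status.hostIP for the pod.
	for {
		pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "pod-hostip-example", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if pod.Status.Phase == corev1.PodRunning && pod.Status.HostIP != "" {
			fmt.Println("hostIP:", pod.Status.HostIP)
			return
		}
		time.Sleep(2 * time.Second)
	}
}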
• [SLOW TEST:24.047 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:23.995: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name projected-configmap-test-volume-map-42cf6e7c-bc9c-4c6f-a51a-d162722daff1 STEP: Creating a pod to test consume configMaps Mar 2 00:19:24.032: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca" in namespace "projected-6227" to be "Succeeded or Failed" Mar 2 00:19:24.036: INFO: Pod "pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.664585ms Mar 2 00:19:26.043: INFO: Pod "pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010974167s Mar 2 00:19:28.047: INFO: Pod "pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015081304s Mar 2 00:19:30.052: INFO: Pod "pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020272987s Mar 2 00:19:32.057: INFO: Pod "pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02578201s STEP: Saw pod success Mar 2 00:19:32.057: INFO: Pod "pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca" satisfied condition "Succeeded or Failed" Mar 2 00:19:32.062: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca container agnhost-container: STEP: delete the pod Mar 2 00:19:32.084: INFO: Waiting for pod pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca to disappear Mar 2 00:19:32.092: INFO: Pod pod-projected-configmaps-051b6c61-35c3-40b2-9a7e-4db743af54ca no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:32.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6227" for this suite. 
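The projected-configMap spec mounts a ConfigMap through a projected volume, remaps its key to a nested path ("with mappings"), and runs the pod as a non-root user. A sketch of the pod shape, using the agnhost image from the log; the names, UID, key, and paths are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000) // arbitrary non-root UID for illustration

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// "with mappings": the key is remapped to an explicit path.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				// The real test container reads the mapped file back; "pause"
				// keeps this sketch to the volume wiring only.
				Args: []string{"pause"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.Projected)
}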
• [SLOW TEST:8.108 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:32.128: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create set of pod templates Mar 2 00:19:32.156: INFO: created test-podtemplate-1 Mar 2 00:19:32.161: INFO: created test-podtemplate-2 Mar 2 00:19:32.168: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Mar 2 00:19:32.173: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Mar 2 00:19:32.201: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:32.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7088" for this suite. 
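The pod-template spec creates three templates with a common label, issues a DeleteCollection for that label, and then lists with the same selector to confirm the count. A client-go sketch; the label selector, namespace and kubeconfig path are placeholders:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	sel := "podtemplate-set=true" // hypothetical label put on the three templates
	templates := client.CoreV1().PodTemplates("default")

	// Delete the labelled collection, then list with the same selector to
	// confirm nothing is left.
	if err := templates.DeleteCollection(context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: sel}); err != nil {
		log.Fatal(err)
	}
	list, err := templates.List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("remaining pod templates:", len(list.Items))
}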
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":8,"skipped":198,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:32.815: INFO: >>> kubeConfig: /tmp/kubeconfig-2274336506 STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-7090 [It] should validate Statefulset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating statefulset ss in namespace statefulset-7090 Mar 2 00:18:32.865: INFO: Found 0 stateful pods, waiting for 1 Mar 2 00:18:42.870: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:18:52.873: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:02.870: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Patch Statefulset to include a label STEP: Getting /status Mar 2 00:19:02.888: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) STEP: updating the StatefulSet Status Mar 2 00:19:02.897: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the statefulset status to be updated Mar 2 00:19:02.898: INFO: Observed &StatefulSet event: ADDED Mar 2 00:19:02.898: INFO: Found Statefulset ss in namespace statefulset-7090 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Mar 2 00:19:02.898: INFO: Statefulset ss has an updated status STEP: patching the Statefulset Status Mar 2 00:19:02.898: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Mar 2 00:19:02.903: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} STEP: watching for the Statefulset status to be patched Mar 2 00:19:02.904: INFO: Observed &StatefulSet event: ADDED Mar 2 00:19:02.904: INFO: Observed Statefulset ss in namespace statefulset-7090 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Mar 2 00:19:02.904: INFO: Observed &StatefulSet event: MODIFIED [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Mar 2 00:19:02.904: INFO: Deleting all statefulset in ns statefulset-7090 Mar 2 00:19:02.907: INFO: Scaling 
statefulset ss to 0 Mar 2 00:19:32.950: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:19:32.959: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:33.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7090" for this suite. • [SLOW TEST:60.213 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 should validate Statefulset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":4,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:33.052: INFO: >>> kubeConfig: /tmp/kubeconfig-2274336506 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:33.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5521" for this suite. 
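The Endpoint lifecycle spec walks an Endpoints object through create, list, update, patch and delete-by-collection. A condensed client-go sketch of the create/patch/delete portion; the object name, label, address and kubeconfig path are illustrative:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	eps := client.CoreV1().Endpoints("default")

	// Create a bare Endpoints object.
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "example-endpoint",
			Labels: map[string]string{"test-endpoint-static": "true"},
		},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.1"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80}},
		}},
	}
	if _, err := eps.Create(context.TODO(), ep, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Patch it, mirroring the "patching the Endpoint" step.
	patch := []byte(`{"metadata":{"labels":{"test-endpoint":"patched"}}}`)
	if _, err := eps.Patch(context.TODO(), "example-endpoint", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}

	// Delete it by collection, selecting on the static label.
	if err := eps.DeleteCollection(context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test-endpoint-static=true"}); err != nil {
		log.Fatal(err)
	}
}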
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":5,"skipped":147,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:02.056: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 2 00:19:02.082: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-9156 bc7d537f-1180-49f9-8d4c-7c6bc1bd7b53 12917 0 2023-03-02 00:19:02 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2023-03-02 00:19:02 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7vlfj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vlfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPre
sent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:19:02.086: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:04.091: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:06.091: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:08.090: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:10.091: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:12.093: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:14.091: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:16.090: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:18.090: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:20.092: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:22.091: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:24.090: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 
00:19:26.095: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:28.091: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:30.091: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:32.092: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:34.099: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Mar 2 00:19:34.099: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9156 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:34.099: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 Mar 2 00:19:34.100: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:34.100: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/dns-9156/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) STEP: Verifying customized DNS server is configured on pod... Mar 2 00:19:34.205: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9156 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:34.205: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 Mar 2 00:19:34.206: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:34.206: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/dns-9156/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Mar 2 00:19:34.280: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:34.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9156" for this suite. 
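The DNS spec above builds a pod with dnsPolicy=None and a custom dnsConfig (nameserver 1.1.1.1 and search domain resolv.conf.local, as dumped in the pod spec earlier), then execs agnhost inside the pod to read the resulting resolv.conf back. A sketch of just the pod shape:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
		Spec: corev1.PodSpec{
			// "None" makes dnsConfig the only source of the pod's resolv.conf.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				Args:  []string{"pause"},
			}},
		},
	}
	fmt.Printf("dnsPolicy=%s nameservers=%v searches=%v\n",
		pod.Spec.DNSPolicy, pod.Spec.DNSConfig.Nameservers, pod.Spec.DNSConfig.Searches)
}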
• [SLOW TEST:32.265 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":8,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:13.816: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:19:14.191: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:19:14.213: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:19:16.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:18.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:20.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:22.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:24.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:26.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:28.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:30.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:32.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:19:35.270: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:35.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4722" for this suite. STEP: Destroying namespace "webhook-4722-markers" for this suite. 
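For the mutating webhook the suite exercises the same rule edits, first via an update and then via a patch. A read-modify-write sketch of the update step with client-go, as a counterpart to the JSON-patch sketch shown earlier; the configuration name and kubeconfig path are hypothetical:

package main

import (
	"context"
	"log"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	mwcs := client.AdmissionregistrationV1().MutatingWebhookConfigurations()

	// Read-modify-write: drop CREATE from the first rule so newly created
	// configMaps are no longer mutated (configuration name is hypothetical).
	mwc, err := mwcs.Get(context.TODO(), "e2e-test-mutating-webhook-config", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	mwc.Webhooks[0].Rules[0].Operations = []admissionregistrationv1.OperationType{
		admissionregistrationv1.Update,
	}
	if _, err := mwcs.Update(context.TODO(), mwc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}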
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.601 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":7,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:34.333: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0302 00:19:35.470312 32 metrics_grabber.go:151] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled. Mar 2 00:19:35.470: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:35.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8832" for this suite. 
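The garbage-collector steps above delete the Deployment with deleteOptions.PropagationPolicy set to Orphan and then verify that the ReplicaSet it created survives. A minimal client-go sketch of that delete call, with an assumed namespace and name rather than the ones generated here, looks like this:

// Sketch: delete a Deployment while orphaning its dependents.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteDeploymentOrphan(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	// Orphan propagation strips the owner references instead of cascading the
	// delete, so the ReplicaSet (and its Pods) are left behind.
	return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}

The equivalent kubectl flag is --cascade=orphan.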
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":9,"skipped":173,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:14.961: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name s-test-opt-del-9fb1b303-0aa4-4e3b-becb-cfd477d80e6c STEP: Creating secret with name s-test-opt-upd-601847db-1eb1-4da0-80d9-b20c6ad28a53 STEP: Creating the pod Mar 2 00:18:15.008: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:17.013: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:19.014: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:21.030: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:23.013: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:25.013: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:27.012: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:29.013: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:31.013: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:33.012: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:35.012: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:37.012: INFO: The status of Pod pod-secrets-3d58c99e-419c-416c-af7a-6bfebcdd4226 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-9fb1b303-0aa4-4e3b-becb-cfd477d80e6c STEP: Updating secret s-test-opt-upd-601847db-1eb1-4da0-80d9-b20c6ad28a53 STEP: Creating secret with name s-test-opt-create-2a631e77-295f-4f70-a8b8-77632f756eba STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:35.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1243" for this suite. 
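In the Secrets steps above, the pod mounts secrets that are marked optional in its volume sources; that is what lets the test delete one secret, update another, and create a third while the pod keeps running, and then watch the volume contents converge. A sketch of such a volume definition, with hypothetical names, is:

// Sketch: a pod volume referencing a Secret marked optional, so the pod stays
// Running even if the Secret is deleted after startup (hypothetical names).
package sketch

import corev1 "k8s.io/api/core/v1"

func optionalSecretVolume() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-del", // may be deleted while the pod runs
				Optional:   &optional,
			},
		},
	}
}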
• [SLOW TEST:80.536 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:06.279: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:19:06.669: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:19:06.690: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:19:08.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:10.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:12.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:14.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:16.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:18.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:20.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:22.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:24.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:26.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:28.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:30.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:32.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:19:35.723: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:35.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2683" for this suite. STEP: Destroying namespace "webhook-2683-markers" for this suite. 
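The steps above list the validating webhooks the test registered and then remove them with a single collection delete, after which a configMap that would previously have been rejected can be created. A client-go sketch of the list-then-delete-collection pattern, filtered by a hypothetical label selector, is:

// Sketch: list ValidatingWebhookConfigurations by label, then delete the whole
// collection. The selector value is a hypothetical example.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listAndDeleteWebhooks(ctx context.Context, cs kubernetes.Interface) error {
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhook=true"}
	list, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().List(ctx, sel)
	if err != nil {
		return err
	}
	fmt.Printf("found %d validating webhook configurations\n", len(list.Items))
	// DeleteCollection removes every configuration matching the selector in one call.
	return cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		DeleteCollection(ctx, metav1.DeleteOptions{}, sel)
}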
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:29.697 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":210,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:35.979: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:36.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4701" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":7,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:21.702: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should validate Replicaset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create a Replicaset STEP: Verify that the required pods have come up. 
Mar 2 00:19:21.771: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 2 00:19:26.778: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Getting /status Mar 2 00:19:36.822: INFO: Replicaset test-rs has Conditions: [] STEP: updating the Replicaset Status Mar 2 00:19:36.846: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the ReplicaSet status to be updated Mar 2 00:19:36.847: INFO: Observed &ReplicaSet event: ADDED Mar 2 00:19:36.847: INFO: Observed &ReplicaSet event: MODIFIED Mar 2 00:19:36.847: INFO: Observed &ReplicaSet event: MODIFIED Mar 2 00:19:36.847: INFO: Observed &ReplicaSet event: MODIFIED Mar 2 00:19:36.849: INFO: Found replicaset test-rs in namespace replicaset-1450 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Mar 2 00:19:36.849: INFO: Replicaset test-rs has an updated status STEP: patching the Replicaset Status Mar 2 00:19:36.849: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Mar 2 00:19:36.866: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} STEP: watching for the Replicaset status to be patched Mar 2 00:19:36.868: INFO: Observed &ReplicaSet event: ADDED Mar 2 00:19:36.868: INFO: Observed &ReplicaSet event: MODIFIED Mar 2 00:19:36.868: INFO: Observed &ReplicaSet event: MODIFIED Mar 2 00:19:36.868: INFO: Observed &ReplicaSet event: MODIFIED Mar 2 00:19:36.868: INFO: Observed replicaset test-rs in namespace replicaset-1450 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Mar 2 00:19:36.868: INFO: Observed &ReplicaSet event: MODIFIED Mar 2 00:19:36.873: INFO: Found replicaset test-rs in namespace replicaset-1450 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } Mar 2 00:19:36.873: INFO: Replicaset test-rs has a patched status [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:36.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1450" for this suite. 
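The ReplicaSet steps above exercise the /status subresource twice: an UpdateStatus that adds a StatusUpdate condition, then a merge patch whose payload is printed in the log. A client-go sketch of the patch half is below; the namespace and name mirror the log but are repeated only as an illustration.

// Sketch: merge-patch the status subresource of a ReplicaSet with a custom condition.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func patchReplicaSetStatus(ctx context.Context, cs kubernetes.Interface) error {
	payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	// The trailing "status" argument routes the patch to the /status subresource.
	_, err := cs.AppsV1().ReplicaSets("replicaset-1450").Patch(
		ctx, "test-rs", types.MergePatchType, payload, metav1.PatchOptions{}, "status")
	return err
}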
• [SLOW TEST:15.201 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should validate Replicaset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":12,"skipped":238,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:56.429: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-7831 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7831 STEP: Waiting until pod test-pod will start running in namespace statefulset-7831 STEP: Creating statefulset with conflicting port in namespace statefulset-7831 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7831 Mar 2 00:18:18.525: INFO: Observed stateful pod in namespace: statefulset-7831, name: ss-0, uid: 30562e9d-f98f-4906-ac52-f3a2a285d120, status phase: Pending. Waiting for statefulset controller to delete. Mar 2 00:18:50.183: INFO: Observed stateful pod in namespace: statefulset-7831, name: ss-0, uid: 30562e9d-f98f-4906-ac52-f3a2a285d120, status phase: Failed. Waiting for statefulset controller to delete. Mar 2 00:18:50.211: INFO: Observed stateful pod in namespace: statefulset-7831, name: ss-0, uid: 30562e9d-f98f-4906-ac52-f3a2a285d120, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 2 00:18:50.224: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7831 STEP: Removing pod with conflicting port in namespace statefulset-7831 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7831 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Mar 2 00:19:18.346: INFO: Deleting all statefulset in ns statefulset-7831 Mar 2 00:19:18.349: INFO: Scaling statefulset ss to 0 Mar 2 00:19:38.385: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:19:38.404: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:38.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7831" for this suite. • [SLOW TEST:162.110 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":5,"skipped":104,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:28.249: INFO: >>> kubeConfig: /tmp/kubeconfig-2889160273 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:39.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6849" for this suite. • [SLOW TEST:11.268 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":5,"skipped":173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:54.064: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:18:54.471: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:18:56.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:18:58.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:00.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:02.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:04.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:06.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:08.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:10.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:12.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 18, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:19:15.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 2 00:19:41.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1090799654 --namespace=webhook-7322 attach --namespace=webhook-7322 to-be-attached-pod -i -c=container1' Mar 2 00:19:41.592: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:41.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7322" for this suite. STEP: Destroying namespace "webhook-7322-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:48.002 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":3,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:51.808: INFO: >>> kubeConfig: /tmp/kubeconfig-3780238515 STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-map-409c43ce-8ddf-454a-a901-3147bbd1ff84 STEP: Creating a pod to test consume configMaps Mar 2 00:18:51.844: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970" in namespace "configmap-8311" to be "Succeeded or Failed" Mar 2 00:18:51.848: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74555ms Mar 2 00:18:53.852: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007459906s Mar 2 00:18:55.857: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012511378s Mar 2 00:18:57.861: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016333084s Mar 2 00:18:59.864: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020046942s Mar 2 00:19:01.869: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024525569s Mar 2 00:19:03.873: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029034361s Mar 2 00:19:05.877: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032451678s Mar 2 00:19:07.881: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 16.03686302s Mar 2 00:19:09.885: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 18.041041868s Mar 2 00:19:11.891: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.047233354s Mar 2 00:19:13.901: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 22.056525584s Mar 2 00:19:15.905: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 24.061224341s Mar 2 00:19:17.909: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 26.065193911s Mar 2 00:19:19.913: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 28.068931888s Mar 2 00:19:21.917: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 30.072972167s Mar 2 00:19:23.924: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 32.079813981s Mar 2 00:19:25.935: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 34.090647019s Mar 2 00:19:27.940: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 36.096085363s Mar 2 00:19:29.948: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 38.103858287s Mar 2 00:19:31.952: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 40.10812999s Mar 2 00:19:33.972: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 42.128259505s Mar 2 00:19:35.980: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 44.135570954s Mar 2 00:19:37.997: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 46.153258428s Mar 2 00:19:40.020: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Pending", Reason="", readiness=false. Elapsed: 48.175722085s Mar 2 00:19:42.040: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.195373862s STEP: Saw pod success Mar 2 00:19:42.040: INFO: Pod "pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970" satisfied condition "Succeeded or Failed" Mar 2 00:19:42.063: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970 container agnhost-container: STEP: delete the pod Mar 2 00:19:42.195: INFO: Waiting for pod pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970 to disappear Mar 2 00:19:42.250: INFO: Pod pod-configmaps-6f9c4052-559e-463b-92c4-4b776f8d1970 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:42.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8311" for this suite. 
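"In volume with mappings" means the configMap keys are remapped to specific file paths through items/keyToPath before the pod reads them back. A sketch of that volume source, with hypothetical key and path names, is:

// Sketch: mount a ConfigMap into a volume while remapping a key to a custom
// path ("with mappings"). Names are hypothetical.
package sketch

import corev1 "k8s.io/api/core/v1"

func mappedConfigMapVolume() corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{
					// The container then reads the value at <mountPath>/path/to/data-2.
					{Key: "data-2", Path: "path/to/data-2"},
				},
			},
		},
	}
}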
• [SLOW TEST:50.509 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:22.158: INFO: >>> kubeConfig: /tmp/kubeconfig-3970272666 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:19:22.191: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34" in namespace "projected-3352" to be "Succeeded or Failed" Mar 2 00:19:22.196: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.929358ms Mar 2 00:19:24.200: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008964355s Mar 2 00:19:26.211: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01972172s Mar 2 00:19:28.219: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028044401s Mar 2 00:19:30.225: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03456564s Mar 2 00:19:32.233: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042362366s Mar 2 00:19:34.250: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 12.059558042s Mar 2 00:19:36.275: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 14.084481717s Mar 2 00:19:38.302: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 16.111276171s Mar 2 00:19:40.327: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Pending", Reason="", readiness=false. Elapsed: 18.135765254s Mar 2 00:19:42.358: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.167646859s STEP: Saw pod success Mar 2 00:19:42.358: INFO: Pod "downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34" satisfied condition "Succeeded or Failed" Mar 2 00:19:42.374: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34 container client-container: STEP: delete the pod Mar 2 00:19:42.542: INFO: Waiting for pod downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34 to disappear Mar 2 00:19:42.582: INFO: Pod downwardapi-volume-fade024e-18f0-4b24-9faf-5fb0cf0a1e34 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3352" for this suite. • [SLOW TEST:20.527 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:31.156: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:44.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4900" for this suite. • [SLOW TEST:13.327 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":12,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:23.559: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name projected-configmap-test-volume-b6da0d76-5315-4fcf-b57c-7c5c5e41f22e STEP: Creating a pod to test consume configMaps Mar 2 00:19:23.612: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be" in namespace "projected-4285" to be "Succeeded or Failed" Mar 2 00:19:23.616: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014361ms Mar 2 00:19:25.621: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00860614s Mar 2 00:19:27.632: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019713535s Mar 2 00:19:29.642: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029798266s Mar 2 00:19:31.647: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035003411s Mar 2 00:19:33.671: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058528581s Mar 2 00:19:35.676: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063878926s Mar 2 00:19:37.708: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 14.096202263s Mar 2 00:19:39.734: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 16.121475423s Mar 2 00:19:41.756: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 18.143979073s Mar 2 00:19:43.761: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Pending", Reason="", readiness=false. Elapsed: 20.148646148s Mar 2 00:19:45.796: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.183525516s STEP: Saw pod success Mar 2 00:19:45.796: INFO: Pod "pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be" satisfied condition "Succeeded or Failed" Mar 2 00:19:45.814: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be container agnhost-container: STEP: delete the pod Mar 2 00:19:45.956: INFO: Waiting for pod pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be to disappear Mar 2 00:19:46.022: INFO: Pod pod-projected-configmaps-0ce36059-1406-4045-b44e-a2a51cb289be no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:46.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4285" for this suite. • [SLOW TEST:22.528 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:31.836: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2984.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2984.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2984.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2984.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:19:46.168: INFO: DNS probes using dns-2984/dns-test-014a0994-1c53-42b0-9e7a-ce67e86513a2 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:46.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2984" for 
this suite. • [SLOW TEST:14.771 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:46.632: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: starting the proxy server Mar 2 00:19:46.652: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=kubectl-27 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:46.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-27" for this suite. 
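For reference, the kubectl proxy behaviour exercised above (running the proxy with --port 0 and then curling /api/) can be reproduced by hand. The snippet below is a minimal sketch, not the framework's own code; with --port=0 kubectl picks a free port and prints it at startup, so the port in the comment is illustrative.

# Start the proxy on a random free port; kubectl prints "Starting to serve on 127.0.0.1:<port>"
kubectl proxy --port=0 --disable-filter=true &
# Query the API root through the proxy (replace 43210 with the port kubectl actually printed)
curl http://127.0.0.1:43210/api/

Disabling the request filter (--disable-filter=true) is what lets arbitrary paths be proxied; kubectl warns that this is only appropriate in test environments.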
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":10,"skipped":305,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:46.708: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create deployment with httpd image Mar 2 00:19:46.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=kubectl-7624 create -f -' Mar 2 00:19:46.852: INFO: stderr: "" Mar 2 00:19:46.852: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Mar 2 00:19:46.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=kubectl-7624 diff -f -' Mar 2 00:19:46.977: INFO: rc: 1 Mar 2 00:19:46.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-2977999873 --namespace=kubectl-7624 delete -f -' Mar 2 00:19:47.042: INFO: stderr: "" Mar 2 00:19:47.042: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:47.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7624" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":11,"skipped":322,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:20.914: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:20.957: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8" in namespace "security-context-test-3340" to be "Succeeded or Failed" Mar 2 00:19:20.967: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.786418ms Mar 2 00:19:22.972: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014792554s Mar 2 00:19:24.976: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019244325s Mar 2 00:19:26.982: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025246386s Mar 2 00:19:28.995: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038598207s Mar 2 00:19:31.014: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057184612s Mar 2 00:19:33.023: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.06587622s Mar 2 00:19:35.028: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.071438024s Mar 2 00:19:37.047: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.090254557s Mar 2 00:19:39.083: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.12647107s Mar 2 00:19:41.105: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.148237999s Mar 2 00:19:43.126: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.168932058s Mar 2 00:19:45.156: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.199712132s Mar 2 00:19:47.167: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.209811211s Mar 2 00:19:47.167: INFO: Pod "busybox-readonly-false-dc92edea-275b-4c41-8466-774bf5a895c8" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:47.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3340" for this suite. 
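The security-context test above only asserts that a container with readOnlyRootFilesystem=false can write to its root filesystem. A stand-alone equivalent looks roughly like this; the pod name, image tag and command are illustrative, not the ones the framework uses.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.36
    # Writing to the root filesystem succeeds because it is not mounted read-only
    command: ["sh", "-c", "echo ok > /rootfs-write-test && cat /rootfs-write-test"]
    securityContext:
      readOnlyRootFilesystem: false
EOF

The pod is expected to reach Succeeded; flipping the flag to true would make the write fail and the pod end up Failed instead.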
• [SLOW TEST:26.303 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":112,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:23.705: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:23.732: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 2 00:19:23.759: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:25.764: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:27.770: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:29.771: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:31.765: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:33.776: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:35.766: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:37.783: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:39.770: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:41.776: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:43.763: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be 
Running (with Ready = true) Mar 2 00:19:45.782: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:47.765: INFO: The status of Pod pod-exec-websocket-0d57d9c1-015e-4716-863e-1aa51efa62c4 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:47.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7222" for this suite. • [SLOW TEST:24.129 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":136,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:25.902: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 2 00:19:25.954: INFO: Waiting up to 5m0s for pod "pod-50eda472-f5b9-4193-a666-9f396d18489a" in namespace "emptydir-2815" to be "Succeeded or Failed" Mar 2 00:19:25.961: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.193186ms Mar 2 00:19:27.968: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014055613s Mar 2 00:19:29.976: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021454677s Mar 2 00:19:31.980: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026291794s Mar 2 00:19:33.987: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032953182s Mar 2 00:19:35.995: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040745784s Mar 2 00:19:38.012: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.058269897s Mar 2 00:19:40.035: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.081000862s Mar 2 00:19:42.055: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101121198s Mar 2 00:19:44.061: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.106904474s Mar 2 00:19:46.082: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.128420364s Mar 2 00:19:48.095: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.141141624s Mar 2 00:19:50.102: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.147808638s STEP: Saw pod success Mar 2 00:19:50.102: INFO: Pod "pod-50eda472-f5b9-4193-a666-9f396d18489a" satisfied condition "Succeeded or Failed" Mar 2 00:19:50.105: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-50eda472-f5b9-4193-a666-9f396d18489a container test-container: STEP: delete the pod Mar 2 00:19:50.126: INFO: Waiting for pod pod-50eda472-f5b9-4193-a666-9f396d18489a to disappear Mar 2 00:19:50.132: INFO: Pod pod-50eda472-f5b9-4193-a666-9f396d18489a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:50.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2815" for this suite. • [SLOW TEST:24.238 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:13.287: INFO: >>> kubeConfig: /tmp/kubeconfig-446130155 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:19:13.330: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01" in namespace "downward-api-9639" to be "Succeeded or Failed" Mar 2 00:19:13.340: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.180606ms Mar 2 00:19:15.348: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018119171s Mar 2 00:19:17.355: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.025482672s Mar 2 00:19:19.361: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031051663s Mar 2 00:19:21.365: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035570929s Mar 2 00:19:23.370: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040422413s Mar 2 00:19:25.374: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 12.044336745s Mar 2 00:19:27.390: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 14.059925367s Mar 2 00:19:29.407: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 16.076887973s Mar 2 00:19:31.412: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 18.082557058s Mar 2 00:19:33.446: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 20.116360249s Mar 2 00:19:35.452: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 22.122216734s Mar 2 00:19:37.477: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 24.14695459s Mar 2 00:19:39.509: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 26.179273185s Mar 2 00:19:41.537: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 28.207594237s Mar 2 00:19:43.541: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 30.211445754s Mar 2 00:19:45.560: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 32.230841606s Mar 2 00:19:47.574: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 34.244333518s Mar 2 00:19:49.581: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Pending", Reason="", readiness=false. Elapsed: 36.250899509s Mar 2 00:19:51.585: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.255298022s STEP: Saw pod success Mar 2 00:19:51.585: INFO: Pod "downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01" satisfied condition "Succeeded or Failed" Mar 2 00:19:51.588: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01 container client-container: STEP: delete the pod Mar 2 00:19:51.616: INFO: Waiting for pod downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01 to disappear Mar 2 00:19:51.642: INFO: Pod downwardapi-volume-7a11acae-cb8b-43dd-b81b-eab12a8acb01 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:51.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9639" for this suite. 
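The downward-api test that just finished checks that, when a container sets no memory limit, a downward API volume reporting limits.memory falls back to the node's allocatable memory. A sketch of such a pod follows; the pod name and image are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36
    # No resources.limits.memory is set, so limits.memory below defaults to node allocatable
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF

Reading the pod's logs then prints the allocatable memory (in bytes) of whichever node the pod landed on.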
• [SLOW TEST:38.369 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":66,"failed":0} SSSSSS ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:27.313: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 2 00:19:27.373: INFO: Waiting up to 5m0s for pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1" in namespace "emptydir-8378" to be "Succeeded or Failed" Mar 2 00:19:27.389: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.65522ms Mar 2 00:19:29.406: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033294321s Mar 2 00:19:31.414: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041160697s Mar 2 00:19:33.445: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071560888s Mar 2 00:19:35.449: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076059134s Mar 2 00:19:37.472: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098881183s Mar 2 00:19:39.504: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131112622s Mar 2 00:19:41.519: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.145526504s Mar 2 00:19:43.525: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.151968217s Mar 2 00:19:45.551: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.178191111s Mar 2 00:19:47.559: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.185486509s Mar 2 00:19:49.564: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.190498098s Mar 2 00:19:51.568: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.194562627s Mar 2 00:19:53.572: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.19890042s STEP: Saw pod success Mar 2 00:19:53.572: INFO: Pod "pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1" satisfied condition "Succeeded or Failed" Mar 2 00:19:53.575: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1 container test-container: STEP: delete the pod Mar 2 00:19:53.592: INFO: Waiting for pod pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1 to disappear Mar 2 00:19:53.598: INFO: Pod pod-aa14653a-ccb9-48cc-9ad3-e011db6faae1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:53.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8378" for this suite. • [SLOW TEST:26.294 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:16.651: INFO: >>> kubeConfig: /tmp/kubeconfig-3654524011 STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Mar 2 00:19:16.702: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 2 00:19:21.708: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:53.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6450" for this suite. 
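The ReplicaSet test above drives the /scale subresource rather than editing spec.replicas directly. Outside the framework the same subresource can be exercised with kubectl; the namespace and replica count below are illustrative.

# kubectl scale updates the ReplicaSet through its scale subresource
kubectl -n replicaset-demo scale replicaset test-rs --replicas=2
# The subresource can also be read directly from the API server
kubectl get --raw /apis/apps/v1/namespaces/replicaset-demo/replicasets/test-rs/scale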
• [SLOW TEST:37.101 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":121,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:53.627: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:53.649: INFO: Got root ca configmap in namespace "svcaccounts-267" Mar 2 00:19:53.655: INFO: Deleted root ca configmap in namespace "svcaccounts-267" STEP: waiting for a new root ca configmap created Mar 2 00:19:54.160: INFO: Recreated root ca configmap in namespace "svcaccounts-267" Mar 2 00:19:54.165: INFO: Updated root ca configmap in namespace "svcaccounts-267" STEP: waiting for the root ca configmap reconciled Mar 2 00:19:54.669: INFO: Reconciled root ca configmap in namespace "svcaccounts-267" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:54.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-267" for this suite. 
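The ServiceAccounts test relies on the root-CA publisher controller keeping a kube-root-ca.crt ConfigMap present in every namespace, recreating it if it is deleted. That guarantee is easy to observe by hand; the namespace name and the settle delay below are illustrative.

kubectl create namespace rootca-demo
kubectl -n rootca-demo get configmap kube-root-ca.crt    # created automatically
kubectl -n rootca-demo delete configmap kube-root-ca.crt
sleep 2                                                  # give the controller a moment
kubectl -n rootca-demo get configmap kube-root-ca.crt    # recreated by the controller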
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":7,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:28.647: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:19:29.121: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:19:31.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:33.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:35.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:37.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:39.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:41.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:43.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:45.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:47.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:49.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:51.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:53.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 29, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:19:56.156: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:56.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7060" for this suite. STEP: Destroying namespace "webhook-7060-markers" for this suite. 
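The admission-webhook test above, after its webhook deployment and service become ready, lists and then deletes mutating webhook configurations as a collection. The equivalent kubectl operations are sketched below; the label selector is illustrative, since the framework uses its own test labels.

# List every MutatingWebhookConfiguration in the cluster
kubectl get mutatingwebhookconfigurations
# Delete a group of them in one call using a label selector
kubectl delete mutatingwebhookconfigurations -l example.com/e2e-demo=true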
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:27.785 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:56.443: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:56.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7269" for this suite. 
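The CustomResourceDefinition test above only reads discovery documents. The same documents can be fetched directly through the API server with kubectl's raw mode; these are the standard discovery endpoints, no test-specific names are involved.

kubectl get --raw /apis                          # group list; should include apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io     # group document listing the v1 version
kubectl get --raw /apis/apiextensions.k8s.io/v1  # resource list including customresourcedefinitions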
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":7,"skipped":129,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:26.166: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars Mar 2 00:19:26.201: INFO: Waiting up to 5m0s for pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59" in namespace "downward-api-5103" to be "Succeeded or Failed" Mar 2 00:19:26.212: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 10.995344ms Mar 2 00:19:28.220: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018968421s Mar 2 00:19:30.226: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024607885s Mar 2 00:19:32.233: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031967822s Mar 2 00:19:34.259: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058147304s Mar 2 00:19:36.280: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078575003s Mar 2 00:19:38.311: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 12.110043259s Mar 2 00:19:40.331: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 14.1300057s Mar 2 00:19:42.365: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 16.164122137s Mar 2 00:19:44.369: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 18.167644722s Mar 2 00:19:46.407: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 20.205984872s Mar 2 00:19:48.411: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 22.210388664s Mar 2 00:19:50.417: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 24.215632808s Mar 2 00:19:52.421: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 26.220327916s Mar 2 00:19:54.425: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.224412772s Mar 2 00:19:56.434: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.232582982s STEP: Saw pod success Mar 2 00:19:56.434: INFO: Pod "downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59" satisfied condition "Succeeded or Failed" Mar 2 00:19:56.440: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59 container dapi-container: STEP: delete the pod Mar 2 00:19:56.483: INFO: Waiting for pod downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59 to disappear Mar 2 00:19:56.503: INFO: Pod downward-api-4ba449e5-4170-43d4-9380-6eb707c2bf59 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:56.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5103" for this suite. • [SLOW TEST:30.357 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:30.939: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 2 00:19:31.000: INFO: Waiting up to 5m0s for pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a" in namespace "emptydir-5871" to be "Succeeded or Failed" Mar 2 00:19:31.008: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.463127ms Mar 2 00:19:33.015: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01427116s Mar 2 00:19:35.020: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019834887s Mar 2 00:19:37.035: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034532952s Mar 2 00:19:39.077: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076736441s Mar 2 00:19:41.097: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.096490127s Mar 2 00:19:43.117: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.116259721s Mar 2 00:19:45.160: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.159292115s Mar 2 00:19:47.172: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.172007852s Mar 2 00:19:49.177: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.176364216s Mar 2 00:19:51.182: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.181999906s Mar 2 00:19:53.187: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.186824323s Mar 2 00:19:55.194: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.193630087s Mar 2 00:19:57.199: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.198335872s STEP: Saw pod success Mar 2 00:19:57.199: INFO: Pod "pod-ae42d812-7e01-4a9f-b799-f842c9a9819a" satisfied condition "Succeeded or Failed" Mar 2 00:19:57.202: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod pod-ae42d812-7e01-4a9f-b799-f842c9a9819a container test-container: STEP: delete the pod Mar 2 00:19:57.223: INFO: Waiting for pod pod-ae42d812-7e01-4a9f-b799-f842c9a9819a to disappear Mar 2 00:19:57.231: INFO: Pod pod-ae42d812-7e01-4a9f-b799-f842c9a9819a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:57.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5871" for this suite. • [SLOW TEST:26.307 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":185,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:57.249: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:57.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6976" for this suite. 
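The Services check above simply confirms that a freshly created service shows up when listing across all namespaces. By hand that amounts to the commands below; the field-selector value is illustrative.

kubectl get services --all-namespaces
# or narrow the cross-namespace listing to a single service name
kubectl get services --all-namespaces --field-selector metadata.name=kubernetes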
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:44.495: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating the pod Mar 2 00:19:44.532: INFO: The status of Pod labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:46.583: INFO: The status of Pod labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:48.537: INFO: The status of Pod labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:50.537: INFO: The status of Pod labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:52.537: INFO: The status of Pod labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:54.536: INFO: The status of Pod labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:56.541: INFO: The status of Pod labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b is Running (Ready = true) Mar 2 00:19:57.068: INFO: Successfully updated pod "labelsupdate554047be-c948-4e7c-b96f-c27a87c05f9b" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:19:59.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1463" for this suite. 
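The labels-update spec above mounts the pod's own metadata.labels through a downwardAPI volume, patches the labels once the pod is Running ("Successfully updated pod"), and waits for the projected file to change. A rough client-go sketch of that flow follows, assuming an existing cluster; the namespace, pod name, image, and label values are illustrative, not the generated ones (downward-api-1463, labelsupdate...) used in this run.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default" // the suite uses a generated namespace instead

    // A pod whose downwardAPI volume projects metadata.labels into /etc/podinfo/labels.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "labels-demo", Labels: map[string]string{"stage": "before"}},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "watcher",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
    if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Changing the labels afterwards corresponds to the spec's update step; the
    // kubelet rewrites the projected file without restarting the container.
    patch := []byte(`{"metadata":{"labels":{"stage":"after"}}}`)
    if _, err := client.CoreV1().Pods(ns).Patch(context.TODO(), "labels-demo",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }
}

Because downwardAPI volume contents are refreshed by the kubelet rather than fixed at container start, the label change shows up in the mounted file while the pod keeps running, which is what the spec waits for.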
• [SLOW TEST:14.602 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":296,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:26.242: INFO: >>> kubeConfig: /tmp/kubeconfig-2153458730 STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod busybox-c377a2eb-f6f1-41ea-94c9-ed47add00e75 in namespace container-probe-7189 Mar 2 00:18:56.277: INFO: Started pod busybox-c377a2eb-f6f1-41ea-94c9-ed47add00e75 in namespace container-probe-7189 STEP: checking the pod's current state and verifying that restartCount is present Mar 2 00:18:56.280: INFO: Initial restart count of pod busybox-c377a2eb-f6f1-41ea-94c9-ed47add00e75 is 0 Mar 2 00:20:00.581: INFO: Restart count of pod container-probe-7189/busybox-c377a2eb-f6f1-41ea-94c9-ed47add00e75 is now 1 (1m4.301811662s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:00.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7189" for this suite. 
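The container-probe spec above is the classic exec-liveness pattern: the container creates /tmp/health, removes it later, the "cat /tmp/health" probe starts failing, and the kubelet restarts the container, which is why the restart count moves from 0 to 1 after roughly a minute. A small sketch of a pod in that shape follows; the timings and the busybox command are illustrative, not the conformance test's exact values.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Exec probe in the same shape as the spec above: run "cat /tmp/health".
    probe := &corev1.Probe{}
    probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}
    probe.InitialDelaySeconds = 5
    probe.PeriodSeconds = 5
    probe.FailureThreshold = 1

    // The container creates /tmp/health and later removes it, so the probe
    // begins to fail and the kubelet restarts the container (restartCount 0 -> 1).
    // Timings here are illustrative only.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "busybox",
                Image: "busybox",
                Command: []string{"sh", "-c",
                    "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
                LivenessProbe: probe,
            }},
        },
    }
    fmt.Println("would create pod", pod.Name)
}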
• [SLOW TEST:94.361 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:45.030: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service endpoint-test2 in namespace services-9160 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9160 to expose endpoints map[] Mar 2 00:18:45.100: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Mar 2 00:18:46.110: INFO: successfully validated that service endpoint-test2 in namespace services-9160 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9160 Mar 2 00:18:46.126: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:48.131: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:50.132: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:52.130: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:54.130: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:56.129: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:58.130: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9160 to expose endpoints map[pod1:[80]] Mar 2 00:18:58.143: INFO: successfully validated that service endpoint-test2 in namespace services-9160 exposes endpoints map[pod1:[80]] STEP: Checking if the Service forwards traffic to pod1 Mar 2 00:18:58.143: INFO: Creating new exec pod Mar 2 00:19:35.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3825818549 --namespace=services-9160 exec execpoddj7dt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Mar 2 00:19:35.271: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Mar 2 00:19:35.271: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:35.271: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-3825818549 --namespace=services-9160 exec execpoddj7dt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.88.156 80' Mar 2 00:19:35.412: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.88.156 80\nConnection to 10.43.88.156 80 port [tcp/http] succeeded!\n" Mar 2 00:19:35.412: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" STEP: Creating pod pod2 in namespace services-9160 Mar 2 00:19:35.440: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:37.464: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:39.452: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:41.465: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:43.445: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:45.483: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:47.445: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:49.444: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:51.445: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:53.444: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:55.445: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:57.446: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9160 to expose endpoints map[pod1:[80] pod2:[80]] Mar 2 00:19:57.464: INFO: successfully validated that service endpoint-test2 in namespace services-9160 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Checking if the Service forwards traffic to pod1 and pod2 Mar 2 00:19:58.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3825818549 --namespace=services-9160 exec execpoddj7dt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Mar 2 00:19:58.564: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Mar 2 00:19:58.564: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:58.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3825818549 --namespace=services-9160 exec execpoddj7dt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.88.156 80' Mar 2 00:19:58.695: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.88.156 80\nConnection to 10.43.88.156 80 port [tcp/http] succeeded!\n" Mar 2 00:19:58.695: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" STEP: Deleting pod pod1 in namespace services-9160 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9160 to expose endpoints map[pod2:[80]] Mar 2 00:19:58.769: INFO: successfully validated that service endpoint-test2 in namespace services-9160 exposes endpoints map[pod2:[80]] STEP: Checking if the Service forwards traffic to pod2 Mar 2 00:19:59.769: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-3825818549 --namespace=services-9160 exec execpoddj7dt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Mar 2 00:19:59.842: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Mar 2 00:19:59.842: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:59.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3825818549 --namespace=services-9160 exec execpoddj7dt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.88.156 80' Mar 2 00:19:59.971: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.88.156 80\nConnection to 10.43.88.156 80 port [tcp/http] succeeded!\n" Mar 2 00:19:59.971: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" STEP: Deleting pod pod2 in namespace services-9160 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9160 to expose endpoints map[] Mar 2 00:20:01.011: INFO: successfully validated that service endpoint-test2 in namespace services-9160 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:01.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9160" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:76.011 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":4,"skipped":96,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:20:00.615: INFO: >>> kubeConfig: /tmp/kubeconfig-2153458730 STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 2 00:20:01.237: INFO: starting watch STEP: patching STEP: updating Mar 2 00:20:01.246: INFO: waiting for watch events with expected annotations Mar 2 00:20:01.246: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:01.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-7115" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:20:01.044: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549 STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:20:01.381: INFO: Checking APIGroup: apiregistration.k8s.io Mar 2 00:20:01.382: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Mar 2 00:20:01.382: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] Mar 2 00:20:01.382: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Mar 2 00:20:01.382: INFO: Checking APIGroup: apps Mar 2 00:20:01.382: INFO: PreferredVersion.GroupVersion: apps/v1 Mar 2 00:20:01.382: INFO: Versions found [{apps/v1 v1}] Mar 2 00:20:01.382: INFO: apps/v1 matches apps/v1 Mar 2 00:20:01.382: INFO: Checking APIGroup: events.k8s.io Mar 2 00:20:01.383: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Mar 2 00:20:01.383: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Mar 2 00:20:01.383: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Mar 2 00:20:01.383: INFO: Checking APIGroup: authentication.k8s.io Mar 2 00:20:01.383: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Mar 2 00:20:01.383: INFO: Versions found [{authentication.k8s.io/v1 v1}] Mar 2 00:20:01.383: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Mar 2 00:20:01.383: INFO: Checking APIGroup: authorization.k8s.io Mar 2 00:20:01.384: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Mar 2 00:20:01.384: INFO: Versions found [{authorization.k8s.io/v1 v1}] Mar 2 00:20:01.384: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Mar 2 00:20:01.384: INFO: Checking APIGroup: autoscaling Mar 2 00:20:01.384: INFO: PreferredVersion.GroupVersion: autoscaling/v2 Mar 2 00:20:01.384: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Mar 2 00:20:01.384: INFO: autoscaling/v2 matches autoscaling/v2 Mar 2 00:20:01.384: INFO: Checking APIGroup: batch Mar 2 00:20:01.385: INFO: PreferredVersion.GroupVersion: batch/v1 Mar 2 00:20:01.385: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Mar 2 00:20:01.385: INFO: batch/v1 matches batch/v1 Mar 2 00:20:01.385: INFO: Checking APIGroup: certificates.k8s.io Mar 2 00:20:01.385: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Mar 2 00:20:01.385: INFO: Versions found [{certificates.k8s.io/v1 v1}] Mar 2 00:20:01.385: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Mar 2 00:20:01.385: INFO: Checking APIGroup: networking.k8s.io Mar 2 00:20:01.385: INFO: 
PreferredVersion.GroupVersion: networking.k8s.io/v1 Mar 2 00:20:01.385: INFO: Versions found [{networking.k8s.io/v1 v1}] Mar 2 00:20:01.385: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Mar 2 00:20:01.385: INFO: Checking APIGroup: policy Mar 2 00:20:01.386: INFO: PreferredVersion.GroupVersion: policy/v1 Mar 2 00:20:01.386: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Mar 2 00:20:01.386: INFO: policy/v1 matches policy/v1 Mar 2 00:20:01.386: INFO: Checking APIGroup: rbac.authorization.k8s.io Mar 2 00:20:01.386: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Mar 2 00:20:01.386: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] Mar 2 00:20:01.386: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Mar 2 00:20:01.386: INFO: Checking APIGroup: storage.k8s.io Mar 2 00:20:01.387: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Mar 2 00:20:01.387: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Mar 2 00:20:01.387: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Mar 2 00:20:01.387: INFO: Checking APIGroup: admissionregistration.k8s.io Mar 2 00:20:01.387: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Mar 2 00:20:01.387: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] Mar 2 00:20:01.387: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Mar 2 00:20:01.387: INFO: Checking APIGroup: apiextensions.k8s.io Mar 2 00:20:01.388: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Mar 2 00:20:01.388: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] Mar 2 00:20:01.388: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Mar 2 00:20:01.388: INFO: Checking APIGroup: scheduling.k8s.io Mar 2 00:20:01.388: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Mar 2 00:20:01.388: INFO: Versions found [{scheduling.k8s.io/v1 v1}] Mar 2 00:20:01.388: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Mar 2 00:20:01.388: INFO: Checking APIGroup: coordination.k8s.io Mar 2 00:20:01.388: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Mar 2 00:20:01.388: INFO: Versions found [{coordination.k8s.io/v1 v1}] Mar 2 00:20:01.388: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Mar 2 00:20:01.388: INFO: Checking APIGroup: node.k8s.io Mar 2 00:20:01.389: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Mar 2 00:20:01.389: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Mar 2 00:20:01.389: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Mar 2 00:20:01.389: INFO: Checking APIGroup: discovery.k8s.io Mar 2 00:20:01.389: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Mar 2 00:20:01.389: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Mar 2 00:20:01.389: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Mar 2 00:20:01.389: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Mar 2 00:20:01.390: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 Mar 2 00:20:01.390: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Mar 2 00:20:01.390: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 Mar 2 00:20:01.390: INFO: Checking APIGroup: helm.cattle.io Mar 2 00:20:01.390: INFO: PreferredVersion.GroupVersion: helm.cattle.io/v1 Mar 2 00:20:01.390: INFO: Versions found [{helm.cattle.io/v1 v1}] Mar 2 00:20:01.390: INFO: 
helm.cattle.io/v1 matches helm.cattle.io/v1 Mar 2 00:20:01.390: INFO: Checking APIGroup: k3s.cattle.io Mar 2 00:20:01.390: INFO: PreferredVersion.GroupVersion: k3s.cattle.io/v1 Mar 2 00:20:01.390: INFO: Versions found [{k3s.cattle.io/v1 v1}] Mar 2 00:20:01.390: INFO: k3s.cattle.io/v1 matches k3s.cattle.io/v1 Mar 2 00:20:01.390: INFO: Checking APIGroup: metrics.k8s.io Mar 2 00:20:01.391: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 Mar 2 00:20:01.391: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] Mar 2 00:20:01.391: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:01.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-8626" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":5,"skipped":101,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:46.124: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Mar 2 00:19:46.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187" in namespace "projected-8798" to be "Succeeded or Failed" Mar 2 00:19:46.470: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. Elapsed: 62.615892ms Mar 2 00:19:48.476: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068495465s Mar 2 00:19:50.481: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07360843s Mar 2 00:19:52.486: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078213588s Mar 2 00:19:54.489: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081359242s Mar 2 00:19:56.505: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097158544s Mar 2 00:19:58.509: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. Elapsed: 12.101430951s Mar 2 00:20:00.514: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.106822348s Mar 2 00:20:02.521: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.113814351s STEP: Saw pod success Mar 2 00:20:02.521: INFO: Pod "downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187" satisfied condition "Succeeded or Failed" Mar 2 00:20:02.527: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187 container client-container: STEP: delete the pod Mar 2 00:20:02.554: INFO: Waiting for pod downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187 to disappear Mar 2 00:20:02.562: INFO: Pod downwardapi-volume-9940ae53-1aca-4913-a7dc-93b792e09187 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:02.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8798" for this suite. • [SLOW TEST:16.450 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:20.968: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wt44l in namespace proxy-1579 I0302 00:19:21.021735 366 runners.go:193] Created replication controller with name: proxy-service-wt44l, namespace: proxy-1579, replica count: 1 I0302 00:19:22.072398 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:23.072482 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:24.072615 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:25.072696 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:26.072776 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:27.072863 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:28.072953 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:29.073289 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:30.073379 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:31.073655 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:32.073728 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:33.073843 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:34.074743 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:35.075847 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:36.076748 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:37.076835 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:38.076919 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:39.077493 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:40.077603 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:41.077697 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:42.077828 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:43.078901 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:44.079060 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:45.080066 366 runners.go:193] proxy-service-wt44l Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:19:45.086: INFO: setup took 24.09281617s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 2 00:19:45.104: INFO: (0) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: 
tls baz (200; 17.569805ms) Mar 2 00:19:45.133: INFO: (0) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 47.087401ms) Mar 2 00:19:45.134: INFO: (0) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test (200; 55.985605ms) Mar 2 00:19:45.143: INFO: (0) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 56.795823ms) Mar 2 00:19:45.143: INFO: (0) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 56.988518ms) Mar 2 00:19:45.143: INFO: (0) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 57.057901ms) Mar 2 00:19:45.144: INFO: (0) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 57.230909ms) Mar 2 00:19:45.144: INFO: (0) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 57.562658ms) Mar 2 00:19:45.144: INFO: (0) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 57.755926ms) Mar 2 00:19:45.146: INFO: (0) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 60.213721ms) Mar 2 00:19:45.153: INFO: (0) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 66.790316ms) Mar 2 00:19:45.155: INFO: (0) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 68.291376ms) Mar 2 00:19:45.156: INFO: (0) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 69.563812ms) Mar 2 00:19:45.162: INFO: (0) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 76.031199ms) Mar 2 00:19:45.162: INFO: (0) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 76.167167ms) Mar 2 00:19:45.198: INFO: (1) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 35.383553ms) Mar 2 00:19:45.211: INFO: (1) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 48.360719ms) Mar 2 00:19:45.211: INFO: (1) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 48.373774ms) Mar 2 00:19:45.211: INFO: (1) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 48.588461ms) Mar 2 00:19:45.211: INFO: (1) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 48.691777ms) Mar 2 00:19:45.211: INFO: (1) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 48.768904ms) Mar 2 00:19:45.211: INFO: (1) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 48.674695ms) Mar 2 00:19:45.211: INFO: (1) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 48.790235ms) Mar 2 00:19:45.212: INFO: (1) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test<... 
(200; 40.770179ms) Mar 2 00:19:45.264: INFO: (2) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 40.814053ms) Mar 2 00:19:45.264: INFO: (2) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 40.928569ms) Mar 2 00:19:45.265: INFO: (2) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 41.030643ms) Mar 2 00:19:45.265: INFO: (2) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 41.088814ms) Mar 2 00:19:45.265: INFO: (2) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 41.176941ms) Mar 2 00:19:45.265: INFO: (2) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 41.547505ms) Mar 2 00:19:45.265: INFO: (2) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 41.589204ms) Mar 2 00:19:45.268: INFO: (2) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 44.159343ms) Mar 2 00:19:45.268: INFO: (2) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 44.140497ms) Mar 2 00:19:45.269: INFO: (2) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 45.117783ms) Mar 2 00:19:45.272: INFO: (2) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 48.506325ms) Mar 2 00:19:45.273: INFO: (2) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 49.020552ms) Mar 2 00:19:45.273: INFO: (2) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 49.322976ms) Mar 2 00:19:45.303: INFO: (3) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 29.964134ms) Mar 2 00:19:45.304: INFO: (3) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 30.709609ms) Mar 2 00:19:45.318: INFO: (3) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 45.138582ms) Mar 2 00:19:45.318: INFO: (3) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 45.519705ms) Mar 2 00:19:45.319: INFO: (3) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 45.7063ms) Mar 2 00:19:45.319: INFO: (3) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 45.950574ms) Mar 2 00:19:45.319: INFO: (3) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 46.463427ms) Mar 2 00:19:45.320: INFO: (3) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... 
(200; 47.922607ms) Mar 2 00:19:45.321: INFO: (3) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 47.955219ms) Mar 2 00:19:45.321: INFO: (3) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 48.116165ms) Mar 2 00:19:45.322: INFO: (3) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 48.546783ms) Mar 2 00:19:45.329: INFO: (3) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 56.172049ms) Mar 2 00:19:45.330: INFO: (3) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 56.870675ms) Mar 2 00:19:45.333: INFO: (3) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 59.68651ms) Mar 2 00:19:45.333: INFO: (3) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 59.704023ms) Mar 2 00:19:45.354: INFO: (4) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 21.608442ms) Mar 2 00:19:45.381: INFO: (4) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test (200; 64.625487ms) Mar 2 00:19:45.398: INFO: (4) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 64.758059ms) Mar 2 00:19:45.398: INFO: (4) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 65.045605ms) Mar 2 00:19:45.398: INFO: (4) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 65.103354ms) Mar 2 00:19:45.398: INFO: (4) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 65.328852ms) Mar 2 00:19:45.398: INFO: (4) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 65.676462ms) Mar 2 00:19:45.403: INFO: (4) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 69.970834ms) Mar 2 00:19:45.411: INFO: (4) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 78.414464ms) Mar 2 00:19:45.416: INFO: (4) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 83.286523ms) Mar 2 00:19:45.420: INFO: (4) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 87.130802ms) Mar 2 00:19:45.420: INFO: (4) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 87.516313ms) Mar 2 00:19:45.421: INFO: (4) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 87.860917ms) Mar 2 00:19:45.463: INFO: (5) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 42.090766ms) Mar 2 00:19:45.464: INFO: (5) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 43.18849ms) Mar 2 00:19:45.477: INFO: (5) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test (200; 56.18912ms) Mar 2 00:19:45.477: INFO: (5) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 56.277188ms) Mar 2 00:19:45.477: INFO: (5) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... 
(200; 56.169684ms) Mar 2 00:19:45.477: INFO: (5) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 56.341499ms) Mar 2 00:19:45.477: INFO: (5) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 56.501774ms) Mar 2 00:19:45.477: INFO: (5) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 56.603898ms) Mar 2 00:19:45.478: INFO: (5) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 57.055455ms) Mar 2 00:19:45.485: INFO: (5) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 63.736218ms) Mar 2 00:19:45.488: INFO: (5) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 66.854358ms) Mar 2 00:19:45.489: INFO: (5) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 68.22516ms) Mar 2 00:19:45.489: INFO: (5) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 68.357431ms) Mar 2 00:19:45.491: INFO: (5) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 70.651546ms) Mar 2 00:19:45.491: INFO: (5) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 70.705238ms) Mar 2 00:19:45.512: INFO: (6) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 20.221669ms) Mar 2 00:19:45.513: INFO: (6) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 21.419934ms) Mar 2 00:19:45.535: INFO: (6) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 43.664924ms) Mar 2 00:19:45.539: INFO: (6) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 47.192321ms) Mar 2 00:19:45.548: INFO: (6) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 56.126743ms) Mar 2 00:19:45.548: INFO: (6) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 56.699821ms) Mar 2 00:19:45.549: INFO: (6) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 57.090542ms) Mar 2 00:19:45.549: INFO: (6) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 57.206572ms) Mar 2 00:19:45.549: INFO: (6) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 57.478118ms) Mar 2 00:19:45.549: INFO: (6) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test (200; 22.116006ms) Mar 2 00:19:45.595: INFO: (7) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 33.892643ms) Mar 2 00:19:45.595: INFO: (7) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 34.273255ms) Mar 2 00:19:45.607: INFO: (7) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 46.361134ms) Mar 2 00:19:45.608: INFO: (7) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 47.609463ms) Mar 2 00:19:45.609: INFO: (7) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 47.920563ms) Mar 2 00:19:45.609: INFO: (7) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... 
(200; 47.943216ms) Mar 2 00:19:45.609: INFO: (7) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 47.963425ms) Mar 2 00:19:45.609: INFO: (7) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 48.010143ms) Mar 2 00:19:45.609: INFO: (7) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test<... (200; 31.612353ms) Mar 2 00:19:45.651: INFO: (8) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 31.662819ms) Mar 2 00:19:45.655: INFO: (8) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 35.963985ms) Mar 2 00:19:45.664: INFO: (8) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 45.254291ms) Mar 2 00:19:45.664: INFO: (8) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 45.261385ms) Mar 2 00:19:45.665: INFO: (8) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 45.773968ms) Mar 2 00:19:45.665: INFO: (8) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 45.791652ms) Mar 2 00:19:45.665: INFO: (8) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 45.927309ms) Mar 2 00:19:45.665: INFO: (8) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 46.135333ms) Mar 2 00:19:45.665: INFO: (8) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... (200; 29.399763ms) Mar 2 00:19:45.702: INFO: (9) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 29.49844ms) Mar 2 00:19:45.717: INFO: (9) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 44.230097ms) Mar 2 00:19:45.717: INFO: (9) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 44.256487ms) Mar 2 00:19:45.717: INFO: (9) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 44.47961ms) Mar 2 00:19:45.717: INFO: (9) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 44.602425ms) Mar 2 00:19:45.717: INFO: (9) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test<... (200; 25.086625ms) Mar 2 00:19:45.766: INFO: (10) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 30.999591ms) Mar 2 00:19:45.768: INFO: (10) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 33.227801ms) Mar 2 00:19:45.779: INFO: (10) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 43.6939ms) Mar 2 00:19:45.779: INFO: (10) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... (200; 36.829558ms) Mar 2 00:19:45.840: INFO: (11) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 48.73011ms) Mar 2 00:19:45.840: INFO: (11) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test (200; 48.934058ms) Mar 2 00:19:45.840: INFO: (11) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... 
(200; 49.03022ms) Mar 2 00:19:45.841: INFO: (11) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 49.281286ms) Mar 2 00:19:45.841: INFO: (11) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 49.413889ms) Mar 2 00:19:45.841: INFO: (11) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 49.511685ms) Mar 2 00:19:45.844: INFO: (11) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 52.641466ms) Mar 2 00:19:45.846: INFO: (11) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 55.187439ms) Mar 2 00:19:45.846: INFO: (11) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 55.236252ms) Mar 2 00:19:45.849: INFO: (11) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 57.77958ms) Mar 2 00:19:45.850: INFO: (11) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 58.852547ms) Mar 2 00:19:45.851: INFO: (11) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 59.929702ms) Mar 2 00:19:45.884: INFO: (12) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 32.830195ms) Mar 2 00:19:45.888: INFO: (12) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... (200; 49.206415ms) Mar 2 00:19:45.901: INFO: (12) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 49.491807ms) Mar 2 00:19:45.901: INFO: (12) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 49.628526ms) Mar 2 00:19:45.901: INFO: (12) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 49.753914ms) Mar 2 00:19:45.901: INFO: (12) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 49.861789ms) Mar 2 00:19:45.902: INFO: (12) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 50.372248ms) Mar 2 00:19:45.902: INFO: (12) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 50.461377ms) Mar 2 00:19:45.902: INFO: (12) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 50.428976ms) Mar 2 00:19:45.902: INFO: (12) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 50.762649ms) Mar 2 00:19:45.906: INFO: (12) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 54.624641ms) Mar 2 00:19:45.909: INFO: (12) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 57.89524ms) Mar 2 00:19:45.910: INFO: (12) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 58.253479ms) Mar 2 00:19:45.915: INFO: (12) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 63.263822ms) Mar 2 00:19:45.915: INFO: (12) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 63.487175ms) Mar 2 00:19:45.958: INFO: (13) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... 
(200; 43.442894ms) Mar 2 00:19:45.961: INFO: (13) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 46.564108ms) Mar 2 00:19:45.979: INFO: (13) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 63.813626ms) Mar 2 00:19:45.981: INFO: (13) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 65.925424ms) Mar 2 00:19:45.981: INFO: (13) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... (200; 66.292571ms) Mar 2 00:19:45.981: INFO: (13) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 66.670679ms) Mar 2 00:19:45.982: INFO: (13) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 66.829391ms) Mar 2 00:19:45.982: INFO: (13) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 67.219932ms) Mar 2 00:19:46.015: INFO: (13) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 100.614008ms) Mar 2 00:19:46.017: INFO: (13) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 101.930308ms) Mar 2 00:19:46.017: INFO: (13) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 101.877207ms) Mar 2 00:19:46.017: INFO: (13) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 102.07415ms) Mar 2 00:19:46.017: INFO: (13) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 102.136488ms) Mar 2 00:19:46.017: INFO: (13) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 102.567477ms) Mar 2 00:19:46.063: INFO: (14) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 45.883005ms) Mar 2 00:19:46.064: INFO: (14) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 46.938238ms) Mar 2 00:19:46.065: INFO: (14) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 47.183594ms) Mar 2 00:19:46.077: INFO: (14) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 59.084096ms) Mar 2 00:19:46.077: INFO: (14) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 59.293214ms) Mar 2 00:19:46.077: INFO: (14) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 59.511186ms) Mar 2 00:19:46.077: INFO: (14) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 59.610776ms) Mar 2 00:19:46.077: INFO: (14) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 59.983152ms) Mar 2 00:19:46.077: INFO: (14) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test<... (200; 50.311713ms) Mar 2 00:19:46.148: INFO: (15) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test (200; 77.713803ms) Mar 2 00:19:46.164: INFO: (15) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 77.733341ms) Mar 2 00:19:46.165: INFO: (15) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 78.141446ms) Mar 2 00:19:46.165: INFO: (15) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... 
(200; 78.249471ms) Mar 2 00:19:46.166: INFO: (15) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 79.131003ms) Mar 2 00:19:46.166: INFO: (15) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 79.429761ms) Mar 2 00:19:46.169: INFO: (15) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 82.113496ms) Mar 2 00:19:46.169: INFO: (15) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 82.53709ms) Mar 2 00:19:46.169: INFO: (15) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 82.893987ms) Mar 2 00:19:46.176: INFO: (15) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 89.328955ms) Mar 2 00:19:46.178: INFO: (15) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 91.287583ms) Mar 2 00:19:46.179: INFO: (15) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 92.972211ms) Mar 2 00:19:46.233: INFO: (16) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 53.195748ms) Mar 2 00:19:46.237: INFO: (16) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 57.175213ms) Mar 2 00:19:46.237: INFO: (16) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... (200; 71.246697ms) Mar 2 00:19:46.253: INFO: (16) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 72.988553ms) Mar 2 00:19:46.253: INFO: (16) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 73.282681ms) Mar 2 00:19:46.253: INFO: (16) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 73.532486ms) Mar 2 00:19:46.253: INFO: (16) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 73.638476ms) Mar 2 00:19:46.253: INFO: (16) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 73.904562ms) Mar 2 00:19:46.254: INFO: (16) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 74.461459ms) Mar 2 00:19:46.258: INFO: (16) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 78.466142ms) Mar 2 00:19:46.269: INFO: (16) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 89.633603ms) Mar 2 00:19:46.279: INFO: (16) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 99.744246ms) Mar 2 00:19:46.281: INFO: (16) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 101.205451ms) Mar 2 00:19:46.284: INFO: (16) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 104.607078ms) Mar 2 00:19:46.284: INFO: (16) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 104.8499ms) Mar 2 00:19:46.328: INFO: (17) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 43.580434ms) Mar 2 00:19:46.347: INFO: (17) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... 
(200; 62.195724ms) Mar 2 00:19:46.364: INFO: (17) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 79.303502ms) Mar 2 00:19:46.364: INFO: (17) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 79.440832ms) Mar 2 00:19:46.365: INFO: (17) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 79.986598ms) Mar 2 00:19:46.365: INFO: (17) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:460/proxy/: tls baz (200; 80.000305ms) Mar 2 00:19:46.365: INFO: (17) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 80.241923ms) Mar 2 00:19:46.365: INFO: (17) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 80.834488ms) Mar 2 00:19:46.367: INFO: (17) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... (200; 83.460934ms) Mar 2 00:19:46.393: INFO: (17) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 108.787736ms) Mar 2 00:19:46.399: INFO: (17) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 115.026189ms) Mar 2 00:19:46.407: INFO: (17) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 122.3909ms) Mar 2 00:19:46.407: INFO: (17) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 122.506048ms) Mar 2 00:19:46.407: INFO: (17) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 122.832839ms) Mar 2 00:19:46.408: INFO: (17) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 123.025114ms) Mar 2 00:19:46.467: INFO: (18) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 59.660962ms) Mar 2 00:19:46.469: INFO: (18) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6/proxy/: test (200; 61.183262ms) Mar 2 00:19:46.487: INFO: (18) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 79.804843ms) Mar 2 00:19:46.488: INFO: (18) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 80.193201ms) Mar 2 00:19:46.488: INFO: (18) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 80.391146ms) Mar 2 00:19:46.488: INFO: (18) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:160/proxy/: foo (200; 80.761519ms) Mar 2 00:19:46.488: INFO: (18) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: ... (200; 81.007947ms) Mar 2 00:19:46.490: INFO: (18) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... 
(200; 82.25225ms) Mar 2 00:19:46.493: INFO: (18) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 85.303011ms) Mar 2 00:19:46.512: INFO: (18) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 104.554839ms) Mar 2 00:19:46.525: INFO: (18) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 117.026576ms) Mar 2 00:19:46.525: INFO: (18) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 117.393131ms) Mar 2 00:19:46.525: INFO: (18) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 117.543096ms) Mar 2 00:19:46.526: INFO: (18) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 118.880125ms) Mar 2 00:19:46.558: INFO: (19) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/: foo (200; 31.245237ms) Mar 2 00:19:46.563: INFO: (19) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:1080/proxy/: test<... (200; 36.274925ms) Mar 2 00:19:46.582: INFO: (19) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:1080/proxy/: ... (200; 55.908509ms) Mar 2 00:19:46.583: INFO: (19) /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:162/proxy/: bar (200; 56.622694ms) Mar 2 00:19:46.589: INFO: (19) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:443/proxy/: test (200; 67.769535ms) Mar 2 00:19:46.595: INFO: (19) /api/v1/namespaces/proxy-1579/pods/https:proxy-service-wt44l-85pp6:462/proxy/: tls qux (200; 68.572911ms) Mar 2 00:19:46.596: INFO: (19) /api/v1/namespaces/proxy-1579/pods/http:proxy-service-wt44l-85pp6:162/proxy/: bar (200; 68.971047ms) Mar 2 00:19:46.608: INFO: (19) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname1/proxy/: foo (200; 81.337623ms) Mar 2 00:19:46.608: INFO: (19) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname2/proxy/: bar (200; 81.726872ms) Mar 2 00:19:46.609: INFO: (19) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname2/proxy/: tls qux (200; 82.21018ms) Mar 2 00:19:46.609: INFO: (19) /api/v1/namespaces/proxy-1579/services/https:proxy-service-wt44l:tlsportname1/proxy/: tls baz (200; 82.87363ms) Mar 2 00:19:46.610: INFO: (19) /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/: foo (200; 83.073549ms) Mar 2 00:19:46.614: INFO: (19) /api/v1/namespaces/proxy-1579/services/http:proxy-service-wt44l:portname2/proxy/: bar (200; 87.27754ms) STEP: deleting ReplicationController proxy-service-wt44l in namespace proxy-1579, will wait for the garbage collector to delete the pods Mar 2 00:19:46.678: INFO: Deleting ReplicationController proxy-service-wt44l took: 7.462628ms Mar 2 00:19:46.779: INFO: Terminating ReplicationController proxy-service-wt44l pods took: 100.983409ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:02.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1579" for this suite. 
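[Editor's note] The proxy rounds logged above hit apiserver proxy subresource URLs of the form /api/v1/namespaces/NS/pods/[scheme:]NAME:PORT/proxy/ and /api/v1/namespaces/NS/services/NAME:PORTNAME/proxy/. As an illustration only (this is not code from the e2e suite), a minimal client-go sketch that issues the same kind of request; the namespace, pod, service and port values are copied from the log, while the kubeconfig path and error handling are assumptions.

```go
// Illustrative sketch only: issues the same kind of apiserver proxy requests
// that the rounds above exercise. Resource names come from the log; the
// kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/proxy-1579/pods/proxy-service-wt44l-85pp6:160/proxy/
	body, err := client.CoreV1().RESTClient().Get().
		Namespace("proxy-1579").
		Resource("pods").
		Name("proxy-service-wt44l-85pp6:160").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod proxy returned %q\n", string(body)) // "foo" per the log above

	// GET /api/v1/namespaces/proxy-1579/services/proxy-service-wt44l:portname1/proxy/
	body, err = client.CoreV1().RESTClient().Get().
		Namespace("proxy-1579").
		Resource("services").
		Name("proxy-service-wt44l:portname1").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("service proxy returned %q\n", string(body)) // "foo" per the log above
}
```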
• [SLOW TEST:41.824 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":8,"skipped":96,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:51.664: INFO: >>> kubeConfig: /tmp/kubeconfig-446130155 STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:51.699: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140" in namespace "security-context-test-6082" to be "Succeeded or Failed" Mar 2 00:19:51.707: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Pending", Reason="", readiness=false. Elapsed: 8.297914ms Mar 2 00:19:53.712: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012721421s Mar 2 00:19:55.717: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017909305s Mar 2 00:19:57.722: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023377329s Mar 2 00:19:59.727: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Pending", Reason="", readiness=false. Elapsed: 8.028272902s Mar 2 00:20:01.732: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033027224s Mar 2 00:20:03.737: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Pending", Reason="", readiness=false. Elapsed: 12.038432649s Mar 2 00:20:05.742: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.043102513s Mar 2 00:20:05.742: INFO: Pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140" satisfied condition "Succeeded or Failed" Mar 2 00:20:05.750: INFO: Got logs for pod "busybox-privileged-false-1b358dd4-807f-4ecb-837f-7ec38aa1e140": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:05.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6082" for this suite. • [SLOW TEST:14.096 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":72,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:47.836: INFO: >>> kubeConfig: /tmp/kubeconfig-1606110277 STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 2 00:20:05.986: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:06.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5312" for this suite. 
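[Editor's note] The Container Runtime test above verifies that with TerminationMessagePolicy set to FallbackToLogsOnError, a container that exits non-zero without writing a termination-message file gets its last log output ("DONE" in the log above) reported as the termination message. A hedged sketch of the kind of pod spec involved; the image, command, namespace and kubeconfig path are assumptions, not values taken from the suite.

```go
// Illustrative sketch of a container whose termination message falls back to
// its log output on error, the behaviour exercised by the test above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo DONE; exit 1"},
				// Nothing is written to terminationMessagePath, so on the
				// non-zero exit the tail of the log ("DONE") is used instead.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```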
• [SLOW TEST:18.182 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":140,"failed":0} SSSSSSSSSSSSS ------------------------------ Mar 2 00:20:06.026: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:06.026: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:06.026: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:06.026: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:06.026: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:06.026: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:06.026: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:06.026: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:20.106: INFO: >>> kubeConfig: /tmp/kubeconfig-3620257963 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Mar 2 00:19:20.149: INFO: observed Pod pod-test in namespace pods-6310 in phase Pending with labels: map[test-pod-static:true] & conditions [] Mar 2 00:19:20.156: INFO: observed Pod pod-test in namespace pods-6310 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:19:20 +0000 UTC }] Mar 2 00:19:39.395: INFO: Found Pod pod-test in namespace pods-6310 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:19:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:19:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:19:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:19:20 +0000 UTC }] STEP: 
patching the Pod with a new Label and updated data Mar 2 00:19:39.423: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Mar 2 00:19:39.545: INFO: observed event type ADDED Mar 2 00:19:39.545: INFO: observed event type MODIFIED Mar 2 00:19:39.545: INFO: observed event type MODIFIED Mar 2 00:19:39.545: INFO: observed event type MODIFIED Mar 2 00:19:39.545: INFO: observed event type MODIFIED Mar 2 00:19:39.557: INFO: observed event type MODIFIED Mar 2 00:20:05.986: INFO: observed event type MODIFIED Mar 2 00:20:06.386: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:06.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6310" for this suite. • [SLOW TEST:46.300 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":8,"skipped":187,"failed":0} Mar 2 00:20:06.405: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:06.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:06.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:06.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:06.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:06.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:06.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:06.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:56.541: INFO: >>> kubeConfig: /tmp/kubeconfig-1722668237 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-projected-all-test-volume-96c4d12a-1c8e-4655-8a41-d99b2d50bb87 STEP: Creating secret with name secret-projected-all-test-volume-9aaf1437-5a22-4690-9467-696d7dfd7769 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 2 00:19:56.609: INFO: Waiting up to 5m0s for pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517" in namespace "projected-59" to be "Succeeded or Failed" 
Mar 2 00:19:56.613: INFO: Pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215032ms Mar 2 00:19:58.618: INFO: Pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009110321s Mar 2 00:20:00.627: INFO: Pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017953251s Mar 2 00:20:02.634: INFO: Pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024793867s Mar 2 00:20:04.638: INFO: Pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029370171s Mar 2 00:20:06.644: INFO: Pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.034969112s STEP: Saw pod success Mar 2 00:20:06.644: INFO: Pod "projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517" satisfied condition "Succeeded or Failed" Mar 2 00:20:06.650: INFO: Trying to get logs from node k3s-server-1-mid5l7 pod projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517 container projected-all-volume-test: STEP: delete the pod Mar 2 00:20:06.673: INFO: Waiting for pod projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517 to disappear Mar 2 00:20:06.681: INFO: Pod projected-volume-f74cef99-c995-4fa0-b9d1-cb56a916c517 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:06.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-59" for this suite. • [SLOW TEST:10.149 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":208,"failed":0} Mar 2 00:20:06.691: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:06.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:06.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:06.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:06.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:06.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:06.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:06.691: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:56.519: INFO: >>> kubeConfig: /tmp/kubeconfig-428285787 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account 
to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Mar 2 00:19:56.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 create -f -' Mar 2 00:19:56.658: INFO: stderr: "" Mar 2 00:19:56.658: INFO: stdout: "pod/pause created\n" Mar 2 00:19:56.658: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 2 00:19:56.658: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3235" to be "running and ready" Mar 2 00:19:56.668: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.829691ms Mar 2 00:19:58.673: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015216874s Mar 2 00:20:00.677: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019347799s Mar 2 00:20:02.683: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024995159s Mar 2 00:20:04.687: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029396952s Mar 2 00:20:06.691: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.033574485s Mar 2 00:20:06.691: INFO: Pod "pause" satisfied condition "running and ready" Mar 2 00:20:06.691: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: adding the label testing-label with value testing-label-value to a pod Mar 2 00:20:06.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 label pods pause testing-label=testing-label-value' Mar 2 00:20:06.742: INFO: stderr: "" Mar 2 00:20:06.742: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 2 00:20:06.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 get pod pause -L testing-label' Mar 2 00:20:06.791: INFO: stderr: "" Mar 2 00:20:06.791: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 2 00:20:06.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 label pods pause testing-label-' Mar 2 00:20:06.843: INFO: stderr: "" Mar 2 00:20:06.844: INFO: stdout: "pod/pause unlabeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 2 00:20:06.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 get pod pause -L testing-label' Mar 2 00:20:06.887: INFO: stderr: "" Mar 2 00:20:06.888: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1339 STEP: using delete to clean up resources Mar 2 00:20:06.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 delete --grace-period=0 --force 
-f -' Mar 2 00:20:06.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:20:06.943: INFO: stdout: "pod \"pause\" force deleted\n" Mar 2 00:20:06.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 get rc,svc -l name=pause --no-headers' Mar 2 00:20:07.000: INFO: stderr: "No resources found in kubectl-3235 namespace.\n" Mar 2 00:20:07.000: INFO: stdout: "" Mar 2 00:20:07.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-428285787 --namespace=kubectl-3235 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 2 00:20:07.048: INFO: stderr: "" Mar 2 00:20:07.048: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:07.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3235" for this suite. • [SLOW TEST:10.538 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1331 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":8,"skipped":138,"failed":0} Mar 2 00:20:07.058: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:07.058: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:07.058: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:07.058: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:07.058: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:07.058: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:07.058: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:07.058: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:47.088: INFO: >>> kubeConfig: /tmp/kubeconfig-2977999873 STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-1030408d-fbf8-4862-b0e2-2d8cde82da16 STEP: Creating a pod to test consume secrets Mar 2 00:19:47.217: INFO: Waiting up to 5m0s for pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc" in namespace "secrets-3693" to be "Succeeded or 
Failed" Mar 2 00:19:47.228: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.753665ms Mar 2 00:19:49.233: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015284361s Mar 2 00:19:51.237: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019436464s Mar 2 00:19:53.243: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025934698s Mar 2 00:19:55.248: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031137079s Mar 2 00:19:57.256: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.038663368s Mar 2 00:19:59.262: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.044502241s Mar 2 00:20:01.267: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.049971909s Mar 2 00:20:03.272: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.055072404s Mar 2 00:20:05.277: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.05945418s Mar 2 00:20:07.281: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.063757764s STEP: Saw pod success Mar 2 00:20:07.281: INFO: Pod "pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc" satisfied condition "Succeeded or Failed" Mar 2 00:20:07.284: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc container secret-volume-test: STEP: delete the pod Mar 2 00:20:07.307: INFO: Waiting for pod pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc to disappear Mar 2 00:20:07.313: INFO: Pod pod-secrets-539eccd8-8aff-4bb0-bef1-f7c0d4e010fc no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:07.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3693" for this suite. 
• [SLOW TEST:20.234 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":329,"failed":0} Mar 2 00:20:07.323: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:07.323: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:07.323: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:07.323: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:07.323: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:07.323: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:07.323: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:07.323: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:24.962: INFO: >>> kubeConfig: /tmp/kubeconfig-1801756564 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d01308a5-25e3-49f3-b581-69df9a4adf8b STEP: Creating the pod Mar 2 00:17:25.000: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:27.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:29.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:31.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:33.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:35.006: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:37.005: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:39.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:41.010: INFO: The status of Pod 
pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:43.005: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:45.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:47.006: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:49.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:51.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:53.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:55.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:57.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:59.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:01.005: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:03.005: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:05.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:07.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:09.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:11.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:13.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:15.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:17.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:19.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:21.026: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:23.003: INFO: The status of Pod 
pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:25.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:27.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:29.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:31.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:33.005: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:35.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:37.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:39.003: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:41.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:43.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:45.006: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:47.005: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:49.004: INFO: The status of Pod pod-projected-configmaps-0302f748-5aa5-45d5-a088-51a8946faae7 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-d01308a5-25e3-49f3-b581-69df9a4adf8b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:07.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5752" for this suite. 
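[Editor's note] The Projected configMap test above creates a pod with a projected configMap volume and then updates the ConfigMap, polling until the kubelet refreshes the mounted file ("waiting to observe update in volume"). A hedged sketch of that update step; the ConfigMap name and namespace come from the log, while the key/value and kubeconfig path are assumptions.

```go
// Illustrative sketch of the update step the test waits on: mutate the
// ConfigMap backing a projected volume so the kubelet re-syncs the mounted file.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "projected-5752"
	name := "projected-configmap-test-upd-d01308a5-25e3-49f3-b581-69df9a4adf8b"

	cm, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // assumed key/value; the test flips the content and polls the mounted file

	if _, err := client.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// A consumer would then re-read the file mounted from the projected volume
	// until it reflects the new value; the kubelet refreshes it on its sync period.
}
```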
• [SLOW TEST:162.742 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":84,"failed":0} Mar 2 00:20:07.704: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:07.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:07.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:07.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:07.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:07.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:07.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:07.704: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:42.079: INFO: >>> kubeConfig: /tmp/kubeconfig-1090799654 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating Agnhost RC Mar 2 00:19:42.203: INFO: namespace kubectl-7294 Mar 2 00:19:42.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1090799654 --namespace=kubectl-7294 create -f -' Mar 2 00:19:42.653: INFO: stderr: "" Mar 2 00:19:42.653: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Mar 2 00:19:43.658: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:43.658: INFO: Found 0 / 1 Mar 2 00:19:44.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:44.657: INFO: Found 0 / 1 Mar 2 00:19:45.669: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:45.669: INFO: Found 0 / 1 Mar 2 00:19:46.661: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:46.661: INFO: Found 0 / 1 Mar 2 00:19:47.660: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:47.660: INFO: Found 0 / 1 Mar 2 00:19:48.658: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:48.658: INFO: Found 0 / 1 Mar 2 00:19:49.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:49.657: INFO: Found 0 / 1 Mar 2 00:19:50.656: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:50.656: INFO: Found 0 / 1 Mar 2 00:19:51.658: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:51.658: INFO: Found 0 / 1 Mar 2 00:19:52.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:52.657: INFO: Found 0 / 1 Mar 2 00:19:53.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:53.657: INFO: Found 0 / 1 Mar 2 00:19:54.656: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:54.656: INFO: Found 0 / 1 Mar 2 00:19:55.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:55.657: INFO: Found 0 / 1 Mar 2 00:19:56.660: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:56.660: INFO: Found 0 / 1 Mar 2 00:19:57.656: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:57.656: INFO: Found 0 / 1 Mar 2 00:19:58.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:58.657: INFO: Found 0 / 1 Mar 2 00:19:59.660: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:19:59.660: INFO: Found 0 / 1 Mar 2 00:20:00.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:20:00.657: INFO: Found 0 / 1 Mar 2 00:20:01.660: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:20:01.660: INFO: Found 0 / 1 Mar 2 00:20:02.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:20:02.657: INFO: Found 0 / 1 Mar 2 00:20:03.657: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:20:03.657: INFO: Found 1 / 1 Mar 2 00:20:03.657: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 2 00:20:03.661: INFO: Selector matched 1 pods for map[app:agnhost] Mar 2 00:20:03.661: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 2 00:20:03.661: INFO: wait on agnhost-primary startup in kubectl-7294 Mar 2 00:20:03.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1090799654 --namespace=kubectl-7294 logs agnhost-primary-j2bwj agnhost-primary' Mar 2 00:20:03.711: INFO: stderr: "" Mar 2 00:20:03.711: INFO: stdout: "Paused\n" STEP: exposing RC Mar 2 00:20:03.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1090799654 --namespace=kubectl-7294 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Mar 2 00:20:03.770: INFO: stderr: "" Mar 2 00:20:03.770: INFO: stdout: "service/rm2 exposed\n" Mar 2 00:20:03.776: INFO: Service rm2 in namespace kubectl-7294 found. 
STEP: exposing service Mar 2 00:20:05.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1090799654 --namespace=kubectl-7294 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Mar 2 00:20:05.883: INFO: stderr: "" Mar 2 00:20:05.883: INFO: stdout: "service/rm3 exposed\n" Mar 2 00:20:05.895: INFO: Service rm3 in namespace kubectl-7294 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:07.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7294" for this suite. • [SLOW TEST:25.845 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1248 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":4,"skipped":148,"failed":0} Mar 2 00:20:07.925: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:07.925: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:07.925: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:07.925: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:07.925: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:07.925: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:07.925: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:07.925: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:36.913: INFO: >>> kubeConfig: /tmp/kubeconfig-4203063242 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 2 00:19:37.291: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:19:39.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:41.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:43.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:45.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:47.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:49.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:51.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:53.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:55.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:57.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:59.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:01.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:03.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:05.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:20:08.356: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:08.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1799" for this suite. STEP: Destroying namespace "webhook-1799-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:31.556 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":13,"skipped":255,"failed":0} Mar 2 00:20:08.470: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:08.470: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:08.470: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:08.470: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:08.470: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:08.470: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:08.470: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:08.470: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:42.694: INFO: >>> kubeConfig: /tmp/kubeconfig-3970272666 STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:42.892: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:44.903: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:46.908: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:48.896: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:50.897: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:52.900: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:54.896: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = 
true) Mar 2 00:19:56.896: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:58.898: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:00.897: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:02.897: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:04.897: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:06.897: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:08.896: INFO: The status of Pod busybox-scheduling-5f04b2fa-fa87-4c64-97b3-72f3a73e00ca is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:08.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8377" for this suite. • [SLOW TEST:26.222 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":134,"failed":0} Mar 2 00:20:08.916: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:08.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:08.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:08.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:08.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:08.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:08.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:08.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:27.774: INFO: >>> kubeConfig: /tmp/kubeconfig-3656243505 STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:57.858: INFO: Deleting pod "var-expansion-b35913ee-5afd-4026-b5d6-7521e1f3286e" in namespace "var-expansion-561" Mar 2 00:19:57.864: INFO: Wait up to 5m0s for pod "var-expansion-b35913ee-5afd-4026-b5d6-7521e1f3286e" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:09.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-561" for this suite. • [SLOW TEST:42.107 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":9,"skipped":103,"failed":0} Mar 2 00:20:09.882: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:09.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:09.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:09.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:09.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:09.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:09.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:09.882: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:50.154: INFO: >>> kubeConfig: /tmp/kubeconfig-528366706 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name projected-configmap-test-volume-5b9ed296-81d4-440d-9ace-5630a1595768 STEP: Creating a pod to test consume configMaps Mar 2 00:19:50.191: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6" in namespace "projected-8404" to be "Succeeded or Failed" Mar 2 00:19:50.196: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.213127ms Mar 2 00:19:52.201: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009669882s Mar 2 00:19:54.205: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.013523418s Mar 2 00:19:56.213: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021680372s Mar 2 00:19:58.218: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027233438s Mar 2 00:20:00.223: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.032176844s Mar 2 00:20:02.228: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.037266404s Mar 2 00:20:04.232: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.04070721s Mar 2 00:20:06.236: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.045188479s Mar 2 00:20:08.242: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.050789931s Mar 2 00:20:10.246: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.055329403s STEP: Saw pod success Mar 2 00:20:10.246: INFO: Pod "pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6" satisfied condition "Succeeded or Failed" Mar 2 00:20:10.250: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6 container agnhost-container: STEP: delete the pod Mar 2 00:20:10.268: INFO: Waiting for pod pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6 to disappear Mar 2 00:20:10.273: INFO: Pod pod-projected-configmaps-12a98a06-5109-4502-aab1-66e4cbb26fc6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:10.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8404" for this suite. 
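The projected-ConfigMap steps above (create a ConfigMap, mount it through a projected volume, read the key back from the pod) can be reproduced by hand roughly as follows; the names demo-cm and projected-cm-demo and the busybox image are illustrative, not taken from the log:

kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.36
    # Print the file that the projected volume materializes from the ConfigMap key.
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
kubectl logs projected-cm-demo   # expected output: value-1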
• [SLOW TEST:20.129 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":257,"failed":0} Mar 2 00:20:10.284: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:10.284: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:10.284: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:10.284: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:10.284: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:10.284: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:10.284: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:10.284: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:17:18.785: INFO: >>> kubeConfig: /tmp/kubeconfig-1867053252 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-5763 Mar 2 00:17:18.810: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:20.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:22.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:24.816: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:26.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:28.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:30.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:32.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:34.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:36.816: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:38.816: INFO: 
The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:40.822: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:42.816: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:44.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:46.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:48.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:50.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:52.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:54.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:56.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:17:58.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:00.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:02.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:04.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:06.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:08.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:10.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:12.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:14.813: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:16.815: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:18.814: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:20.813: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Mar 2 00:18:20.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1867053252 --namespace=services-5763 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Mar 2 00:18:20.920: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 2 00:18:20.920: INFO: stdout: "iptables" Mar 2 00:18:20.920: INFO: proxyMode: iptables Mar 2 00:18:20.930: INFO: Waiting for pod kube-proxy-mode-detector to disappear Mar 2 00:18:20.935: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-5763 STEP: creating replication controller affinity-clusterip-timeout in namespace services-5763 
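The service created in the two steps above is a plain ClusterIP Service with client-IP session affinity and an affinity timeout. A hedged sketch of what such a spec looks like (the selector, target port, and timeoutSeconds values are assumptions, since the log does not print the manifest), followed by the kind of repeated-curl probe the test runs later from execpod-affinityb5ltj:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
  namespace: services-5763
spec:
  selector:
    name: affinity-clusterip-timeout
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP          # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10             # illustrative; affinity expires after this idle period
EOF
# Within the timeout, every request from the same client should hit the same backend:
kubectl exec -n services-5763 execpod-affinityb5ltj -- /bin/sh -c \
  'for i in $(seq 0 15); do curl -s --connect-timeout 2 http://10.43.45.0:80/; echo; done'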
I0302 00:18:20.956435 291 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5763, replica count: 3 I0302 00:18:24.007773 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:27.009851 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:30.010913 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:33.011258 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:36.013481 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:39.014563 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:42.014919 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:45.016150 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:48.018243 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:51.018529 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:54.019351 291 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:18:54.025: INFO: Creating new exec pod Mar 2 00:19:33.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1867053252 --namespace=services-5763 exec execpod-affinityb5ltj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Mar 2 00:19:33.228: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Mar 2 00:19:33.228: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:33.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1867053252 --namespace=services-5763 exec execpod-affinityb5ltj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.45.0 80' Mar 2 00:19:33.372: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.45.0 80\nConnection to 10.43.45.0 80 port [tcp/http] succeeded!\n" Mar 2 00:19:33.372: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:33.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1867053252 --namespace=services-5763 exec execpod-affinityb5ltj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://10.43.45.0:80/ ; done' Mar 2 00:19:33.575: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n" Mar 2 00:19:33.575: INFO: stdout: "\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m\naffinity-clusterip-timeout-r2n7m" Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Received response from host: affinity-clusterip-timeout-r2n7m Mar 2 00:19:33.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1867053252 --namespace=services-5763 exec execpod-affinityb5ltj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.43.45.0:80/' Mar 2 00:19:33.670: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n" Mar 2 00:19:33.670: INFO: stdout: "affinity-clusterip-timeout-r2n7m" Mar 2 00:19:53.670: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-1867053252 --namespace=services-5763 exec execpod-affinityb5ltj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.43.45.0:80/' Mar 2 00:19:53.788: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.43.45.0:80/\n" Mar 2 00:19:53.788: INFO: stdout: "affinity-clusterip-timeout-vzmpm" Mar 2 00:19:53.788: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5763, will wait for the garbage collector to delete the pods Mar 2 00:19:53.882: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.790954ms Mar 2 00:19:53.983: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.242935ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:10.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5763" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:171.943 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0} Mar 2 00:20:10.729: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:10.730: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:10.730: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:10.730: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:10.730: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:10.730: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:10.730: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:10.730: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:54.700: INFO: >>> kubeConfig: /tmp/kubeconfig-656862609 STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:10.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9740" for this suite. • [SLOW TEST:16.187 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":8,"skipped":122,"failed":0} Mar 2 00:20:10.887: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:10.887: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:10.887: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:10.887: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:10.887: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:10.887: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:10.887: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:10.887: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:18.834: INFO: >>> kubeConfig: /tmp/kubeconfig-2625432154 STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod pod-subpath-test-secret-bm5b STEP: Creating a pod to test atomic-volume-subpath Mar 2 00:19:18.911: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-bm5b" in namespace 
"subpath-4024" to be "Succeeded or Failed" Mar 2 00:19:18.919: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.513184ms Mar 2 00:19:20.928: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016845144s Mar 2 00:19:22.933: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021284734s Mar 2 00:19:24.937: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025566893s Mar 2 00:19:26.942: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031126814s Mar 2 00:19:28.950: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.038977843s Mar 2 00:19:30.955: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0432584s Mar 2 00:19:32.960: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.048947928s Mar 2 00:19:34.966: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.054424079s Mar 2 00:19:36.984: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.072264088s Mar 2 00:19:39.009: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.098136584s Mar 2 00:19:41.021: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.10963561s Mar 2 00:19:43.039: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.127685599s Mar 2 00:19:45.043: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.131765234s Mar 2 00:19:47.056: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 28.144993723s Mar 2 00:19:49.060: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 30.148921243s Mar 2 00:19:51.065: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 32.153706598s Mar 2 00:19:53.078: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 34.166440329s Mar 2 00:19:55.083: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 36.171478388s Mar 2 00:19:57.092: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 38.180313241s Mar 2 00:19:59.098: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 40.186778923s Mar 2 00:20:01.102: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 42.19112112s Mar 2 00:20:03.107: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 44.195565048s Mar 2 00:20:05.115: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 46.203420779s Mar 2 00:20:07.119: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 48.208240542s Mar 2 00:20:09.124: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Running", Reason="", readiness=true. Elapsed: 50.212495531s Mar 2 00:20:11.128: INFO: Pod "pod-subpath-test-secret-bm5b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 52.216547881s STEP: Saw pod success Mar 2 00:20:11.128: INFO: Pod "pod-subpath-test-secret-bm5b" satisfied condition "Succeeded or Failed" Mar 2 00:20:11.131: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-subpath-test-secret-bm5b container test-container-subpath-secret-bm5b: STEP: delete the pod Mar 2 00:20:11.152: INFO: Waiting for pod pod-subpath-test-secret-bm5b to disappear Mar 2 00:20:11.161: INFO: Pod pod-subpath-test-secret-bm5b no longer exists STEP: Deleting pod pod-subpath-test-secret-bm5b Mar 2 00:20:11.161: INFO: Deleting pod "pod-subpath-test-secret-bm5b" in namespace "subpath-4024" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:11.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4024" for this suite. • [SLOW TEST:52.340 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":11,"skipped":225,"failed":0} Mar 2 00:20:11.174: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:11.174: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:11.174: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:11.174: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:11.174: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:11.174: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:11.174: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:11.174: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:47.220: INFO: >>> kubeConfig: /tmp/kubeconfig-394016705 STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:20:05.366: INFO: Deleting pod "var-expansion-18c79c0b-0286-4866-927e-e8025fe94a24" in namespace "var-expansion-8522" Mar 2 00:20:05.372: INFO: Wait up to 5m0s for pod "var-expansion-18c79c0b-0286-4866-927e-e8025fe94a24" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:11.383: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "var-expansion-8522" for this suite. • [SLOW TEST:24.184 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":7,"skipped":118,"failed":0} Mar 2 00:20:11.405: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:11.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:11.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:11.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:11.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:11.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:11.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:11.405: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:20:01.316: INFO: >>> kubeConfig: /tmp/kubeconfig-2153458730 STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1361.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1361.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1361.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1361.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:20:11.371: INFO: DNS probes using dns-1361/dns-test-ee7c5d74-4adf-4e87-932f-7f605445acdf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:11.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1361" for this suite. 
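The DNS probes above loop `test -n "$(getent hosts ...)"` against entries the kubelet writes into the pod's own /etc/hosts (the pod hostname and, when spec.subdomain is set, its fully qualified service form). An approximate one-off check of the same mechanism, with a hypothetical pod name and the busybox image standing in for the test's probe images:

# Verify the kubelet-managed /etc/hosts contains an entry for the pod's own hostname.
kubectl run hosts-check --restart=Never --image=busybox:1.36 --command -- \
  sh -c 'grep "$(hostname)" /etc/hosts && echo OK'
kubectl logs hosts-check    # last line should be OK
kubectl delete pod hosts-check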
• [SLOW TEST:10.094 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0} Mar 2 00:20:11.410: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:11.411: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:11.411: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:11.411: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:11.411: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:11.411: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:11.411: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:11.411: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:53.757: INFO: >>> kubeConfig: /tmp/kubeconfig-3654524011 STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-map-e559e2bf-2c62-4b35-b273-2c07fffe9755 STEP: Creating a pod to test consume secrets Mar 2 00:19:53.809: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd" in namespace "projected-6183" to be "Succeeded or Failed" Mar 2 00:19:53.815: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.660605ms Mar 2 00:19:55.821: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012425345s Mar 2 00:19:57.825: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016205615s Mar 2 00:19:59.830: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021334241s Mar 2 00:20:01.838: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029520509s Mar 2 00:20:03.842: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.033664329s Mar 2 00:20:05.866: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.057093157s Mar 2 00:20:07.871: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.062109647s Mar 2 00:20:09.875: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.066219582s Mar 2 00:20:11.881: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.072720272s STEP: Saw pod success Mar 2 00:20:11.881: INFO: Pod "pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd" satisfied condition "Succeeded or Failed" Mar 2 00:20:11.885: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd container projected-secret-volume-test: STEP: delete the pod Mar 2 00:20:11.926: INFO: Waiting for pod pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd to disappear Mar 2 00:20:11.941: INFO: Pod pod-projected-secrets-18fbe85b-5156-4e59-a209-159c4e0ff8bd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:11.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6183" for this suite. • [SLOW TEST:18.197 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":131,"failed":0} Mar 2 00:20:11.955: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:11.955: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:11.955: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:11.955: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:11.955: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:11.955: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:11.955: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:11.955: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:20:05.762: INFO: >>> kubeConfig: /tmp/kubeconfig-446130155 STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RecreateDeployment should delete old pods and create new ones [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:20:05.779: INFO: Creating deployment "test-recreate-deployment" Mar 2 00:20:05.790: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 2 00:20:05.811: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 2 00:20:07.820: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 2 00:20:07.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-594f666cd9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:09.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 20, 5, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-594f666cd9\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:11.830: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 2 00:20:11.840: INFO: Updating deployment test-recreate-deployment Mar 2 00:20:11.840: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Mar 2 00:20:11.963: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9352 31e47591-d797-468c-aefd-f2ae5dbd4725 15950 2 2023-03-02 00:20:05 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-03-02 00:20:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update 
apps/v1 2023-03-02 00:20:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004cbd8b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-03-02 00:20:11 +0000 UTC,LastTransitionTime:2023-03-02 00:20:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5b99bd5487" is progressing.,LastUpdateTime:2023-03-02 00:20:11 +0000 UTC,LastTransitionTime:2023-03-02 00:20:05 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 2 00:20:11.967: INFO: New ReplicaSet "test-recreate-deployment-5b99bd5487" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5b99bd5487 deployment-9352 25963159-e60e-4ff4-910c-63f83fd8d530 15948 1 2023-03-02 00:20:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 31e47591-d797-468c-aefd-f2ae5dbd4725 0xc00134b007 0xc00134b008}] [] [{k3s Update apps/v1 2023-03-02 00:20:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31e47591-d797-468c-aefd-f2ae5dbd4725\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:20:11 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5b99bd5487,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00134b0c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:20:11.967: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 2 00:20:11.967: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-594f666cd9 deployment-9352 c63d0a4e-5d97-4689-83fc-f5fd117b8d5b 15936 2 2023-03-02 00:20:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:594f666cd9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 31e47591-d797-468c-aefd-f2ae5dbd4725 0xc00134aec7 0xc00134aec8}] [] [{k3s Update apps/v1 2023-03-02 00:20:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31e47591-d797-468c-aefd-f2ae5dbd4725\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 
00:20:11 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 594f666cd9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:594f666cd9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00134af98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:20:11.971: INFO: Pod "test-recreate-deployment-5b99bd5487-t8ftd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-t8ftd test-recreate-deployment-5b99bd5487- deployment-9352 affc2d52-0e30-4d7b-9986-25c2006ba593 15943 0 2023-03-02 00:20:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 25963159-e60e-4ff4-910c-63f83fd8d530 0xc000a3fc67 0xc000a3fc68}] [] [{k3s Update v1 2023-03-02 00:20:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25963159-e60e-4ff4-910c-63f83fd8d530\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wd92g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wd92g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSch
eduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:11.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9352" for this suite. • [SLOW TEST:6.222 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":73,"failed":0} Mar 2 00:20:11.984: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:11.984: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:11.984: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:11.984: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:11.984: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:11.984: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:11.984: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:11.984: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:42.320: INFO: >>> kubeConfig: /tmp/kubeconfig-3780238515 STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:14.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7286" for this suite. 
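
The Job test above ("should run a job to completion when tasks sometimes fail and are locally restarted") exercises restartPolicy: OnFailure — a failed container is restarted in place by the kubelet rather than by the Job controller creating replacement pods, and the Job still reaches its completion count. A minimal Go sketch of a Job with that shape; the name, counts, image, and command below are illustrative assumptions, not values taken from this run:

// Build a Job whose pods are retried locally via restartPolicy: OnFailure
// and print it as JSON. Sketch only; values are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	completions := int32(4) // how many successful pods the Job needs
	parallelism := int32(2) // how many pods may run at once

	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"}, // hypothetical name
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure: the kubelet restarts a failed container in the
					// same pod instead of the Job controller making a new pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "worker",
						Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.39",
						Command: []string{"/bin/sh", "-c", "exit 0"}, // placeholder workload
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}
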
• [SLOW TEST:32.158 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":6,"skipped":50,"failed":0} Mar 2 00:20:14.479: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:14.479: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:14.479: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:14.479: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:14.479: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:14.479: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:14.479: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:14.479: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.697: INFO: >>> kubeConfig: /tmp/kubeconfig-3761205576 STEP: Building a namespace api object, basename container-probe W0302 00:15:01.994705 421 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:01.994: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod liveness-56957831-0b06-4189-9a80-834657d2d97d in namespace container-probe-5565 Mar 2 00:16:14.067: INFO: Started pod liveness-56957831-0b06-4189-9a80-834657d2d97d in namespace container-probe-5565 STEP: checking the pod's current state and verifying that restartCount is present Mar 2 00:16:14.069: INFO: Initial restart count of pod liveness-56957831-0b06-4189-9a80-834657d2d97d is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:14.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5565" for this suite. 
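
The probe test above checks the negative case: with a TCPSocket liveness probe pointed at a port the container actually listens on (8080), the kubelet's checks keep passing and restartCount stays at 0 for the whole observation window. A minimal Go sketch of a pod with that probe shape; the pod name, image, and timings are illustrative assumptions:

// Pod with a tcp:8080 liveness probe against a server that really listens
// on 8080, so no restarts are expected. Sketch only; values are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				Args:  []string{"netexec", "--http-port=8080"}, // serve on 8080
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
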
• [SLOW TEST:313.048 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0} Mar 2 00:20:14.746: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:14.746: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:14.746: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:14.746: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:14.746: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:14.746: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:14.746: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:14.746: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:10.997: INFO: >>> kubeConfig: /tmp/kubeconfig-1252640407 STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:19:11.028: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:13.033: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:15.032: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:17.033: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:19.044: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:21.032: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:23.032: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:25.031: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:27.033: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running 
(with Ready = true) Mar 2 00:19:29.044: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:31.034: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:33.044: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:35.032: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:37.051: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:39.063: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:41.047: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:43.044: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:45.034: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:47.045: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:49.032: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:51.033: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:53.033: INFO: The status of Pod server-envvars-14a971d2-37a4-4b6d-990b-ece28aa20a81 is Running (Ready = true) Mar 2 00:19:53.084: INFO: Waiting up to 5m0s for pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2" in namespace "pods-6090" to be "Succeeded or Failed" Mar 2 00:19:53.093: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551464ms Mar 2 00:19:55.099: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014941119s Mar 2 00:19:57.104: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020152182s Mar 2 00:19:59.111: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026993287s Mar 2 00:20:01.115: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031373667s Mar 2 00:20:03.120: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.035670916s Mar 2 00:20:05.125: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.041256417s Mar 2 00:20:07.130: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.04558074s Mar 2 00:20:09.134: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.049966878s Mar 2 00:20:11.140: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.055660373s Mar 2 00:20:13.148: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.06400005s Mar 2 00:20:15.155: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.071424976s STEP: Saw pod success Mar 2 00:20:15.155: INFO: Pod "client-envvars-90f21065-db95-4724-a272-ac72d9c869b2" satisfied condition "Succeeded or Failed" Mar 2 00:20:15.159: INFO: Trying to get logs from node k3s-agent-1-mid5l7 pod client-envvars-90f21065-db95-4724-a272-ac72d9c869b2 container env3cont: STEP: delete the pod Mar 2 00:20:15.178: INFO: Waiting for pod client-envvars-90f21065-db95-4724-a272-ac72d9c869b2 to disappear Mar 2 00:20:15.183: INFO: Pod client-envvars-90f21065-db95-4724-a272-ac72d9c869b2 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:15.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6090" for this suite. • [SLOW TEST:64.198 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":198,"failed":0} Mar 2 00:20:15.196: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:15.196: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:15.196: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:15.196: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:15.196: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:15.196: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:15.196: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:15.196: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:38.546: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Mar 2 00:19:38.947: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to 
be ready Mar 2 00:19:39.034: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 2 00:19:41.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:43.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:45.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:47.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 2 00:19:49.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:51.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:53.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:55.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:57.078: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:19:59.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:01.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:03.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:05.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:07.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 2 00:20:09.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 2, 0, 19, 39, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 2 00:20:12.099: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Mar 2 00:20:12.105: INFO: >>> kubeConfig: /tmp/kubeconfig-3471101731 STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:15.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5410" for this suite. STEP: Destroying namespace "webhook-5410-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:36.716 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":6,"skipped":116,"failed":0} Mar 2 00:20:15.263: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:15.263: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:15.263: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:15.263: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:15.263: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:15.263: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:15.263: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:15.263: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:15.102: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Mar 2 00:19:33.185: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4744 PodName:var-expansion-055932d2-e6c6-44a1-b4e5-abb354a8e511 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:33.185: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 Mar 2 00:19:33.185: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:33.185: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/var-expansion-4744/pods/var-expansion-055932d2-e6c6-44a1-b4e5-abb354a8e511/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) STEP: test for file in mounted path Mar 2 00:19:33.298: INFO: ExecWithOptions 
{Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4744 PodName:var-expansion-055932d2-e6c6-44a1-b4e5-abb354a8e511 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 2 00:19:33.298: INFO: >>> kubeConfig: /tmp/kubeconfig-3226875953 Mar 2 00:19:33.298: INFO: ExecWithOptions: Clientset creation Mar 2 00:19:33.298: INFO: ExecWithOptions: execute(POST https://10.43.0.1:443/api/v1/namespaces/var-expansion-4744/pods/var-expansion-055932d2-e6c6-44a1-b4e5-abb354a8e511/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) STEP: updating the annotation value Mar 2 00:19:33.913: INFO: Successfully updated pod "var-expansion-055932d2-e6c6-44a1-b4e5-abb354a8e511" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Mar 2 00:19:33.936: INFO: Deleting pod "var-expansion-055932d2-e6c6-44a1-b4e5-abb354a8e511" in namespace "var-expansion-4744" Mar 2 00:19:33.954: INFO: Wait up to 5m0s for pod "var-expansion-055932d2-e6c6-44a1-b4e5-abb354a8e511" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:15.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4744" for this suite. • [SLOW TEST:60.901 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":5,"skipped":152,"failed":0} Mar 2 00:20:16.003: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:16.003: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:16.003: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:16.003: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:16.003: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:16.003: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:16.003: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:16.003: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:59.099: INFO: >>> kubeConfig: /tmp/kubeconfig-710959753 STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating the pod Mar 2 00:19:59.145: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:01.150: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:03.150: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:05.151: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:07.151: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:09.150: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:11.153: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:13.152: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:15.151: INFO: The status of Pod annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb is Running (Ready = true) Mar 2 00:20:15.673: INFO: Successfully updated pod "annotationupdate8f7e407f-b651-46a1-8bca-26ae98b7b9bb" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:17.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-20" for this suite. 
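
The Downward API test above covers the refresh path: metadata.annotations is projected into a volume file, the test patches the pod's annotations ("Successfully updated pod" above), and the kubelet rewrites the file so the container sees the new values without a restart. A minimal Go sketch of a pod with such a projection; the names, annotation, and container command are illustrative assumptions:

// Pod that mounts its own metadata.annotations via a downward API volume.
// The kubelet updates /etc/podinfo/annotations when the annotations change.
// Sketch only; values are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo", // hypothetical name
			Annotations: map[string]string{"builder": "alice"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				Args:  []string{"pause"}, // placeholder; a real consumer would read the projected file
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
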
• [SLOW TEST:18.601 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":297,"failed":0} Mar 2 00:20:17.700: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:17.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:17.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:17.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:17.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:17.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:17.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:17.700: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:21.707: INFO: >>> kubeConfig: /tmp/kubeconfig-2645107774 STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 STEP: create the container to handle the HTTPGet hook request. 
Mar 2 00:19:21.813: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:23.818: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:25.818: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:27.830: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:29.829: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:31.817: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:33.828: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:35.817: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:37.830: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:39.828: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:41.834: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:43.820: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:45.847: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:47.819: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:49.817: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:51.823: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the pod with lifecycle hook Mar 2 00:19:51.864: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:53.867: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:55.870: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:57.870: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:59.868: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 2 00:19:59.885: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:19:59.890: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:01.891: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:01.907: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:03.891: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:03.895: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:05.890: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:05.904: INFO: Pod 
pod-with-poststart-http-hook still exists Mar 2 00:20:07.891: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:07.896: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:09.890: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:09.895: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:11.890: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:11.897: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:13.891: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:13.895: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:15.891: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:15.896: INFO: Pod pod-with-poststart-http-hook still exists Mar 2 00:20:17.892: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 2 00:20:17.896: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:17.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7647" for this suite. • [SLOW TEST:56.199 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":121,"failed":0} Mar 2 00:20:17.906: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:17.907: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:17.907: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:17.907: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:17.907: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:17.907: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:17.907: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:17.907: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:33.375: INFO: >>> kubeConfig: /tmp/kubeconfig-2274336506 STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 STEP: create the container to handle the 
HTTPGet hook request. Mar 2 00:19:33.563: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:35.576: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:37.572: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:39.575: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:41.577: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:43.568: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:45.609: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:47.578: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the pod with lifecycle hook Mar 2 00:19:47.602: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:49.606: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:51.609: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:53.609: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:55.607: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:57.606: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:19:59.608: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:01.611: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:03.606: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:05.608: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:07.607: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:09.606: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:11.607: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 2 00:20:11.625: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 2 00:20:11.628: INFO: Pod pod-with-poststart-exec-hook still exists Mar 2 00:20:13.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 2 00:20:13.634: INFO: Pod pod-with-poststart-exec-hook still exists Mar 2 00:20:15.630: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 2 00:20:15.634: INFO: Pod pod-with-poststart-exec-hook still exists Mar 2 00:20:17.629: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Mar 2 00:20:17.633: INFO: Pod pod-with-poststart-exec-hook still exists Mar 2 00:20:19.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 2 00:20:19.633: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:19.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4517" for this suite. • [SLOW TEST:46.269 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":157,"failed":0} Mar 2 00:20:19.644: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:19.644: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:19.644: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:19.644: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:19.644: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:19.644: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:19.644: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:19.644: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:35.508: INFO: >>> kubeConfig: /tmp/kubeconfig-2464080081 STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should block an eviction until the PDB is updated to allow it [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pdb that targets all three pods in a test replica set STEP: Waiting for the pdb to be processed STEP: First trying to evict a pod which shouldn't be evictable STEP: Waiting for all pods to be running Mar 2 00:19:37.612: INFO: pods: 0 < 3 Mar 2 00:19:39.633: INFO: running pods: 0 < 3 Mar 2 00:19:41.640: INFO: running pods: 0 < 3 Mar 2 00:19:43.618: INFO: running pods: 0 < 3 Mar 2 00:19:45.619: INFO: running pods: 0 < 3 Mar 2 00:19:47.624: INFO: running pods: 0 < 3 Mar 2 00:19:49.616: INFO: running pods: 0 < 3 Mar 2 00:19:51.636: INFO: running pods: 1 < 3 Mar 2 
00:19:53.617: INFO: running pods: 1 < 3 Mar 2 00:19:55.619: INFO: running pods: 1 < 3 Mar 2 00:19:57.617: INFO: running pods: 2 < 3 Mar 2 00:19:59.619: INFO: running pods: 2 < 3 Mar 2 00:20:01.621: INFO: running pods: 2 < 3 Mar 2 00:20:03.616: INFO: running pods: 2 < 3 Mar 2 00:20:05.617: INFO: running pods: 2 < 3 Mar 2 00:20:07.616: INFO: running pods: 2 < 3 STEP: locating a running pod STEP: Updating the pdb to allow a pod to be evicted STEP: Waiting for the pdb to be processed STEP: Trying to evict the same pod we tried earlier which should now be evictable STEP: Waiting for all pods to be running STEP: Waiting for the pdb to observed all healthy pods STEP: Patching the pdb to disallow a pod to be evicted STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Mar 2 00:20:13.703: INFO: running pods: 2 < 3 Mar 2 00:20:15.709: INFO: running pods: 2 < 3 Mar 2 00:20:17.709: INFO: running pods: 2 < 3 STEP: locating a running pod STEP: Deleting the pdb to allow a pod to be evicted STEP: Waiting for the pdb to be deleted STEP: Trying to evict the same pod we tried earlier which should now be evictable STEP: Waiting for all pods to be running [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:19.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8925" for this suite. • [SLOW TEST:44.262 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should block an eviction until the PDB is updated to allow it [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":10,"skipped":201,"failed":0} Mar 2 00:20:19.771: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:19.771: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:19.771: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:19.771: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:19.771: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:19.771: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:19.771: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:19.771: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:07.347: INFO: >>> kubeConfig: /tmp/kubeconfig-178066528 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] 
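Stepping back from the DisruptionController run that just finished above: the behaviour it checks is that the eviction subresource is refused while a PodDisruptionBudget would be violated, and accepted once the budget is relaxed or deleted. A minimal, hand-runnable version of such a budget (name, selector, and counts are illustrative, not the test's generated ones):

kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb
spec:
  minAvailable: 3                 # with exactly 3 matching pods running, disruptionsAllowed stays 0
  selector:
    matchLabels:
      app: demo-rs
EOF

# Voluntary evictions (kubectl drain, the Eviction API) are rejected while the budget would be violated;
# loosening it, as the test does with a patch, lets the same eviction through.
kubectl patch pdb demo-pdb --type merge -p '{"spec":{"minAvailable":2}}'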
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-528 Mar 2 00:18:07.408: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:09.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:11.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:13.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:15.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:17.411: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:19.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:21.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:23.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:25.415: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:27.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:29.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:31.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:33.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:35.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:37.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:39.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:41.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:43.412: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:45.413: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:18:47.414: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Mar 2 00:18:47.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Mar 2 00:18:47.551: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 2 00:18:47.551: INFO: stdout: "iptables" Mar 2 00:18:47.551: INFO: proxyMode: iptables Mar 2 00:18:47.592: INFO: Waiting for pod kube-proxy-mode-detector to disappear Mar 2 00:18:47.617: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-528 STEP: creating replication controller 
affinity-nodeport-timeout in namespace services-528 I0302 00:18:47.706492 212 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-528, replica count: 3 I0302 00:18:50.758292 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:53.758683 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:56.758906 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:18:59.759153 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:02.759655 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:05.761677 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:08.761985 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:11.762904 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:14.763914 212 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:19:14.774: INFO: Creating new exec pod Mar 2 00:19:43.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec execpod-affinityk56rd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Mar 2 00:19:43.914: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Mar 2 00:19:43.914: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:43.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec execpod-affinityk56rd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.74.47 80' Mar 2 00:19:43.990: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.74.47 80\nConnection to 10.43.74.47 80 port [tcp/http] succeeded!\n" Mar 2 00:19:43.990: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:43.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec execpod-affinityk56rd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.12 30174' Mar 2 00:19:44.090: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.12 30174\nConnection to 172.17.0.12 30174 port [tcp/*] succeeded!\n" Mar 2 00:19:44.090: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:44.090: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec execpod-affinityk56rd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.8 30174' Mar 2 00:19:44.164: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.8 30174\nConnection to 172.17.0.8 30174 port [tcp/*] succeeded!\n" Mar 2 00:19:44.164: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:19:44.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec execpod-affinityk56rd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.12:30174/ ; done' Mar 2 00:19:44.349: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n" Mar 2 00:19:44.349: INFO: stdout: "\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4\naffinity-nodeport-timeout-792k4" Mar 2 00:19:44.349: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.349: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.349: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.349: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 
00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Received response from host: affinity-nodeport-timeout-792k4 Mar 2 00:19:44.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec execpod-affinityk56rd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.12:30174/' Mar 2 00:19:44.441: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n" Mar 2 00:19:44.441: INFO: stdout: "affinity-nodeport-timeout-792k4" Mar 2 00:20:04.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-178066528 --namespace=services-528 exec execpod-affinityk56rd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.12:30174/' Mar 2 00:20:04.576: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.17.0.12:30174/\n" Mar 2 00:20:04.576: INFO: stdout: "affinity-nodeport-timeout-f95nc" Mar 2 00:20:04.576: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-528, will wait for the garbage collector to delete the pods Mar 2 00:20:04.652: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.402144ms Mar 2 00:20:04.752: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.697329ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:20.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-528" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:133.391 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":116,"failed":0} Mar 2 00:20:20.739: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:20.739: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:20.739: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:20.739: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:20.739: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:20.739: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:20.739: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:20.739: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:20:02.798: INFO: >>> kubeConfig: /tmp/kubeconfig-1948352521 STEP: Building a namespace api object, basename 
watch STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 2 00:20:02.820: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15608 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:20:02.820: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15608 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 2 00:20:02.837: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15615 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:20:02.837: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15615 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 2 00:20:02.844: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15617 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:20:02.844: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15617 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 2 00:20:02.850: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15618 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:20:02.850: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8837 ed228a0d-c1b1-475a-a611-cd9362965ebb 15618 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 2 00:20:02.853: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8837 f6d2685a-1b73-4f97-b58e-c9506e7c56e5 15619 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:20:02.853: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8837 f6d2685a-1b73-4f97-b58e-c9506e7c56e5 15619 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 2 00:20:12.863: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8837 f6d2685a-1b73-4f97-b58e-c9506e7c56e5 15995 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 2 00:20:12.864: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8837 f6d2685a-1b73-4f97-b58e-c9506e7c56e5 15995 0 2023-03-02 00:20:02 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:22.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8837" for this suite. 
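The watch sequence above is, at bottom, three label-selected watches on ConfigMaps in one namespace (label A, label B, and "A or B"); a similar ADDED/MODIFIED/DELETED stream can be reproduced by hand. The label key and values below are taken from the log; the rest is an illustrative sketch.

# Terminal 1: watch configmaps carrying label A.
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A \
  --watch --output-watch-events

# Terminal 2: generate the events the watcher should observe.
kubectl create configmap e2e-watch-test-configmap-a --from-literal=mutation=0
kubectl label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A
kubectl patch configmap e2e-watch-test-configmap-a --type merge -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a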
• [SLOW TEST:20.081 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":9,"skipped":106,"failed":0} Mar 2 00:20:22.879: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:22.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:22.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:22.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:22.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:22.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:22.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:22.880: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:32.219: INFO: >>> kubeConfig: /tmp/kubeconfig-578814310 STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Mar 2 00:19:32.259: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:32.259: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:32.273: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:32.273: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:32.291: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:32.291: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:32.341: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:32.341: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 2 00:19:44.296: INFO: observed Deployment 
test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment-static:true] Mar 2 00:19:44.297: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment-static:true] Mar 2 00:19:59.592: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Mar 2 00:19:59.598: INFO: observed event type ADDED STEP: waiting for Replicas to scale Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 0 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.599: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.609: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.609: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.635: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.635: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:19:59.661: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:19:59.661: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:19:59.673: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:19:59.673: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:20:01.935: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:20:01.935: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:20:01.997: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 STEP: listing Deployments Mar 2 00:20:02.006: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Mar 2 00:20:02.016: INFO: observed 
Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Mar 2 00:20:02.023: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:02.039: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:02.082: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:02.121: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:02.152: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:02.173: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:16.398: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:16.806: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:16.862: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:16.891: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Mar 2 00:20:23.320: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 1 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 3 Mar 2 00:20:23.381: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:20:23.382: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 2 Mar 2 00:20:23.382: INFO: observed Deployment test-deployment in namespace deployment-1339 with ReadyReplicas 3 STEP: deleting the Deployment Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 
00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED Mar 2 00:20:23.396: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Mar 2 00:20:23.408: INFO: Log out all the ReplicaSets if there is no deployment created Mar 2 00:20:23.413: INFO: ReplicaSet "test-deployment-6d7ffcf7fb": &ReplicaSet{ObjectMeta:{test-deployment-6d7ffcf7fb deployment-1339 f757712a-83d3-47c6-91cf-00ce552f1adb 15539 3 2023-03-02 00:19:32 +0000 UTC map[pod-template-hash:6d7ffcf7fb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 4e1d8ec9-afb3-4d0a-a9a9-c83464d26698 0xc00457a187 0xc00457a188}] [] [{k3s Update apps/v1 2023-03-02 00:19:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1d8ec9-afb3-4d0a-a9a9-c83464d26698\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:20:01 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6d7ffcf7fb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:6d7ffcf7fb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00457a220 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:20:23.421: INFO: ReplicaSet 
"test-deployment-854fdc678": &ReplicaSet{ObjectMeta:{test-deployment-854fdc678 deployment-1339 1e1c9af3-f109-46cd-9049-55b2b1f65422 16407 2 2023-03-02 00:20:02 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 4e1d8ec9-afb3-4d0a-a9a9-c83464d26698 0xc00457a287 0xc00457a288}] [] [{k3s Update apps/v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1d8ec9-afb3-4d0a-a9a9-c83464d26698\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:20:16 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 854fdc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00457a320 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:20:23.427: INFO: pod: "test-deployment-854fdc678-8hwfj": &Pod{ObjectMeta:{test-deployment-854fdc678-8hwfj test-deployment-854fdc678- deployment-1339 54c7ab7e-a383-4746-8290-50b27972169f 16176 0 2023-03-02 00:20:02 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 1e1c9af3-f109-46cd-9049-55b2b1f65422 0xc0046445e7 0xc0046445e8}] [] [{k3s Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c9af3-f109-46cd-9049-55b2b1f65422\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:20:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.1.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4r2ts,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4r2ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeN
ame:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.42.1.31,StartTime:2023-03-02 00:20:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:20:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://7fcc75e8cf29dfb3d776de7c126e94b5e7143315dbbab1b33579df06e8c57149,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.1.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:20:23.427: INFO: pod: "test-deployment-854fdc678-7jt9w": &Pod{ObjectMeta:{test-deployment-854fdc678-7jt9w test-deployment-854fdc678- deployment-1339 c9576f3d-dced-4e47-8f99-e821e83ceeea 16406 0 2023-03-02 00:20:16 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 1e1c9af3-f109-46cd-9049-55b2b1f65422 0xc004644a87 0xc004644a88}] [] [{k3s Update v1 2023-03-02 00:20:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e1c9af3-f109-46cd-9049-55b2b1f65422\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:20:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6nfzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nfzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeN
ame:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.59,StartTime:2023-03-02 00:20:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:20:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0168cb5a2b3d18dd407d6b2b278d9e6330c92af3fae0aebddb8b03c5a4e04aee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:20:23.427: INFO: ReplicaSet "test-deployment-5ddd8b47d8": &ReplicaSet{ObjectMeta:{test-deployment-5ddd8b47d8 deployment-1339 3c8fc599-3ea3-4107-8700-2f5348451839 16414 4 2023-03-02 00:19:59 +0000 UTC map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 4e1d8ec9-afb3-4d0a-a9a9-c83464d26698 0xc00457a387 0xc00457a388}] [] [{k3s Update apps/v1 2023-03-02 00:19:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1d8ec9-afb3-4d0a-a9a9-c83464d26698\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {k3s Update apps/v1 2023-03-02 00:20:23 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 5ddd8b47d8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.6 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00457a420 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 2 00:20:23.432: INFO: pod: "test-deployment-5ddd8b47d8-fglg4": &Pod{ObjectMeta:{test-deployment-5ddd8b47d8-fglg4 test-deployment-5ddd8b47d8- deployment-1339 5edc6249-e885-4855-9dc2-87e5af7cdbe9 16181 0 2023-03-02 00:20:02 +0000 UTC 2023-03-02 00:20:17 +0000 UTC 0xc001c1df18 map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-5ddd8b47d8 3c8fc599-3ea3-4107-8700-2f5348451839 0xc001c1df47 0xc001c1df48}] [] [{k3s Update v1 2023-03-02 00:20:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c8fc599-3ea3-4107-8700-2f5348451839\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:20:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.1.30\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t4mt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4mt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-agent-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operat
or:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.42.1.30,StartTime:2023-03-02 00:20:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:20:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/rancher/mirrored-pause:3.6,ImageID:docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893,ContainerID:containerd://5231cce1282a055ad5e10434de317c31cf479007f3a294c0404f12baf79ba030,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.1.30,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 2 00:20:23.433: INFO: pod: "test-deployment-5ddd8b47d8-dqgtl": &Pod{ObjectMeta:{test-deployment-5ddd8b47d8-dqgtl test-deployment-5ddd8b47d8- deployment-1339 7665aa5b-8a60-4173-8cff-d986abf8f048 16411 0 2023-03-02 00:19:59 +0000 UTC 2023-03-02 00:20:24 +0000 UTC 0xc00481a120 map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-5ddd8b47d8 3c8fc599-3ea3-4107-8700-2f5348451839 0xc00481a157 0xc00481a158}] [] [{k3s Update v1 2023-03-02 00:19:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c8fc599-3ea3-4107-8700-2f5348451839\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {k3s Update v1 2023-03-02 00:20:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.42.0.54\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j8z9f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8z9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k3s-server-1-mid5l7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Opera
tor:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:19:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:20:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-03-02 00:19:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.42.0.54,StartTime:2023-03-02 00:19:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-03-02 00:20:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/rancher/mirrored-pause:3.6,ImageID:docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893,ContainerID:containerd://12fc4a3e571ac720f9c0eda78c964407c97dbd54aeab52d138b0de2233eea3be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.42.0.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:23.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1339" for this suite. 
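The ReplicaSet and Pod dumps above are the Deployment controller's bookkeeping for the lifecycle test: the ReplicaSet carrying k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 is being scaled up while the older one still holding the k8s.gcr.io/pause:3.6 template sits at spec.replicas=0, each tied to its Pods by the pod-template-hash label and annotated with deployment.kubernetes.io/desired-replicas and deployment.kubernetes.io/revision. A rough stand-alone equivalent of that lifecycle with plain kubectl, not the suite's own code (the deploy-demo namespace is a placeholder; the two images are the ones that appear in the dump):
  # create a 2-replica Deployment, then change its pod template
  kubectl create namespace deploy-demo
  kubectl -n deploy-demo create deployment test-deployment --replicas=2 \
    --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
  kubectl -n deploy-demo rollout status deployment/test-deployment
  # updating the template creates a second ReplicaSet with a new pod-template-hash
  # and scales the old one down to 0, as seen in the dump above
  kubectl -n deploy-demo set image deployment/test-deployment '*=k8s.gcr.io/pause:3.6'
  kubectl -n deploy-demo rollout status deployment/test-deployment
  # both ReplicaSets remain; the current one carries the bumped
  # deployment.kubernetes.io/revision annotation
  kubectl -n deploy-demo get replicasets -L pod-template-hash
  kubectl delete namespace deploy-demo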
• [SLOW TEST:51.226 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:35.441: INFO: >>> kubeConfig: /tmp/kubeconfig-2840419169 STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3491.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3491.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3491.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3491.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:20:11.521: INFO: DNS probes using dns-test-0e09c220-b26f-4b25-9292-bd67ffdea699 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3491.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3491.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3491.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3491.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:20:17.576: INFO: DNS probes using dns-test-bfe59dc5-0d9d-4a03-8165-39b57ea9de58 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3491.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3491.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3491.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3491.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 2 00:20:23.632: INFO: DNS probes using dns-test-23da69ff-a545-462e-b449-2a3e923f2c04 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:23.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3491" for this 
suite. • [SLOW TEST:48.239 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":8,"skipped":333,"failed":0} Mar 2 00:20:23.681: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:23.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:23.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:23.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:23.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:23.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:23.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:23.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":12,"skipped":190,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:57.298: INFO: >>> kubeConfig: /tmp/kubeconfig-3274437701 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service multi-endpoint-test in namespace services-3430 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3430 to expose endpoints map[] Mar 2 00:19:57.336: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Mar 2 00:19:58.343: INFO: successfully validated that service multi-endpoint-test in namespace services-3430 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3430 Mar 2 00:19:58.358: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:00.363: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:02.365: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3430 to expose endpoints map[pod1:[100]] Mar 2 00:20:02.380: INFO: successfully validated that service multi-endpoint-test in namespace services-3430 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-3430 Mar 2 00:20:02.392: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:04.397: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:06.399: 
INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:08.397: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:10.397: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 2 00:20:12.397: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3430 to expose endpoints map[pod1:[100] pod2:[101]] Mar 2 00:20:12.416: INFO: successfully validated that service multi-endpoint-test in namespace services-3430 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Checking if the Service forwards traffic to pods Mar 2 00:20:12.416: INFO: Creating new exec pod Mar 2 00:20:23.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3274437701 --namespace=services-3430 exec execpod5m6hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Mar 2 00:20:23.532: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Mar 2 00:20:23.532: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:23.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3274437701 --namespace=services-3430 exec execpod5m6hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.242.206 80' Mar 2 00:20:23.612: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.242.206 80\nConnection to 10.43.242.206 80 port [tcp/http] succeeded!\n" Mar 2 00:20:23.612: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:23.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3274437701 --namespace=services-3430 exec execpod5m6hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Mar 2 00:20:23.744: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Mar 2 00:20:23.744: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:23.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3274437701 --namespace=services-3430 exec execpod5m6hw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.242.206 81' Mar 2 00:20:23.816: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.242.206 81\nConnection to 10.43.242.206 81 port [tcp/*] succeeded!\n" Mar 2 00:20:23.816: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" STEP: Deleting pod pod1 in namespace services-3430 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3430 to expose endpoints map[pod2:[101]] Mar 2 00:20:24.870: INFO: successfully validated that service multi-endpoint-test in namespace services-3430 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-3430 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3430 to expose endpoints map[] Mar 2 00:20:24.910: INFO: successfully validated that service multi-endpoint-test in namespace services-3430 exposes endpoints map[] [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:24.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3430" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:27.663 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":13,"skipped":190,"failed":0} Mar 2 00:20:24.962: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:24.962: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:24.962: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:24.962: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:24.962: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:24.962: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:24.962: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:24.962: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:02.192: INFO: >>> kubeConfig: /tmp/kubeconfig-2095741199 STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:32.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6245" for this suite. 
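The Sysctls spec that just finished only needs a pod whose securityContext requests the kernel.shm_rmid_forced sysctl (part of the kubelet's default "safe" set) and a container that prints the resulting kernel value before exiting. A minimal hand-rolled version of that check, assuming a generic busybox image and the placeholder pod name sysctl-demo rather than the suite's own fixtures:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: sysctl-demo
  spec:
    restartPolicy: Never
    securityContext:
      sysctls:
      - name: kernel.shm_rmid_forced   # safe sysctl, allowed without extra kubelet flags
        value: "1"
    containers:
    - name: check
      image: busybox:1.35
      command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
  EOF
  # once the container exits, the pod phase should be Succeeded and the log should read "1"
  kubectl get pod sysctl-demo -o jsonpath='{.status.phase}'
  kubectl logs sysctl-demo
  kubectl delete pod sysctl-demo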
• [SLOW TEST:150.050 seconds] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":4,"skipped":60,"failed":0} Mar 2 00:20:32.243: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:32.243: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:32.243: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:32.243: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:32.243: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:32.243: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:32.243: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:32.243: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:01.852: INFO: >>> kubeConfig: /tmp/kubeconfig-2820862176 STEP: Building a namespace api object, basename statefulset W0302 00:15:02.490295 441 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Mar 2 00:15:02.490: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-7827 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a new StatefulSet Mar 2 00:15:02.565: INFO: Found 0 stateful pods, waiting for 3 Mar 2 00:15:12.567: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:15:22.567: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:15:32.570: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:15:42.571: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:15:52.567: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:16:02.570: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:16:12.569: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:16:22.568: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:16:32.568: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:16:42.568: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:16:52.568: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:16:52.568: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:16:52.568: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:17:02.572: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:17:02.572: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:17:02.572: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:17:12.570: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:17:12.570: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:17:12.570: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Mar 2 00:17:12.595: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 2 00:17:22.623: INFO: Updating stateful set ss2 Mar 2 00:17:22.628: INFO: Waiting for Pod statefulset-7827/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:17:32.635: INFO: Waiting for Pod statefulset-7827/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:17:42.637: INFO: Waiting for Pod statefulset-7827/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:17:52.635: INFO: Waiting for Pod statefulset-7827/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:18:02.638: INFO: Waiting for Pod statefulset-7827/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb STEP: Restoring Pods to the correct revision when they are deleted 
Mar 2 00:18:12.720: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:18:22.726: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:18:32.725: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:18:42.728: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:18:52.726: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:18:52.726: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:18:52.726: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:02.725: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:02.725: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:02.725: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 2 00:19:02.749: INFO: Updating stateful set ss2 Mar 2 00:19:02.755: INFO: Waiting for Pod statefulset-7827/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:19:12.763: INFO: Waiting for Pod statefulset-7827/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:19:22.792: INFO: Updating stateful set ss2 Mar 2 00:19:22.800: INFO: Waiting for StatefulSet statefulset-7827/ss2 to complete update Mar 2 00:19:22.800: INFO: Waiting for Pod statefulset-7827/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:19:32.827: INFO: Waiting for StatefulSet statefulset-7827/ss2 to complete update Mar 2 00:19:32.827: INFO: Waiting for Pod statefulset-7827/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:19:42.846: INFO: Waiting for StatefulSet statefulset-7827/ss2 to complete update Mar 2 00:19:42.846: INFO: Waiting for Pod statefulset-7827/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:19:52.808: INFO: Waiting for StatefulSet statefulset-7827/ss2 to complete update Mar 2 00:19:52.808: INFO: Waiting for Pod statefulset-7827/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Mar 2 00:20:02.811: INFO: Waiting for StatefulSet statefulset-7827/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Mar 2 00:20:12.810: INFO: Deleting all statefulset in ns statefulset-7827 Mar 2 00:20:12.814: INFO: Scaling statefulset ss2 to 0 Mar 2 00:20:32.839: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:20:32.843: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:32.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7827" for this suite. 
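Everything the StatefulSet spec above exercises hinges on spec.updateStrategy.rollingUpdate.partition: with the partition at or above the replica count a template change only records a new controller revision (the two revisions named in the log, ss2-5f8764d585 and ss2-57bbdd95cb), a partition of replicas-1 updates just the highest ordinal as the canary, and walking the partition down to 0 produces the phased rollout. A sketch of the same sequence with kubectl, assuming a 3-replica StatefulSet named ss2 in a placeholder sts-demo namespace and the image bump quoted in the log; the partition values illustrate the pattern rather than the suite's exact steps:
  # park the partition above the replica count: the template change below is
  # recorded as a new revision but no pod is touched
  kubectl -n sts-demo patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
  kubectl -n sts-demo set image statefulset/ss2 \
    '*=k8s.gcr.io/e2e-test-images/httpd:2.4.39-2'
  # canary: partition=2 rolls only the highest ordinal, ss2-2
  kubectl -n sts-demo patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
  kubectl -n sts-demo get pods -L controller-revision-hash
  # phased rollout: keep lowering the partition until it reaches 0
  kubectl -n sts-demo patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
  kubectl -n sts-demo rollout status statefulset/ss2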
• [SLOW TEST:331.053 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0} Mar 2 00:20:32.905: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:32.905: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:32.905: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:32.905: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:32.905: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:32.905: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:32.905: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:32.905: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:18.467: INFO: >>> kubeConfig: /tmp/kubeconfig-60432830 STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a replication controller Mar 2 00:19:18.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 create -f -' Mar 2 00:19:18.982: INFO: stderr: "" Mar 2 00:19:18.982: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 2 00:19:18.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:19.039: INFO: stderr: "" Mar 2 00:19:19.039: INFO: stdout: "update-demo-nautilus-2wpmq update-demo-nautilus-7zd6l " Mar 2 00:19:19.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-2wpmq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:19.092: INFO: stderr: "" Mar 2 00:19:19.092: INFO: stdout: "" Mar 2 00:19:19.092: INFO: update-demo-nautilus-2wpmq is created but not running Mar 2 00:19:24.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:24.137: INFO: stderr: "" Mar 2 00:19:24.137: INFO: stdout: "update-demo-nautilus-2wpmq update-demo-nautilus-7zd6l " Mar 2 00:19:24.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-2wpmq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:24.183: INFO: stderr: "" Mar 2 00:19:24.183: INFO: stdout: "" Mar 2 00:19:24.183: INFO: update-demo-nautilus-2wpmq is created but not running Mar 2 00:19:29.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:29.248: INFO: stderr: "" Mar 2 00:19:29.248: INFO: stdout: "update-demo-nautilus-2wpmq update-demo-nautilus-7zd6l " Mar 2 00:19:29.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-2wpmq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:29.311: INFO: stderr: "" Mar 2 00:19:29.311: INFO: stdout: "" Mar 2 00:19:29.311: INFO: update-demo-nautilus-2wpmq is created but not running Mar 2 00:19:34.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:34.363: INFO: stderr: "" Mar 2 00:19:34.363: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " Mar 2 00:19:34.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:34.420: INFO: stderr: "" Mar 2 00:19:34.420: INFO: stdout: "" Mar 2 00:19:34.420: INFO: update-demo-nautilus-7zd6l is created but not running Mar 2 00:19:39.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:39.472: INFO: stderr: "" Mar 2 00:19:39.472: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " Mar 2 00:19:39.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:39.539: INFO: stderr: "" Mar 2 00:19:39.539: INFO: stdout: "" Mar 2 00:19:39.539: INFO: update-demo-nautilus-7zd6l is created but not running Mar 2 00:19:44.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:44.586: INFO: stderr: "" Mar 2 00:19:44.586: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " Mar 2 00:19:44.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:44.629: INFO: stderr: "" Mar 2 00:19:44.629: INFO: stdout: "" Mar 2 00:19:44.629: INFO: update-demo-nautilus-7zd6l is created but not running Mar 2 00:19:49.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:49.673: INFO: stderr: "" Mar 2 00:19:49.673: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " Mar 2 00:19:49.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:49.719: INFO: stderr: "" Mar 2 00:19:49.719: INFO: stdout: "" Mar 2 00:19:49.719: INFO: update-demo-nautilus-7zd6l is created but not running Mar 2 00:19:54.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:54.764: INFO: stderr: "" Mar 2 00:19:54.764: INFO: stdout: "update-demo-nautilus-2wpmq update-demo-nautilus-7zd6l " Mar 2 00:19:54.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-2wpmq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:54.809: INFO: stderr: "" Mar 2 00:19:54.809: INFO: stdout: "true" Mar 2 00:19:54.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-2wpmq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:19:54.852: INFO: stderr: "" Mar 2 00:19:54.852: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:19:54.852: INFO: validating pod update-demo-nautilus-2wpmq Mar 2 00:19:54.858: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:19:54.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 2 00:19:54.858: INFO: update-demo-nautilus-2wpmq is verified up and running Mar 2 00:19:54.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:19:54.900: INFO: stderr: "" Mar 2 00:19:54.900: INFO: stdout: "true" Mar 2 00:19:54.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:19:54.944: INFO: stderr: "" Mar 2 00:19:54.944: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:19:54.944: INFO: validating pod update-demo-nautilus-7zd6l Mar 2 00:19:54.950: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:19:54.950: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 2 00:19:54.950: INFO: update-demo-nautilus-7zd6l is verified up and running STEP: scaling down the replication controller Mar 2 00:19:54.951: INFO: scanned /root for discovery docs: Mar 2 00:19:54.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Mar 2 00:19:56.014: INFO: stderr: "" Mar 2 00:19:56.014: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 2 00:19:56.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:19:56.060: INFO: stderr: "" Mar 2 00:19:56.060: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 2 00:20:01.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:20:01.106: INFO: stderr: "" Mar 2 00:20:01.106: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 2 00:20:06.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:20:06.153: INFO: stderr: "" Mar 2 00:20:06.153: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 2 00:20:11.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:20:11.200: INFO: stderr: "" Mar 2 00:20:11.200: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 2 00:20:16.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 
00:20:16.247: INFO: stderr: "" Mar 2 00:20:16.247: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-2wpmq " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 2 00:20:21.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:20:21.294: INFO: stderr: "" Mar 2 00:20:21.294: INFO: stdout: "update-demo-nautilus-7zd6l " Mar 2 00:20:21.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:20:21.337: INFO: stderr: "" Mar 2 00:20:21.337: INFO: stdout: "true" Mar 2 00:20:21.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:20:21.382: INFO: stderr: "" Mar 2 00:20:21.382: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:20:21.382: INFO: validating pod update-demo-nautilus-7zd6l Mar 2 00:20:21.395: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:20:21.395: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 2 00:20:21.395: INFO: update-demo-nautilus-7zd6l is verified up and running STEP: scaling up the replication controller Mar 2 00:20:21.397: INFO: scanned /root for discovery docs: Mar 2 00:20:21.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Mar 2 00:20:22.460: INFO: stderr: "" Mar 2 00:20:22.460: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 2 00:20:22.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:20:22.506: INFO: stderr: "" Mar 2 00:20:22.506: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-srvl8 " Mar 2 00:20:22.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:20:22.550: INFO: stderr: "" Mar 2 00:20:22.550: INFO: stdout: "true" Mar 2 00:20:22.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:20:22.594: INFO: stderr: "" Mar 2 00:20:22.594: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:20:22.594: INFO: validating pod update-demo-nautilus-7zd6l Mar 2 00:20:22.597: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:20:22.597: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 2 00:20:22.597: INFO: update-demo-nautilus-7zd6l is verified up and running Mar 2 00:20:22.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-srvl8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:20:22.640: INFO: stderr: "" Mar 2 00:20:22.640: INFO: stdout: "" Mar 2 00:20:22.640: INFO: update-demo-nautilus-srvl8 is created but not running Mar 2 00:20:27.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:20:27.686: INFO: stderr: "" Mar 2 00:20:27.686: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-srvl8 " Mar 2 00:20:27.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:20:27.729: INFO: stderr: "" Mar 2 00:20:27.729: INFO: stdout: "true" Mar 2 00:20:27.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:20:27.771: INFO: stderr: "" Mar 2 00:20:27.771: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:20:27.771: INFO: validating pod update-demo-nautilus-7zd6l Mar 2 00:20:27.774: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:20:27.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 2 00:20:27.774: INFO: update-demo-nautilus-7zd6l is verified up and running Mar 2 00:20:27.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-srvl8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:20:27.819: INFO: stderr: "" Mar 2 00:20:27.819: INFO: stdout: "" Mar 2 00:20:27.819: INFO: update-demo-nautilus-srvl8 is created but not running Mar 2 00:20:32.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 2 00:20:32.876: INFO: stderr: "" Mar 2 00:20:32.876: INFO: stdout: "update-demo-nautilus-7zd6l update-demo-nautilus-srvl8 " Mar 2 00:20:32.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:20:32.921: INFO: stderr: "" Mar 2 00:20:32.921: INFO: stdout: "true" Mar 2 00:20:32.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-7zd6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:20:32.963: INFO: stderr: "" Mar 2 00:20:32.963: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:20:32.963: INFO: validating pod update-demo-nautilus-7zd6l Mar 2 00:20:32.967: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:20:32.967: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 2 00:20:32.967: INFO: update-demo-nautilus-7zd6l is verified up and running Mar 2 00:20:32.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-srvl8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 2 00:20:33.050: INFO: stderr: "" Mar 2 00:20:33.050: INFO: stdout: "true" Mar 2 00:20:33.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods update-demo-nautilus-srvl8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 2 00:20:33.093: INFO: stderr: "" Mar 2 00:20:33.093: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Mar 2 00:20:33.093: INFO: validating pod update-demo-nautilus-srvl8 Mar 2 00:20:33.100: INFO: got data: { "image": "nautilus.jpg" } Mar 2 00:20:33.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 2 00:20:33.100: INFO: update-demo-nautilus-srvl8 is verified up and running STEP: using delete to clean up resources Mar 2 00:20:33.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 delete --grace-period=0 --force -f -' Mar 2 00:20:33.153: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 2 00:20:33.153: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 2 00:20:33.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get rc,svc -l name=update-demo --no-headers' Mar 2 00:20:33.212: INFO: stderr: "No resources found in kubectl-8030 namespace.\n" Mar 2 00:20:33.212: INFO: stdout: "" Mar 2 00:20:33.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-60432830 --namespace=kubectl-8030 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 2 00:20:33.258: INFO: stderr: "" Mar 2 00:20:33.258: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:33.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8030" for this suite. 
• [SLOW TEST:74.801 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":9,"skipped":98,"failed":0} Mar 2 00:20:33.269: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:33.269: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:33.269: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:33.269: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:33.269: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:33.269: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:33.269: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:33.269: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:35.508: INFO: >>> kubeConfig: /tmp/kubeconfig-3747375592 STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Mar 2 00:20:25.678: INFO: EndpointSlice for Service endpointslice-3299/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:35.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-3299" for this suite. 
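Note on the EndpointSlice spec above: it checks that the control plane creates Endpoints and EndpointSlices for Pods matching a Service, and that slices deleted out from under a Service (the example-named-port case logged as "not found") are recreated. Outside the suite, the slices backing a Service can be listed via the standard kubernetes.io/service-name label; a minimal sketch using the namespace and Service name from this run:

  # list the EndpointSlices generated for the example-named-port Service
  kubectl --namespace=endpointslice-3299 get endpointslices \
    -l kubernetes.io/service-name=example-named-port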
• [SLOW TEST:60.188 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":6,"skipped":166,"failed":0} Mar 2 00:20:35.697: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:35.697: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:35.697: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:35.697: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:35.697: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:35.697: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:35.697: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:35.697: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":143,"failed":0} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:56.155: INFO: >>> kubeConfig: /tmp/kubeconfig-1263463171 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-2006 STEP: creating service affinity-clusterip-transition in namespace services-2006 STEP: creating replication controller affinity-clusterip-transition in namespace services-2006 I0302 00:18:56.192098 214 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-2006, replica count: 3 I0302 00:18:59.243313 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:02.243769 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:05.244849 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:08.246265 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:11.248546 214 
runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:14.249261 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:17.249457 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:20.249844 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:23.250904 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:26.251772 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:29.253864 214 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:19:29.278: INFO: Creating new exec pod Mar 2 00:20:02.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1263463171 --namespace=services-2006 exec execpod-affinity68xt8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Mar 2 00:20:02.468: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Mar 2 00:20:02.468: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:02.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1263463171 --namespace=services-2006 exec execpod-affinity68xt8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.197.3 80' Mar 2 00:20:02.546: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.197.3 80\nConnection to 10.43.197.3 80 port [tcp/http] succeeded!\n" Mar 2 00:20:02.546: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:02.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1263463171 --namespace=services-2006 exec execpod-affinity68xt8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.43.197.3:80/ ; done' Mar 2 00:20:02.711: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n" Mar 2 00:20:02.711: INFO: stdout: "\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk" Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:02.711: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1263463171 --namespace=services-2006 exec execpod-affinity68xt8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.43.197.3:80/ ; done' Mar 2 00:20:32.849: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n" Mar 2 00:20:32.849: INFO: stdout: "\naffinity-clusterip-transition-j89c4\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-j89c4\naffinity-clusterip-transition-b28wd\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-j89c4\naffinity-clusterip-transition-j89c4\naffinity-clusterip-transition-j89c4\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-j89c4" Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-j89c4 Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-j89c4 Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-b28wd Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-j89c4 Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-j89c4 Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-j89c4 Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:32.849: INFO: Received response from host: affinity-clusterip-transition-j89c4 Mar 2 00:20:32.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1263463171 --namespace=services-2006 exec execpod-affinity68xt8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.43.197.3:80/ ; done' Mar 2 00:20:33.068: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.43.197.3:80/\n" Mar 2 
00:20:33.068: INFO: stdout: "\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk\naffinity-clusterip-transition-q25dk" Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Received response from host: affinity-clusterip-transition-q25dk Mar 2 00:20:33.068: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2006, will wait for the garbage collector to delete the pods Mar 2 00:20:33.148: INFO: Deleting ReplicationController affinity-clusterip-transition took: 11.509781ms Mar 2 00:20:33.249: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.79504ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:37.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2006" for this suite. 
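Note on the three curl loops above: they show the affinity transition the test asserts, with the sixteen responses alternating between all landing on a single backend (affinity-clusterip-transition-q25dk) and spreading across all three pods as the Service's sessionAffinity is switched between ClientIP and None. The toggle itself is performed through the API by the test code; an equivalent manual toggle would look like the sketch below (a sketch only, reusing the Service name and namespace from this run):

  # turn client-IP session affinity off (use "ClientIP" to turn it back on)
  kubectl --namespace=services-2006 patch service affinity-clusterip-transition \
    -p '{"spec":{"sessionAffinity":"None"}}'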
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:101.148 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":143,"failed":0} Mar 2 00:20:37.305: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:37.305: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:37.305: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:37.305: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:37.305: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:37.305: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:37.305: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:37.305: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:50.861: INFO: >>> kubeConfig: /tmp/kubeconfig-200441725 STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-326 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-326 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-326 Mar 2 00:16:50.896: INFO: Found 0 stateful pods, waiting for 1 Mar 2 00:17:00.899: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:17:10.900: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:17:20.900: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 2 00:17:20.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:17:21.025: INFO: stderr: "+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:17:21.025: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:17:21.025: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:17:21.029: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:17:31.032: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:17:41.035: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:17:51.035: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:18:01.036: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:18:11.035: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:18:21.035: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:18:21.035: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:18:21.050: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999987s Mar 2 00:18:22.055: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995974121s Mar 2 00:18:23.059: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991639254s Mar 2 00:18:24.063: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98667475s Mar 2 00:18:25.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983053213s Mar 2 00:18:26.069: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.98023027s Mar 2 00:18:27.072: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97734696s Mar 2 00:18:28.077: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.973601992s Mar 2 00:18:29.081: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.969281029s Mar 2 00:18:30.084: INFO: Verifying statefulset ss doesn't scale past 1 for another 965.513555ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-326 Mar 2 00:18:31.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:18:31.175: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 2 00:18:31.175: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:18:31.175: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 2 00:18:31.179: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:18:41.185: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:18:51.185: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:19:01.183: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:19:11.183: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:19:21.185: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:19:31.193: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:31.193: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:31.193: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:41.200: INFO: Waiting for pod 
ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:41.200: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:41.200: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:51.185: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:51.185: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:51.185: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:20:01.184: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:20:01.184: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:20:01.184: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 2 00:20:01.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:20:01.319: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:20:01.319: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:20:01.319: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:20:01.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:20:01.419: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:20:01.419: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:20:01.419: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:20:01.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:20:01.502: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:20:01.502: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:20:01.502: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:20:01.502: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:20:01.507: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 2 00:20:11.514: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 2 00:20:21.522: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:20:21.522: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:20:21.522: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:20:21.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999987s Mar 2 00:20:22.539: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995931484s Mar 2 00:20:23.544: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990950163s Mar 2 00:20:24.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985250126s Mar 2 00:20:25.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980011563s Mar 2 00:20:26.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975408655s Mar 2 00:20:27.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971617847s Mar 2 00:20:28.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.967365863s Mar 2 00:20:29.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962023536s Mar 2 00:20:30.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.080056ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-326 Mar 2 00:20:31.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:31.675: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 2 00:20:31.675: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:20:31.675: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 2 00:20:31.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:31.795: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 2 00:20:31.795: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:20:31.795: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 2 00:20:31.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-200441725 --namespace=statefulset-326 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:31.866: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 2 00:20:31.866: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:20:31.866: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 2 00:20:31.866: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Mar 2 00:20:41.886: INFO: Deleting all statefulset in ns statefulset-326 Mar 2 00:20:41.889: INFO: Scaling statefulset ss to 0 Mar 2 00:20:41.903: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:20:41.906: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:41.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-326" for this suite. 
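Note on the mv commands above: they are how the test toggles pod health. Moving index.html out of the httpd docroot makes the pod's readiness check fail (an assumption inferred from the mv output, not stated explicitly in this log), which lets the suite confirm that the StatefulSet controller halts scaling while any pod is unready ("doesn't scale past 1" / "doesn't scale past 3"), creates pods in ordinal order, and removes them in reverse order once readiness is restored. A sketch of reproducing the ordered scale-up by hand, using the namespace and the watcher selector from this run:

  # scale the set and watch pods appear strictly in ordinal order: ss-0, then ss-1, then ss-2
  kubectl --namespace=statefulset-326 scale statefulset ss --replicas=3
  kubectl --namespace=statefulset-326 get pods -l baz=blah,foo=bar -w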
• [SLOW TEST:231.074 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0} Mar 2 00:20:41.936: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:41.936: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:41.936: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:41.936: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:41.936: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:41.936: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:41.936: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:41.936: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:19:36.107: INFO: >>> kubeConfig: /tmp/kubeconfig-4073236307 STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-9572 STEP: creating service affinity-nodeport-transition in namespace services-9572 STEP: creating replication controller affinity-nodeport-transition in namespace services-9572 I0302 00:19:36.186410 205 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-9572, replica count: 3 I0302 00:19:39.236725 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:42.237846 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:45.239043 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:48.239162 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 
00:19:51.239657 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:54.240733 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:19:57.241837 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:20:00.241963 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:20:03.242939 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0302 00:20:06.243305 205 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 2 00:20:06.255: INFO: Creating new exec pod Mar 2 00:20:13.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4073236307 --namespace=services-9572 exec execpod-affinityk768f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Mar 2 00:20:13.392: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Mar 2 00:20:13.392: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:13.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4073236307 --namespace=services-9572 exec execpod-affinityk768f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.43.36.165 80' Mar 2 00:20:13.468: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.43.36.165 80\nConnection to 10.43.36.165 80 port [tcp/http] succeeded!\n" Mar 2 00:20:13.468: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:13.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4073236307 --namespace=services-9572 exec execpod-affinityk768f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.12 32315' Mar 2 00:20:13.546: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.12 32315\nConnection to 172.17.0.12 32315 port [tcp/*] succeeded!\n" Mar 2 00:20:13.546: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:13.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4073236307 --namespace=services-9572 exec execpod-affinityk768f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.17.0.8 32315' Mar 2 00:20:13.669: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.17.0.8 32315\nConnection to 172.17.0.8 32315 port [tcp/*] succeeded!\n" Mar 2 00:20:13.669: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Mar 2 00:20:13.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4073236307 --namespace=services-9572 exec execpod-affinityk768f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.12:32315/ ; done' Mar 2 00:20:13.801: INFO: 
stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n" Mar 2 00:20:13.801: INFO: stdout: "\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-bj6qm" Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:13.801: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:43.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4073236307 --namespace=services-9572 exec execpod-affinityk768f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.12:32315/ ; done' Mar 2 00:20:43.955: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n" Mar 2 00:20:43.955: INFO: stdout: "\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-jf6hp\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-jf6hp\naffinity-nodeport-transition-jf6hp\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-jf6hp\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-jf6hp\naffinity-nodeport-transition-bj6qm\naffinity-nodeport-transition-jf6hp\naffinity-nodeport-transition-jf6hp" Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-jf6hp Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-jf6hp Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-jf6hp Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-jf6hp Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-jf6hp Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-bj6qm Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-jf6hp Mar 2 00:20:43.955: INFO: Received response from host: affinity-nodeport-transition-jf6hp Mar 2 00:20:43.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-4073236307 --namespace=services-9572 exec execpod-affinityk768f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.12:32315/ ; done' Mar 2 00:20:44.150: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.12:32315/\n" Mar 2 00:20:44.150: INFO: stdout: "\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd\naffinity-nodeport-transition-s9lvd" Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Received response from host: affinity-nodeport-transition-s9lvd Mar 2 00:20:44.150: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9572, will wait for the garbage collector to delete the pods Mar 2 00:20:44.224: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.583018ms Mar 2 00:20:44.325: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.584752ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:46.668: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9572" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • [SLOW TEST:70.573 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":272,"failed":0} Mar 2 00:20:46.681: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:46.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:46.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:46.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:46.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:46.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:46.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:46.681: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:13.369: INFO: >>> kubeConfig: /tmp/kubeconfig-2945017463 STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod with failed condition STEP: updating the pod Mar 2 00:20:13.923: INFO: Successfully updated pod "var-expansion-7ecef8b5-46bd-467f-9266-0932087a8173" STEP: waiting for pod running STEP: deleting the pod gracefully Mar 2 00:20:15.931: INFO: Deleting pod "var-expansion-7ecef8b5-46bd-467f-9266-0932087a8173" in namespace "var-expansion-1247" Mar 2 00:20:15.949: INFO: Wait up to 5m0s for pod "var-expansion-7ecef8b5-46bd-467f-9266-0932087a8173" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:20:47.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1247" for this suite. 
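Note: the [sig-network] Services spec recorded above toggles spec.sessionAffinity on a NodePort service and checks client stickiness with the curl loop visible in the log. The sketch below is an illustrative, hand-run equivalent, not the manifest or names the e2e framework generates; "demo", "web" and the node address are placeholders, and it assumes the backend pods echo their own hostname (as the test image does) so stickiness shows up in the output.

  # Expose some existing workload (placeholder name "web") as a NodePort service.
  kubectl -n demo expose deployment web --type=NodePort --port=80 --name=web-nodeport

  # Turn on ClientIP session affinity: requests from one client should now keep
  # landing on the same backend pod.
  kubectl -n demo patch service web-nodeport -p '{"spec":{"sessionAffinity":"ClientIP"}}'

  NODE_IP=172.17.0.12   # placeholder; pick a real address from `kubectl get nodes -o wide`
  NODE_PORT=$(kubectl -n demo get service web-nodeport -o jsonpath='{.spec.ports[0].nodePort}')

  # Same probe the test runs: 16 requests, one response per line.
  for i in $(seq 0 15); do
    echo
    curl -q -s --connect-timeout 2 "http://${NODE_IP}:${NODE_PORT}/"
  done

  # Switch affinity back to None; the same loop should then spread across backends.
  kubectl -n demo patch service web-nodeport -p '{"spec":{"sessionAffinity":"None"}}'

With affinity set to ClientIP the loop should print a single pod name sixteen times, and with affinity set to None the names should mix, which matches the single-pod and mixed-pod response batches in the log above.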
• [SLOW TEST:154.606 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0} Mar 2 00:20:47.976: INFO: Running AfterSuite actions on all nodes Mar 2 00:20:47.976: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:20:47.976: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:20:47.976: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:20:47.976: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:20:47.976: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:20:47.976: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:20:47.976: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:18:40.593: INFO: >>> kubeConfig: /tmp/kubeconfig-343363664 STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-5924 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a new StatefulSet Mar 2 00:18:40.641: INFO: Found 0 stateful pods, waiting for 3 Mar 2 00:18:50.645: INFO: Found 1 stateful pods, waiting for 3 Mar 2 00:19:00.647: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:19:10.647: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:19:20.648: INFO: Found 2 stateful pods, waiting for 3 Mar 2 00:19:30.648: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:30.648: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:30.648: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:40.670: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:40.670: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:40.670: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:19:50.646: INFO: Waiting for pod ss2-0 to enter 
Running - Ready=true, currently Running - Ready=true Mar 2 00:19:50.646: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:19:50.646: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:20:00.647: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:20:00.647: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:20:00.647: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:20:00.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-343363664 --namespace=statefulset-5924 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:20:00.792: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:20:00.792: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:20:00.792: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Mar 2 00:20:20.826: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 2 00:20:30.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-343363664 --namespace=statefulset-5924 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:30.996: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 2 00:20:30.996: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:20:30.996: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Mar 2 00:20:41.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-343363664 --namespace=statefulset-5924 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:20:41.095: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:20:41.095: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:20:41.095: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:20:51.131: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 2 00:21:01.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-343363664 --namespace=statefulset-5924 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:21:01.259: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 2 00:21:01.259: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:21:01.259: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Mar 2 00:21:11.284: INFO: Deleting all statefulset in ns statefulset-5924 Mar 2 00:21:11.288: INFO: Scaling 
statefulset ss2 to 0 Mar 2 00:21:21.306: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:21:21.311: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:21:21.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5924" for this suite. • [SLOW TEST:160.746 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":7,"skipped":142,"failed":0} Mar 2 00:21:21.340: INFO: Running AfterSuite actions on all nodes Mar 2 00:21:21.340: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:21:21.340: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:21:21.340: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:21:21.340: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:21:21.340: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:21:21.340: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:21:21.340: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:16:30.762: INFO: >>> kubeConfig: /tmp/kubeconfig-70536112 STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:21:30.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-4082" for this suite. 
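Note: the CronJob spec that finishes just above creates a suspended CronJob and then watches for roughly five minutes to confirm that no Job is ever created for it, which is why the case runs so long. The behaviour comes from spec.suspend. A rough hand-run equivalent, with placeholder names and an illustrative schedule:

  # Create a CronJob that is suspended from the start; the controller will not
  # create Jobs for it until suspend is set back to false.
  cat <<'EOF' | kubectl -n demo apply -f -
  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: suspended-demo
  spec:
    schedule: "*/1 * * * *"
    suspend: true
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: c
              image: busybox
              command: ["sh", "-c", "date"]
  EOF

  # After a few minutes this should still list no Jobs owned by the CronJob.
  sleep 180
  kubectl -n demo get jobs

  # Unsuspend to let scheduling resume at the next tick.
  kubectl -n demo patch cronjob suspended-demo -p '{"spec":{"suspend":false}}'

Once suspend is patched back to false, the controller starts creating Jobs again at the next schedule boundary.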
• [SLOW TEST:300.049 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0} Mar 2 00:21:30.811: INFO: Running AfterSuite actions on all nodes Mar 2 00:21:30.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:21:30.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:21:30.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:21:30.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:21:30.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:21:30.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:21:30.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:20:02.585: INFO: >>> kubeConfig: /tmp/kubeconfig-477825556 STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 2 00:22:00.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-2301" for this suite. 
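Note: the ReplaceConcurrent CronJob spec above relies on spec.concurrencyPolicy: Replace, which tells the controller to delete a still-running Job and create a fresh one when the next schedule fires, so at most one active Job exists at a time. An illustrative equivalent (placeholder names; the long sleep just keeps each Job busy past the next tick):

  cat <<'EOF' | kubectl -n demo apply -f -
  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: replace-demo
  spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: Replace
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: c
              image: busybox
              # Each Job outlives the one-minute schedule, forcing a replacement.
              command: ["sh", "-c", "sleep 300"]
  EOF

  # Watch the Jobs: there should be at most one active Job at any time, and its
  # name changes each minute as the controller replaces it.
  kubectl -n demo get jobs -w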
• [SLOW TEST:118.065 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":12,"skipped":290,"failed":0} Mar 2 00:22:00.650: INFO: Running AfterSuite actions on all nodes Mar 2 00:22:00.650: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 Mar 2 00:22:00.650: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Mar 2 00:22:00.650: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Mar 2 00:22:00.650: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Mar 2 00:22:00.650: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Mar 2 00:22:00.650: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Mar 2 00:22:00.650: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 2 00:15:05.196: INFO: >>> kubeConfig: /tmp/kubeconfig-3213862209 STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-3679 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating stateful set ss in namespace statefulset-3679 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3679 Mar 2 00:15:05.214: INFO: Found 0 stateful pods, waiting for 1 Mar 2 00:15:15.226: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:15:25.217: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:15:35.218: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:15:45.218: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:15:55.218: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:16:05.218: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 2 00:16:15.218: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 2 00:16:15.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-0 -- /bin/sh -x 
-c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:16:15.564: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:16:15.564: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:16:15.564: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:16:15.568: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:16:25.572: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:16:35.571: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 2 00:16:45.572: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:16:45.572: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:16:45.583: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:16:45.583: INFO: ss-0 k3s-server-1-mid5l7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:16:45.583: INFO: Mar 2 00:16:45.583: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 2 00:16:46.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997506873s Mar 2 00:16:47.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993722653s Mar 2 00:16:48.593: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989880952s Mar 2 00:16:49.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98727857s Mar 2 00:16:50.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.983674965s Mar 2 00:16:51.603: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.980968992s Mar 2 00:16:52.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.977656535s Mar 2 00:16:53.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.973527055s Mar 2 00:16:54.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 969.973015ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3679 Mar 2 00:16:55.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:16:55.747: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 2 00:16:55.747: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:16:55.747: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 2 00:16:55.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:16:56.203: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Mar 2 00:16:56.203: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:16:56.203: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 2 00:16:56.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:16:56.326: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Mar 2 00:16:56.326: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 2 00:16:56.326: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 2 00:16:56.336: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 2 00:17:06.340: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 2 00:17:16.341: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:17:16.341: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 2 00:17:16.341: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 2 00:17:16.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:17:16.472: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:17:16.472: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:17:16.472: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:17:16.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:17:16.548: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:17:16.548: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:17:16.548: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:17:16.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 2 00:17:16.621: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 2 00:17:16.621: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 2 00:17:16.621: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 2 00:17:16.621: INFO: Waiting for statefulset status.replicas updated to 0 Mar 2 00:17:16.624: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 2 00:17:26.627: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 2 00:17:36.632: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 2 00:17:46.631: INFO: Waiting for 
pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:17:46.631: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:17:46.631: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 2 00:17:46.640: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:46.640: INFO: ss-1 k3s-agent-1-mid5l7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:46.640: INFO: ss-2 k3s-server-1-mid5l7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:46.640: INFO: ss-0 k3s-server-1-mid5l7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:46.640: INFO: Mar 2 00:17:46.640: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:47.644: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:47.644: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:47.644: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:47.644: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] 
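Note: the burst-scaling StatefulSet spec whose scale-down polling continues below exercises parallel pod management: scale-up does not wait for lower ordinals to become Ready, and scale-down removes pods without waiting for ordered termination. In manifest terms that corresponds to podManagementPolicy: Parallel; the sketch below is illustrative, uses placeholder names, and assumes a matching headless Service named ss-demo already exists.

  cat <<'EOF' | kubectl -n demo apply -f -
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss-demo
  spec:
    serviceName: ss-demo
    replicas: 1
    podManagementPolicy: Parallel
    selector:
      matchLabels:
        app: ss-demo
    template:
      metadata:
        labels:
          app: ss-demo
      spec:
        containers:
        - name: webserver
          image: httpd:2.4
          readinessProbe:
            httpGet:
              # The e2e run above breaks readiness by moving index.html out of
              # the docroot, which makes a probe like this one start failing.
              path: /index.html
              port: 80
  EOF

  # Scale to 3 and back to 0; with Parallel management the controller does not
  # wait for lower ordinals to become Ready before creating or deleting others.
  kubectl -n demo scale statefulset ss-demo --replicas=3
  kubectl -n demo scale statefulset ss-demo --replicas=0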
Mar 2 00:17:47.644: INFO: Mar 2 00:17:47.644: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:48.652: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:48.653: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:48.653: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:48.653: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:48.653: INFO: Mar 2 00:17:48.653: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:49.659: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:49.659: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:49.659: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:49.659: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:49.659: INFO: Mar 2 00:17:49.659: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:50.663: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:50.663: INFO: ss-2 
k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:50.663: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:50.663: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:50.663: INFO: Mar 2 00:17:50.663: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:51.667: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:51.667: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:51.667: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:51.667: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:51.667: INFO: Mar 2 00:17:51.667: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:52.671: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:52.672: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:52.672: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:52.672: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:52.672: INFO: Mar 2 00:17:52.672: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:53.676: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:53.676: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:53.676: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:53.676: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:53.676: INFO: Mar 2 00:17:53.676: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:54.680: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:54.680: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:54.680: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:54.680: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:54.680: INFO: Mar 2 00:17:54.680: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 2 00:17:55.685: INFO: POD NODE PHASE GRACE CONDITIONS Mar 2 00:17:55.685: INFO: ss-2 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:55.685: INFO: ss-1 k3s-agent-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:16:45 +0000 UTC }] Mar 2 00:17:55.685: INFO: ss-0 k3s-server-1-mid5l7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:17:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-02 00:15:05 +0000 UTC }] Mar 2 00:17:55.685: INFO: Mar 2 00:17:55.685: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3679 Mar 2 00:17:56.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:17:56.741: INFO: rc: 1 Mar 2 00:17:56.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: 
error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 2 00:18:06.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:18:06.799: INFO: rc: 1 Mar 2 00:18:06.799: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 2 00:18:16.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:18:16.854: INFO: rc: 1 Mar 2 00:18:16.854: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 2 00:18:26.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:18:26.910: INFO: rc: 1 Mar 2 00:18:26.910: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 2 00:18:36.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:18:36.966: INFO: rc: 1 Mar 2 00:18:36.966: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Mar 2 00:18:46.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:18:47.013: INFO: rc: 1 Mar 2 00:18:47.013: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:18:57.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:18:57.061: INFO: rc: 1 Mar 2 00:18:57.061: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 
--namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:19:07.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:19:07.113: INFO: rc: 1 Mar 2 00:19:07.113: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:19:17.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:19:17.160: INFO: rc: 1 Mar 2 00:19:17.160: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:19:27.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:19:27.233: INFO: rc: 1 Mar 2 00:19:27.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:19:37.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:19:37.354: INFO: rc: 1 Mar 2 00:19:37.354: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:19:47.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:19:47.421: INFO: rc: 1 Mar 2 00:19:47.421: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:19:57.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:19:57.468: INFO: rc: 1 Mar 2 00:19:57.468: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:20:07.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:07.514: INFO: rc: 1 Mar 2 00:20:07.514: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:20:17.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:17.563: INFO: rc: 1 Mar 2 00:20:17.563: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:20:27.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:27.614: INFO: rc: 1 Mar 2 00:20:27.614: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:20:37.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:37.664: INFO: rc: 1 Mar 2 00:20:37.664: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:20:47.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:47.712: INFO: rc: 1 Mar 2 00:20:47.712: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:20:57.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:20:57.760: INFO: rc: 1 Mar 2 00:20:57.760: INFO: Waiting 10s to retry failed RunHostCmd: 
error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:21:07.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:21:07.805: INFO: rc: 1 Mar 2 00:21:07.805: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:21:17.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:21:17.851: INFO: rc: 1 Mar 2 00:21:17.851: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:21:27.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:21:27.900: INFO: rc: 1 Mar 2 00:21:27.900: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:21:37.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:21:37.950: INFO: rc: 1 Mar 2 00:21:37.950: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:21:47.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:21:47.998: INFO: rc: 1 Mar 2 00:21:47.998: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:21:58.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:21:58.049: INFO: rc: 1 Mar 2 00:21:58.049: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:22:08.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:22:08.101: INFO: rc: 1 Mar 2 00:22:08.101: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:22:18.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:22:18.152: INFO: rc: 1 Mar 2 00:22:18.152: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:22:28.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:22:28.201: INFO: rc: 1 Mar 2 00:22:28.201: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:22:38.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:22:38.251: INFO: rc: 1 Mar 2 00:22:38.251: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:22:48.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:22:48.301: INFO: rc: 1 Mar 2 00:22:48.301: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 2 00:22:58.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 2 00:22:58.351: 
INFO: rc: 1
Mar 2 00:22:58.351: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2:
Mar 2 00:22:58.351: INFO: Scaling statefulset ss to 0
Mar 2 00:22:58.366: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Mar 2 00:22:58.369: INFO: Deleting all statefulset in ns statefulset-3679
Mar 2 00:22:58.372: INFO: Scaling statefulset ss to 0
Mar 2 00:22:58.384: INFO: Waiting for statefulset status.replicas updated to 0
Mar 2 00:22:58.386: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:22:58.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3679" for this suite.
• [SLOW TEST:473.219 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":2,"skipped":64,"failed":0}
Mar 2 00:22:58.416: INFO: Running AfterSuite actions on all nodes
Mar 2 00:22:58.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Mar 2 00:22:58.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Mar 2 00:22:58.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Mar 2 00:22:58.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Mar 2 00:22:58.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Mar 2 00:22:58.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Mar 2 00:22:58.416: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
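Note: the retry loop above re-runs the same RunHostCmd every 10s until the burst-scaling spec gives up waiting on ss-2. The command, kubeconfig, and namespace below are taken verbatim from the log and only rewritten as a standalone invocation; the trailing "|| true" masks the exit status of mv inside the container, while the rc: 1 recorded here is kubectl's own exit status once the "webserver" container, and later the ss-2 pod itself, no longer exists:

  kubectl --kubeconfig=/tmp/kubeconfig-3213862209 --namespace=statefulset-3679 \
    exec ss-2 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'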
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:18:22.345: INFO: >>> kubeConfig: /tmp/kubeconfig-1605033327
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod test-webserver-2480af95-8f55-42eb-8ed5-95eb35e96f32 in namespace container-probe-6202
Mar 2 00:19:08.374: INFO: Started pod test-webserver-2480af95-8f55-42eb-8ed5-95eb35e96f32 in namespace container-probe-6202
STEP: checking the pod's current state and verifying that restartCount is present
Mar 2 00:19:08.377: INFO: Initial restart count of pod test-webserver-2480af95-8f55-42eb-8ed5-95eb35e96f32 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:23:09.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6202" for this suite.
• [SLOW TEST:286.836 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":83,"failed":0}
Mar 2 00:23:09.181: INFO: Running AfterSuite actions on all nodes
Mar 2 00:23:09.181: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Mar 2 00:23:09.181: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Mar 2 00:23:09.181: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Mar 2 00:23:09.181: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Mar 2 00:23:09.181: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Mar 2 00:23:09.181: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Mar 2 00:23:09.181: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
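Note: the probe spec above passes by confirming that the pod's restartCount never moves past the initial 0 reported at 00:19:08.377. The framework reads that field through the API; an equivalent manual check with kubectl is sketched below, where the pod name, namespace, and kubeconfig are taken from the log and the jsonpath query is an illustrative kubectl usage, not something the suite itself runs:

  kubectl --kubeconfig=/tmp/kubeconfig-1605033327 --namespace=container-probe-6202 \
    get pod test-webserver-2480af95-8f55-42eb-8ed5-95eb35e96f32 \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'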
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:20:01.412: INFO: >>> kubeConfig: /tmp/kubeconfig-3825818549
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod busybox-22ae6165-264d-44e4-ba3c-c31b1daafb72 in namespace container-probe-2740
Mar 2 00:20:19.455: INFO: Started pod busybox-22ae6165-264d-44e4-ba3c-c31b1daafb72 in namespace container-probe-2740
STEP: checking the pod's current state and verifying that restartCount is present
Mar 2 00:20:19.459: INFO: Initial restart count of pod busybox-22ae6165-264d-44e4-ba3c-c31b1daafb72 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:24:20.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2740" for this suite.
• [SLOW TEST:258.697 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":115,"failed":0}
Mar 2 00:24:20.109: INFO: Running AfterSuite actions on all nodes
Mar 2 00:24:20.109: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Mar 2 00:24:20.109: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Mar 2 00:24:20.109: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Mar 2 00:24:20.109: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Mar 2 00:24:20.109: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Mar 2 00:24:20.110: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Mar 2 00:24:20.110: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
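Note: the busybox pod in the spec above is kept alive by an exec liveness probe that runs "cat /tmp/health". A minimal hand-written sketch of that kind of pod follows; the pod name, image tag, command, and probe timings are illustrative assumptions rather than the manifest the e2e framework generates, and the test's namespace has already been destroyed, so this would be applied elsewhere:

  kubectl --kubeconfig=/tmp/kubeconfig-3825818549 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo        # illustrative name, not the generated busybox-... pod
  spec:
    containers:
    - name: busybox
      image: busybox:1.29           # illustrative image tag
      command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # probe passes while /tmp/health exists
        initialDelaySeconds: 5
        periodSeconds: 10
  EOF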
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 2 00:19:39.532: INFO: >>> kubeConfig: /tmp/kubeconfig-2889160273
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 2 00:25:01.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-1871" for this suite.
• [SLOW TEST:322.142 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":6,"skipped":202,"failed":0}
Mar 2 00:25:01.674: INFO: Running AfterSuite actions on all nodes
Mar 2 00:25:01.674: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Mar 2 00:25:01.674: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Mar 2 00:25:01.674: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Mar 2 00:25:01.674: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Mar 2 00:25:01.674: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Mar 2 00:25:01.674: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Mar 2 00:25:01.674: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":9,"skipped":201,"failed":0}
Mar 2 00:20:23.447: INFO: Running AfterSuite actions on all nodes
Mar 2 00:20:23.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Mar 2 00:20:23.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Mar 2 00:20:23.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Mar 2 00:20:23.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Mar 2 00:20:23.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Mar 2 00:20:23.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Mar 2 00:20:23.447: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Mar 2 00:25:01.721: INFO: Running AfterSuite actions on node 1
Mar 2 00:25:01.721: INFO: Skipping dumping logs from cluster
Ran 325 of 7052 Specs in 600.266 seconds
SUCCESS! -- 325 Passed | 0 Failed | 0 Pending | 6727 Skipped
Ginkgo ran 1 suite in 10m3.309432668s
Test Suite Passed
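Note: the ForbidConcurrent CronJob spec above exercises spec.concurrencyPolicy: Forbid, which makes the controller skip a scheduled run while a previous Job is still active. A minimal hand-written example of such a CronJob follows; the name, schedule, image, and command are illustrative assumptions, not taken from the test:

  kubectl apply -f - <<'EOF'
  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: forbid-demo              # illustrative name
  spec:
    schedule: "*/1 * * * *"
    concurrencyPolicy: Forbid      # skip new Jobs while the previous one is still running
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: sleep
              image: busybox:1.29
              command: ["sh", "-c", "sleep 300"]
  EOF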