CrashLoopBackOff error when starting the k8s dashboard

I am trying to install the dashboard on a clean, private k8s cluster (no internet connection). I followed this guide: https://github.com/kubernetes/dashboard. After applying recommended.yaml, the metrics scraper starts successfully, but the dashboard keeps showing a CrashLoopBackOff error.
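
For context, a minimal sketch of the install steps on an air-gapped node, assuming the images were pre-loaded from local tar archives (the archive names here are hypothetical):

# Pre-load the dashboard images on each node, since the cluster has no internet access
# (archive names are hypothetical):
sudo docker load -i dashboard-v2.5.1.tar
sudo docker load -i metrics-scraper-v1.0.7.tar
# Apply the manifest from the kubernetes/dashboard repository:
sudo kubectl apply -f recommended.yaml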

Docker version: 19.03.6. K8s version: 1.23.4

Pod status:

user@k8s-master1:~/images$ sudo kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS             RESTARTS      AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-system            coredns-64897985d-9kgwl                      1/1     Running            0             39h   10.224.0.3     k8s-master1   <none>           <none>
kube-system            coredns-64897985d-kmcvf                      1/1     Running            0             39h   10.224.0.2     k8s-master1   <none>           <none>
kube-system            etcd-k8s-master1                             1/1     Running            3             39h   10.12.21.157   k8s-master1   <none>           <none>
kube-system            kube-apiserver-k8s-master1                   1/1     Running            3             39h   10.12.21.157   k8s-master1   <none>           <none>
kube-system            kube-controller-manager-k8s-master1          1/1     Running            2             39h   10.12.21.157   k8s-master1   <none>           <none>
kube-system            kube-flannel-ds-5cqrc                        1/1     Running            1 (15h ago)   39h   10.12.21.165   k8s-worker1   <none>           <none>
kube-system            kube-flannel-ds-8xfjt                        1/1     Running            0             39h   10.12.21.157   k8s-master1   <none>           <none>
kube-system            kube-proxy-77m6t                             1/1     Running            1 (15h ago)   39h   10.12.21.165   k8s-worker1.era   <none>           <none>
kube-system            kube-proxy-zslrc                             1/1     Running            0             39h   10.12.21.157   k8s-master1.era   <none>           <none>
kube-system            kube-scheduler-k8s-master1                   1/1     Running            3             39h   10.12.21.157   k8s-master1.era   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-ww8d2   1/1     Running            0             21m   10.224.1.33    k8s-worker1.era   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b65cf66c4-5n4bl        0/1     CrashLoopBackOff   8 (56s ago)   21m   10.224.1.34    k8s-worker1.era   <none>           <none>

Container logs:

user@k8s-master1:~/images$ sudo kubectl logs kubernetes-dashboard-7b65cf66c4-5n4bl --namespace="kubernetes-dashboard" --tail=-1 --follow=true
2022/03/23 05:37:23 Starting overwatch
2022/03/23 05:37:23 Using namespace: kubernetes-dashboard
2022/03/23 05:37:23 Using in-cluster config to connect to apiserver
2022/03/23 05:37:23 Using secret token for csrf signing
2022/03/23 05:37:23 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00055faf0)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000468180)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x194fa64)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
main.main()
        /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1cf


user@k8s-master1:~/images$ sudo kubectl logs dashboard-metrics-scraper-799d786dbf-ww8d2 --namespace="kubernetes-dashboard" --tail=-1 --follow=true
{"level":"info","msg":"Kubernetes host: https://10.96.0.1:443","time":"2022-03-23T05:17:21Z"}
{"level":"info","msg":"Namespace(s): []","time":"2022-03-23T05:17:21Z"}
10.224.1.1 - - [23/Mar/2022:05:18:00 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.23"
10.224.1.1 - - [23/Mar/2022:05:18:10 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.23"
10.224.1.1 - - [23/Mar/2022:05:18:20 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.23"
10.224.1.1 - - [23/Mar/2022:05:18:30 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.23"
10.224.1.1 - - [23/Mar/2022:05:18:40 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.23"
10.224.1.1 - - [23/Mar/2022:05:18:50 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.23"
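
The metrics scraper above answers its liveness probes, while the dashboard container panics with an i/o timeout when dialing the kubernetes Service ClusterIP (10.96.0.1:443) from the worker node. As a sketch (not from the original post), the Service, its endpoints, and kube-proxy on that node can be checked with:

# Verify the in-cluster API endpoint the dashboard is trying to reach:
sudo kubectl get svc kubernetes -n default
sudo kubectl get endpoints kubernetes -n default
# Check kube-proxy on the worker node hosting the dashboard pod:
sudo kubectl get pods -n kube-system -o wide | grep kube-proxy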

Worker node logs from /var/log/syslog:

Mar 22 17:39:06 k8s-worker1 NetworkManager[945]: <info>  [1647959946.6236] connectivity: (ens3) timed out
Mar 22 17:39:15 k8s-worker1 kubelet[1514]: I0322 17:39:15.874414    1514 topology_manager.go:200] "Topology Admit Handler"
Mar 22 17:39:15 k8s-worker1 systemd[1]: Created slice libcontainer container kubepods-besteffort-pod7fc08bc9_9992_4f8d_9a03_6ab174479715.slice.
Mar 22 17:39:15 k8s-worker1 kubelet[1514]: I0322 17:39:15.890731    1514 topology_manager.go:200] "Topology Admit Handler"
Mar 22 17:39:15 k8s-worker1 systemd[1]: Created slice libcontainer container kubepods-besteffort-poda226e365_e55c_438a_b31f_9fb54ec2c0cd.slice.
Mar 22 17:39:15 k8s-worker1 kubelet[1514]: I0322 17:39:15.969404    1514 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7fc08bc9-9992-4f8d-9a03-6ab174479715-tmp-volume\") pod \"kubernetes-dashboard-7b65cf66c4-5cp59\" (UID: \"7fc08bc9-9992-4f8d-9a03-6ab174479715\") " pod="kubernetes-dashboard/kubernetes-dashboard-7b65cf66c4-5cp59"
Mar 22 17:39:15 k8s-worker1 kubelet[1514]: I0322 17:39:15.969446    1514 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlh9b\" (UniqueName: \"kubernetes.io/projected/7fc08bc9-9992-4f8d-9a03-6ab174479715-kube-api-access-tlh9b\") pod \"kubernetes-dashboard-7b65cf66c4-5cp59\" (UID: \"7fc08bc9-9992-4f8d-9a03-6ab174479715\") " pod="kubernetes-dashboard/kubernetes-dashboard-7b65cf66c4-5cp59"
Mar 22 17:39:15 k8s-worker1 kubelet[1514]: I0322 17:39:15.969468    1514 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a226e365-e55c-438a-b31f-9fb54ec2c0cd-tmp-volume\") pod \"dashboard-metrics-scraper-799d786dbf-6x7b5\" (UID: \"a226e365-e55c-438a-b31f-9fb54ec2c0cd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-799d786dbf-6x7b5"
Mar 22 17:39:15 k8s-worker1 kubelet[1514]: I0322 17:39:15.969489    1514 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-certs\" (UniqueName: \"kubernetes.io/secret/7fc08bc9-9992-4f8d-9a03-6ab174479715-kubernetes-dashboard-certs\") pod \"kubernetes-dashboard-7b65cf66c4-5cp59\" (UID: \"7fc08bc9-9992-4f8d-9a03-6ab174479715\") " pod="kubernetes-dashboard/kubernetes-dashboard-7b65cf66c4-5cp59"
Mar 22 17:39:15 k8s-worker1 kubelet[1514]: I0322 17:39:15.969508    1514 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2vd\" (UniqueName: \"kubernetes.io/projected/a226e365-e55c-438a-b31f-9fb54ec2c0cd-kube-api-access-9g2vd\") pod \"dashboard-metrics-scraper-799d786dbf-6x7b5\" (UID: \"a226e365-e55c-438a-b31f-9fb54ec2c0cd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-799d786dbf-6x7b5"
Mar 22 17:39:16 k8s-worker1 containerd[1357]: time="2022-03-22T17:39:16.322790061+03:00" level=info msg="shim containerd-shim started" address=/containerd-shim/883437e7a51b5599bab43f814d5c337b9fe3c2751e24c906c656ee8eac8256bd.sock debug=false pid=9001
Mar 22 17:39:16 k8s-worker1 containerd[1357]: time="2022-03-22T17:39:16.324394320+03:00" level=info msg="shim containerd-shim started" address=/containerd-shim/24b06da95eb0fcb204fcddd18385706898f16ea49f66eb072883057290b6250f.sock debug=false pid=9006
Mar 22 17:39:16 k8s-worker1 systemd[1]: Started libcontainer container f9e95ee9b501cd765b93c370bfa58dd38c0281f627c604fc537d5bfae075e4f5.
Mar 22 17:39:16 k8s-worker1 systemd[1]: Started libcontainer container e5c0d5721a2f52e0d6fae6818447eda8578b45b51ef6e0a497d460c1eff6579c.
Mar 22 17:39:16 k8s-worker1 kernel: [  667.907330] IPv6: ADDRCONF(NETDEV_CHANGE): veth4ea29f8d: link becomes ready
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5808] device (veth4ea29f8d): carrier: link connected
Mar 22 17:39:16 k8s-worker1 systemd-udevd[9166]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 22 17:39:16 k8s-worker1 systemd-udevd[9166]: Could not generate persistent MAC address for veth4ea29f8d: No such file or directory
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5817] manager: (veth4ea29f8d): new Veth device (/org/freedesktop/NetworkManager/Devices/8)
Mar 22 17:39:16 k8s-worker1 systemd-udevd[9167]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 22 17:39:16 k8s-worker1 systemd-udevd[9167]: Could not generate persistent MAC address for veth7259e59e: No such file or directory
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5823] device (veth7259e59e): carrier: link connected
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5827] manager: (veth7259e59e): new Veth device (/org/freedesktop/NetworkManager/Devices/9)
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5830] device (cni0): carrier: link connected
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955524] cni0: port 1(veth4ea29f8d) entered blocking state
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955526] cni0: port 1(veth4ea29f8d) entered disabled state
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955579] device veth4ea29f8d entered promiscuous mode
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955610] cni0: port 1(veth4ea29f8d) entered blocking state
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955612] cni0: port 1(veth4ea29f8d) entered forwarding state
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955816] cni0: port 2(veth7259e59e) entered blocking state
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955818] cni0: port 2(veth7259e59e) entered disabled state
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955871] device veth7259e59e entered promiscuous mode
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955888] cni0: port 2(veth7259e59e) entered blocking state
Mar 22 17:39:16 k8s-worker1 kernel: [  667.955889] cni0: port 2(veth7259e59e) entered forwarding state
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5902] devices added (path: /sys/devices/virtual/net/veth7259e59e, iface: veth7259e59e)
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5902] device added (path: /sys/devices/virtual/net/veth7259e59e, iface: veth7259e59e): no ifupdown configuration found.
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5908] devices added (path: /sys/devices/virtual/net/veth4ea29f8d, iface: veth4ea29f8d)
Mar 22 17:39:16 k8s-worker1 NetworkManager[945]: <info>  [1647959956.5908] device added (path: /sys/devices/virtual/net/veth4ea29f8d, iface: veth4ea29f8d): no ifupdown configuration found.
Mar 22 17:39:16 k8s-worker1 kubelet[1514]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.224.1.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000b28e8), "name":"cbr0", "type":"bridge"}
Mar 22 17:39:16 k8s-worker1 kubelet[1514]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.224.1.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Mar 22 17:39:16 k8s-worker1 kubelet[1514]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.224.1.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000014938), "name":"cbr0", "type":"bridge"}
Mar 22 17:39:16 k8s-worker1 containerd[1357]: time = "2022-03-22T17:39:16.752605404+03:00" level=info msg = "shim containerd-shim started" address=/containerd-shim/5e2d01620875c9a1e0f82bd5083e71aec9510daa518af452ae29d50635a5f841.sock debug=false pid=9230
Mar 22 17:39:16 k8s-worker1 containerd[1357]: time = "2022-03-22T17:39:16.755987710+03:00" level=info msg = "shim containerd-shim started" address=/containerd-shim/b57a694fc62e0e5ddc906714e76e18b06913d4d8726671d3eb1e5ad10c860140.sock debug=false pid=9244
Mar 22 17:39:16 k8s-worker1 systemd[1]: Started libcontainer container 5eda35ba11cfc0d43b1fb1065b6f28890f2741ea4999ece6c5ad9c707c1b2aae.
Mar 22 17:39:16 k8s-worker1 systemd[1]: Started libcontainer container df2a62380c546b35185ac422f3c660642f2275405c5099b3ba0ff9a117fdda61.
Mar 22 17:39:17 k8s-worker1 avahi-daemon[790]: Joining mDNS multicast group on interface veth4ea29f8d.IPv6 with address fe80::5488:49ff:fe0d:d8dc.
Mar 22 17:39:17 k8s-worker1 avahi-daemon[790]: New relevant interface veth4ea29f8d.IPv6 for mDNS.
Mar 22 17:39:17 k8s-worker1 avahi-daemon[790]: Registering new address record for fe80::5488:49ff:fe0d:d8dc on veth4ea29f8d.*.
Mar 22 17:39:17 k8s-worker1 avahi-daemon[790]: Joining mDNS multicast group on interface veth7259e59e.IPv6 with address fe80::60e1:5eff:fe1d:10c6.
Mar 22 17:39:17 k8s-worker1 avahi-daemon[790]: New relevant interface veth7259e59e.IPv6 for mDNS.
Mar 22 17:39:17 k8s-worker1 avahi-daemon[790]: Registering new address record for fe80::60e1:5eff:fe1d:10c6 on veth7259e59e.*.
Mar 22 17:39:32 k8s-worker1 systemd-resolved[501]: Using degraded feature set (UDP) for DNS server 10.12.21.2.
Mar 22 17:39:37 k8s-worker1 systemd-resolved[501]: Using degraded feature set (UDP) for DNS server 10.12.21.120.
Mar 22 17:39:46 k8s-worker1 systemd[1]: docker-5eda35ba11cfc0d43b1fb1065b6f28890f2741ea4999ece6c5ad9c707c1b2aae.scope: Consumed 51ms CPU time
Mar 22 17:39:46 k8s-worker1 containerd[1357]: time = "2022-03-22T17:39:46.878548665+03:00" level=info msg = "shim reaped" id=5eda35ba11cfc0d43b1fb1065b6f28890f2741ea4999ece6c5ad9c707c1b2aae
Mar 22 17:39:46 k8s-worker1 dockerd[1517]: time = "2022-03-22T17:39:46.888692147+03:00" level=info msg = "ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type = "*events.TaskDelete"
Mar 22 17:39:47 k8s-worker1 kubelet[1514]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.224.1.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}I0322 17:39:47.347140    1514 scope.go:110] "RemoveContainer" containerID = "5eda35ba11cfc0d43b1fb1065b6f28890f2741ea4999ece6c5ad9c707c1b2aae"
Mar 22 17:39:47 k8s-worker1 containerd[1357]: time = "2022-03-22T17:39:47.410154268+03:00" level=info msg = "shim containerd-shim started" address=/containerd-shim/aeed00521eedeab1bcd9c24b35bf6f3e4f7ace60e73a768a716086e00c0c4bef.sock debug=false pid=9620
Mar 22 17:39:47 k8s-worker1 systemd[1]: Started libcontainer container 8b83bcb2998b0248e79652975f0862cf5f648d60f0506ed8242b2450e27cdac3.
Mar 22 17:39:58 k8s-worker1 whoopsie[1543]: [17:39:58] Cannot reach: https://daisy.ubuntu.com
Mar 22 17:40:17 k8s-worker1 systemd[1]: docker-8b83bcb2998b0248e79652975f0862cf5f648d60f0506ed8242b2450e27cdac3.scope: Consumed 46ms CPU time
Mar 22 17:40:17 k8s-worker1 containerd[1357]: time = "2022-03-22T17:40:17.527541322+03:00" level=info msg = "shim reaped" id=8b83bcb2998b0248e79652975f0862cf5f648d60f0506ed8242b2450e27cdac3
Mar 22 17:40:17 k8s-worker1 dockerd[1517]: time = "2022-03-22T17:40:17.537731616+03:00" level=info msg = "ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type = "*events.TaskDelete"
Mar 22 17:40:18 k8s-worker1 kubelet[1514]: I0322 17:40:18.458400    1514 scope.go:110] "RemoveContainer" containerID = "5eda35ba11cfc0d43b1fb1065b6f28890f2741ea4999ece6c5ad9c707c1b2aae"
Mar 22 17:40:18 k8s-worker1 kubelet[1514]: I0322 17:40:18.458815    1514 scope.go:110] "RemoveContainer" containerID = "8b83bcb2998b0248e79652975f0862cf5f648d60f0506ed8242b2450e27cdac3"
Mar 22 17:40:18 k8s-worker1 kubelet[1514]: E0322 17:40:18.459109    1514 pod_workers.go:919] "Error syncing pod, skipping" err = "failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-7b65cf66c4-5cp59_kubernetes-dashboard(7fc08bc9-9992-4f8d-9a03-6ab174479715)\"" pod = "kubernetes-dashboard/kubernetes-dashboard-7b65cf66c4-5cp59" podUID=7fc08bc9-9992-4f8d-9a03-6ab174479715
Mar 22 17:40:25 k8s-worker1 kubelet[1514]: I0322 17:40:25.875469    1514 scope.go:110] "RemoveContainer" containerID = "8b83bcb2998b0248e79652975f0862cf5f648d60f0506ed8242b2450e27cdac3"
Mar 22 17:40:25 k8s-worker1 kubelet[1514]: E0322 17:40:25.875672    1514 pod_workers.go:919] "Error syncing pod, skipping" err = "failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-7b65cf66c4-5cp59_kubernetes-dashboard(7fc08bc9-9992-4f8d-9a03-6ab174479715)\"" pod = "kubernetes-dashboard/kubernetes-dashboard-7b65cf66c4-5cp59" podUID=7fc08bc9-9992-4f8d-9a03-6ab174479715
Mar 22 17:40:38 k8s-worker1 kubelet[1514]: I0322 17:40:38.103479    1514 scope.go:110] "RemoveContainer" containerID = "8b83bcb2998b0248e79652975f0862cf5f648d60f0506ed8242b2450e27cdac3"
Mar 22 17:40:38 k8s-worker1 containerd[1357]: time = "2022-03-22T17:40:38.174450523+03:00" level=info msg = "shim containerd-shim started" address=/containerd-shim/05e57eded5ea65217590df57468a79098fe8ca0dc53767f60a953cd740a21eeb.sock debug=false pid=10037
Mar 22 17:40:38 k8s-worker1 systemd[1]: Started libcontainer container bed40cfd651d6d9f82699aeb3b8d37712e150d0720e0f40ce03453e2ab5e8808.

The logs of the crash-looping pod will be much more useful than the kubelet logs.

jordanm 22.03.2022 15:59
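
For reference, a sketch of pulling the log of the previous (already crashed) attempt, in case the current attempt exits before --follow captures anything:

sudo kubectl logs kubernetes-dashboard-7b65cf66c4-5n4bl --namespace=kubernetes-dashboard --previous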

Answers: 1

Accepted answer

By default, the dashboard container gets scheduled onto a worker node. I added nodeName: k8s-master1 to recommended.yaml so that it is pinned to the control-plane machine instead, and it works.
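
The essential change is pinning the pod to the control-plane node inside the Deployment's pod template; a minimal fragment of that change (the full manifest follows below):

    spec:
      nodeName: k8s-master1            # schedule the dashboard onto the control-plane node
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule           # allow it to run despite the master taint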

The final yaml file:

# apiVersion: v1
# kind: Namespace
# metadata:
#   name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

# apiVersion: v1
# kind: Secret
# metadata:
#   labels:
#     k8s-app: kubernetes-dashboard
#   name: kubernetes-dashboard-certs
#   namespace: kubernetes-dashboard
# type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: k8s-master1
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.1
          imagePullPolicy: Never
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      nodeName: k8s-master1
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
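
For completeness, a sketch of applying the edited manifest and checking the result; with the NodePort Service above, the UI should then be reachable on port 30001 of any node:

sudo kubectl apply -f recommended.yaml
sudo kubectl get pods -n kubernetes-dashboard -o wide
# The NodePort Service exposes the dashboard at https://<node-ip>:30001/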
