Kubernetes dashboard not working: Error: 'EOF' Trying to reach: 'http://10.10.85.2:53/'

I used binary packages to install a Kubernetes HA master setup on CentOS 7, with three master nodes and three minions.

I followed the CoreDNS guide to install kube-dns and ran ./deploy.sh 10.100.0.0/16 cluster.local | kubectl apply -f -
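
As a sanity check that the CoreDNS rollout itself succeeded, something like the following could be run (a sketch; the pod name is taken from the kube-system pod listing further down, and k8s-app=coredns matches the kube-dns service selector shown later):

kubectl -n kube-system get pods -l k8s-app=coredns -o wide
kubectl -n kube-system logs coredns-5b6cd55cf8-6tjdl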

Lastly, the dashboard does not work and throws the error shown below:

Error: 'EOF' Trying to reach: 'http://10.10.85.2:53/'
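
(The proxied request here is HTTP against a DNS port, so a DNS-level check from inside the cluster may be more telling; a sketch using a throwaway busybox pod, not part of the original debugging:)

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default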

On the master:

[root@iZuf69az6mflbck93u847cZ ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.2.93:6443
CoreDNS is running at https://192.168.2.93:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@iZuf69az6mflbck93u847cZ ~]# systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
    Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2018-08-09 13:34:40 CST; 20h ago
      Docs: https://kubernetes.io/docs/concepts/overview
Main PID: 25178 (kube-apiserver)
  CGroup: /system.slice/kube-apiserver.service
       └─25178 /usr/local/bin/kube-apiserver --storage-backend=etcd3 --etcd-servers=http://192.168.2.86:2379,http://192.168.2.87:2379,http://192.168.2.88:2379 --insecure-bind-address=192.168.2.86 --bind-address=0.0.0.0 --secure-port=6443 --service-cluster-ip-range=10.100.0.0/16 --service-node-port-range=30000-65535 --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --basic-auth-file=/etc/kubernetes/basic_auth_file --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=0 --allow-privileged=true

Aug 09 13:34:35 iZuf69az6mflbck93u847cZ kube-apiserver[25178]: [restful] 2018/08/09 13:34:35 log.go:33: [restful/swagger] https://192.168.2.86:6443/swaggerui/ is mapped to folder /swagger-ui/
Aug 09 13:34:36 iZuf69az6mflbck93u847cZ kube-apiserver[25178]: [restful] 2018/08/09 13:34:36 log.go:33: [restful/swagger] listing is available at https://192.168.2.86:6443/swaggerapi
Aug 09 13:34:36 iZuf69az6mflbck93u847cZ kube-apiserver[25178]: [restful] 2018/08/09 13:34:36 log.go:33: [restful/swagger] https://192.168.2.86:6443/swaggerui/ is mapped to folder /swagger-ui/
Aug 09 13:34:40 iZuf69az6mflbck93u847cZ systemd[1]: Started Kubernetes API Server.
Aug 09 13:51:58 iZuf69az6mflbck93u847cZ kube-apiserver[25178]: E0809 13:51:58.130303   25178 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
Aug 09 14:04:45 iZuf69az6mflbck93u847cZ kube-apiserver[25178]: E0809 14:04:45.138555   25178 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
Aug 09 14:18:44 iZuf69az6mflbck93u847cZ kube-apiserver[25178]: E0809 14:18:44.195958   25178 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted

[root@iZuf69az6mflbck93u847cZ ~]# systemctl status kube-controller-manager -l
● kube-controller-manager.service
    Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2018-08-09 09:49:48 CST; 24h ago
Main PID: 18963 (kube-controller)
  CGroup: /system.slice/kube-controller-manager.service
       └─18963 /usr/local/bin/kube-controller-manager --address=127.0.0.1 --master=http://192.168.2.86:8080 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true --v=0

Aug 09 14:04:45 iZuf69az6mflbck93u847cZ kube-controller-manager[18963]: W0809 14:04:45.138760   18963 reflector.go:341] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: watch of *v1beta1.Event ended with: The resourceVersion for the provided watch is too old.
Aug 09 14:09:19 iZuf69az6mflbck93u847cZ kube-controller-manager[18963]: I0809 14:09:19.828878   18963 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"c0e38fdd-9b9a-11e8-9209-00163e025f2f", APIVersion:"apps/v1", ResourceVersion:"83360", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-c7496bcf to 1
Aug 09 14:09:19 iZuf69az6mflbck93u847cZ kube-controller-manager[18963]: I0809 14:09:19.839412   18963 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kubernetes-dashboard-c7496bcf", UID:"c0e4332b-9b9a-11e8-80b3-00163e0e0d6d", APIVersion:"apps/v1", ResourceVersion:"83361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-c7496bcf-b28rr
Aug 10 02:24:59 iZuf69az6mflbck93u847cZ kube-controller-manager[18963]: W0810 02:24:59.348618   18963 reflector.go:341] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: watch of *v1beta1.Event ended with: The resourceVersion for the provided watch is too old.
Aug 10 04:20:01 iZuf69az6mflbck93u847cZ kube-controller-manager[18963]: W0810 04:20:01.448973   18963 reflector.go:341] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: watch of *v1beta1.Event ended with: The resourceVersion for the provided watch is too old.

[root@iZuf69az6mflbck93u847cZ ~]# kubectl get nodes
NAME                      STATUS    ROLES     AGE       VERSION
izuf68thdbm0n4j5qywd7qz   Ready     <none>    22h       v1.11.0
izuf68thdbm0n4j5qywd7rz   Ready     <none>    22h       v1.11.0
izuf68thdbm0n4j5qywd7sz   Ready     <none>    23h       v1.11.0

[root@iZuf69az6mflbck93u847cZ ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"} 

[root@iZuf69az6mflbck93u847cZ ~]# kubectl get -o wide pods -n kube-system 
NAME                                  READY     STATUS    RESTARTS   AGE       IP           NODE
coredns-5b6cd55cf8-6tjdl              1/1       Running   1          22h       10.10.85.2   izuf68thdbm0n4j5qywd7sz
kubernetes-dashboard-c7496bcf-b28rr   1/1       Running   0          19h       10.10.85.3   izuf68thdbm0n4j5qywd7sz

[root@iZuf69az6mflbck93u847cZ ~]# kubectl get endpoints -n kube-system
NAME                      ENDPOINTS                     AGE
kube-controller-manager   <none>                        1d
kube-dns                  10.10.85.2:53,10.10.85.2:53   22h
kube-scheduler            <none>                        1d
kubernetes-dashboard      10.10.85.3:8443               19h

[root@iZuf69az6mflbck93u847cZ ~]# telnet 10.10.85.2 53
Trying 10.10.85.2...
Connected to 10.10.85.2.
Escape character is '^]'.
Connection closed by foreign host.
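
(telnet only proves the TCP handshake; the immediate close is normal for a DNS server that received no query. An actual lookup against the pod IP would be more conclusive; a sketch, assuming dig is installed on the master:)

dig @10.10.85.2 kubernetes.default.svc.cluster.local A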

On the minion izuf68thdbm0n4j5qywd7sz:

# systemctl status kubelet -l
● kubelet.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-08-09 12:38:43 CST; 21h ago
     Docs: https://kubernetes.io/doc
 Main PID: 3444 (kubelet)
    Memory: 49.2M
    CGroup: /system.slice/kubelet.service
       └─3444 /usr/local/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0 --cluster-dns=10.100.0.100 --cluster-domain=cluster.local. --resolv-conf=/etc/resolv.conf

Aug 10 09:56:26 iZuf68thdbm0n4j5qywd7sZ kubelet[3444]: E0810 09:56:26.924371    3444 streamwatcher.go:109] Unable to decode an event from the watch stream: stream error: stream ID 43959; INTERNAL_ERROR
Aug 10 09:56:26 iZuf68thdbm0n4j5qywd7sZ kubelet[3444]: E0810 09:56:26.924370    3444 streamwatcher.go:109] Unable to decode an event from the watch stream: stream error: stream ID 43961; INTERNAL_ERROR
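
(The kubelet runs with --cluster-dns=10.100.0.100, which matches the kube-dns ClusterIP shown below; querying that VIP from the minion also exercises the kube-proxy iptables path. A sketch, assuming dig is available on the node:)

dig @10.100.0.100 kubernetes.default.svc.cluster.local A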

[root@iZuf68thdbm0n4j5qywd7sZ ~]# systemctl status kube-proxy -l
● kube-proxy.service - Kubernetes kubelet agent
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-08-09 12:38:51 CST; 21h ago
     Docs: https://kubernetes.io/doc
Main PID: 3525 (kube-proxy)
  Memory: 5.2M
  CGroup: /system.slice/kube-proxy.service
       ‣ 3525 /usr/local/bin/kube-proxy --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --proxy-mode=iptables --cluster-cidr=10.100.0.0/16

Aug 10 09:57:35 iZuf68thdbm0n4j5qywd7sZ kube-proxy[3525]: E0810 09:57:35.383308    3525 streamwatcher.go:109] Unable to decode an event from the watch stream: stream error: stream ID 2981; INTERNAL_ERROR
Aug 10 09:58:35 iZuf68thdbm0n4j5qywd7sZ kube-proxy[3525]: E0810 09:58:35.496183    3525 streamwatcher.go:109] Unable to decode an event from the watch stream: stream error: stream ID 2983; INTERNAL_ERROR
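
(With kube-proxy in iptables mode, the kube-dns VIP should appear in the KUBE-SERVICES NAT chain on the minion; a quick sketch to verify the rules were programmed:)

iptables -t nat -L KUBE-SERVICES -n | grep 10.100.0.100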

Getting the services on the master:

[root@iZuf69az6mflbck93u847cZ ~]# kubectl get service -o wide -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE       SELECTOR
kube-dns               ClusterIP   10.100.0.100    <none>        53/UDP,53/TCP   1d        k8s-app=coredns
kubernetes-dashboard   ClusterIP   10.100.226.37   <none>        443/TCP         2d        k8s-app=kubernetes-dashboard
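
(Since kubernetes-dashboard is a ClusterIP service on 443, a common way to reach it from a workstation is through kubectl proxy; a sketch, matching the proxy path used in the comments below:)

kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/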

[root@iZuf69az6mflbck93u847cZ ~]# kubectl get deployment kubernetes-dashboard -o yaml -n kube-system
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-08-10T05:38:45Z
  generation: 1
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "236080"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/kubernetes-dashboard
  uid: a5eaaf89-9c5f-11e8-9209-00163e025f2f
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - args:
        - --auto-generate-certificates
        image: reg.xxx.cn/pub/kubernetes-dashboard-amd64:v1.8.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: kubernetes-dashboard
        ports:
        - containerPort: 8443
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /certs
          name: kubernetes-dashboard-certs
        - mountPath: /tmp
          name: tmp-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kubernetes-dashboard
      serviceAccountName: kubernetes-dashboard
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          defaultMode: 420
          secretName: kubernetes-dashboard-certs
      - emptyDir: {}
        name: tmp-volume
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-08-10T05:38:45Z
    lastUpdateTime: 2018-08-10T05:38:47Z
    message: ReplicaSet "kubernetes-dashboard-c7496bcf" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: 2018-08-10T08:22:21Z
    lastUpdateTime: 2018-08-10T08:22:21Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Could you provide the output of kubectl get services and also post the dashboard.yaml file?

aurelius 10.08.2018 16:59

@aurelius Yes, I have added the details above. Thanks!

Nicholas.guo 14.08.2018 04:43

The configuration side seems fine. Could you tell us how you are trying to access the dashboard, and at which address and port? Did you follow some guide? You can also check my answer about accessing the dashboard - it might give you a few hints before we can gather enough information to help you: stackoverflow.com/questions/51253016/…

aurelius 17.08.2018 15:51

@aurelius Thanks for your help. I read your link in detail and tried it, but it did not work for me. I access the dashboard at http://192.168.2.86:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login ; after obtaining a token and entering it in the browser, I cannot log in, and no error messages are shown.

Nicholas.guo 30.08.2018 05:06
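
(For readers hitting the same login step: with dashboard v1.8.x the bearer token is typically read from a service account's secret; a sketch, where the secret name suffix is cluster-specific:)

kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubectl -n kube-system describe secret kubernetes-dashboard-token-<suffix>   # <suffix> is a placeholder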

@aurelius The problem has been solved, thanks again.

Nicholas.guo 12.09.2018 03:51

I have the same problem, how can I solve it?

yasin lachini 28.05.2020 00:45