K8s Cluster Inter-Node Communication: no route to host

Symptoms

Network connectivity between the worker nodes and the Master node is intermittent; pings between them succeed only sporadically.

[root@k8s-node01 ~]# ping 192.168.0.8
connect: No route to host
[root@k8s-master ~]# ping 192.168.1.8
connect: No route to host

Nodes flap between the Ready and NotReady states.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   2d2h   v1.15.0
k8s-node01   NotReady   <none>   142m   v1.15.0
k8s-node02   Ready      <none>   142m   v1.15.0
k8s-node03   NotReady   <none>   127m   v1.15.0

Viewing the logs via kubernetes-dashboard -> Daemon Sets -> calico-node:

Readiness probe failed: Threshold time for bird readiness check: 30s calico/node is not ready: BIRD is not ready: BGP not established with 192.168.0.8,192.168.16.8,192.168.32.8
2019-06-24 08:00:46.917 [INFO][3559] readiness.go 88: Number of node(s) with BGP peering established = 0
Get https://192.168.1.8:10250/containerLogs/kube-system/calico-node-9zj58/calico-node?tailLines=5000&timestamps=true: dial tcp 192.168.1.8:10250: connect: no route to host
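
The routing table on an affected node makes the cause visible: Calico programs routes for Pod IPs out of 192.168.0.0/16, and these shadow the routes to the other nodes' real addresses. A quick check worth running (the grep pattern matches this article's environment; output varies):

# Look for Calico-installed routes that cover other nodes' host IPs
ip route | grep 192.168

# Ask the kernel which route would actually be used to reach the master
ip route get 192.168.0.8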

Root Cause Analysis

The Pod CIDR conflicts with the node IPs. Calico's default Pod CIDR (the value passed as --pod-network-cidr) is 192.168.0.0/16; when the cluster nodes' own IP range is also 192.168.0.0/16, the two ranges inevitably collide.

Kubernetes builds a dedicated overlay network for container-to-container communication. With an isolated Pod network, each container gets a unique IP and port conflicts across the cluster are avoided; the Kubernetes network model documentation covers this in more detail.

Problems appear when the Pod subnet overlaps with the host network: node-to-node and Pod-to-Pod traffic is broken by conflicting routes. Review the network settings carefully and make sure the Pod CIDR does not overlap with any VLAN or VPC ranges. If there is a conflict, specify a different IP range via the CNI plugin or the kubelet's --pod-cidr parameter, as sketched below.
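
As a concrete reference, the overlap can be confirmed by comparing the node addresses against the Pod CIDR each node was assigned, and on a fresh cluster the conflict can be avoided up front. A minimal sketch, assuming a kubeadm-based setup (the 172.16.0.0/16 value mirrors the fix applied below):

# Compare node (host) IPs with the Pod CIDR assigned to each node
kubectl get nodes -o wide
kubectl describe node k8s-node01 | grep PodCIDR

# When bootstrapping a new cluster with kubeadm, choose a non-overlapping range up front
kubeadm init --pod-network-cidr=172.16.0.0/16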

Solution

Reconfigure Calico's Pod CIDR:

[root@k8s-master ~]# vim calico.yaml

# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  #value: "192.168.0.0/16"
  value: "172.16.0.0/16"

[root@k8s-master ~]# kubectl delete -f calico.yaml
[root@k8s-master ~]# kubectl apply -f calico.yaml
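
Note the comment in the manifest: changing CALICO_IPV4POOL_CIDR after installation has no effect, which is why the pool is recreated with a delete/apply cycle rather than a plain re-apply. Afterwards it is worth verifying that the new pool is active; a sketch, assuming calicoctl is installed:

# The pool should now report 172.16.0.0/16
calicoctl get ippool -o wide

# New Pods should receive addresses from the new range
kubectl get pods -n kube-system -o wide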

Firewall

If the problem persists, flush the iptables rules and disable the firewall:

iptables --flush              # flush all rules in the default (filter) table
iptables -t nat --flush       # flush all rules in the nat table
systemctl stop firewalld
systemctl disable firewalld
systemctl restart docker      # restart Docker so it re-creates its own iptables rules
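
Disabling the firewall outright is a blunt instrument. If firewalld has to stay enabled, opening only the ports implicated in the errors above should be enough: TCP 179 for Calico's BGP peering and TCP 10250 for the kubelet API. A sketch:

firewall-cmd --permanent --add-port=179/tcp     # BGP peering between calico-node instances
firewall-cmd --permanent --add-port=10250/tcp   # kubelet API (the dial tcp ...:10250 error above)
firewall-cmd --reload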

Cluster Information (after the fix)

[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
calico-node-gd886                      2/2     Running   0          5h11m   192.168.1.8    k8s-node01   <none>           <none>
calico-node-j2pkz                      2/2     Running   0          5h11m   192.168.16.8   k8s-node02   <none>           <none>
calico-node-ljp45                      2/2     Running   4          5h11m   192.168.32.8   k8s-node03   <none>           <none>
calico-node-lqk9n                      2/2     Running   0          124m    192.168.36.8   k8s-node04   <none>           <none>
calico-node-nb7sq                      2/2     Running   2          5h11m   192.168.0.8    k8s-master   <none>           <none>
calico-node-p6xqp                      2/2     Running   0          116m    192.168.80.8   k8s-node05   <none>           <none>
coredns-5c98db65d4-5tmc4               1/1     Running   14         3d      172.16.0.65    k8s-master   <none>           <none>
coredns-5c98db65d4-9srmt               1/1     Running   14         3d      172.16.0.66    k8s-master   <none>           <none>
etcd-k8s-master                        1/1     Running   16         3d      192.168.0.8    k8s-master   <none>           <none>
kube-apiserver-k8s-master              1/1     Running   18         3d      192.168.0.8    k8s-master   <none>           <none>
kube-controller-manager-k8s-master     1/1     Running   15         3d      192.168.0.8    k8s-master   <none>           <none>
kube-proxy-b427r                       1/1     Running   9          23h     192.168.32.8   k8s-node03   <none>           <none>
kube-proxy-pbnbv                       1/1     Running   6          23h     192.168.1.8    k8s-node01   <none>           <none>
kube-proxy-pd748                       1/1     Running   0          124m    192.168.36.8   k8s-node04   <none>           <none>
kube-proxy-smwl4                       1/1     Running   10         23h     192.168.16.8   k8s-node02   <none>           <none>
kube-proxy-vswzq                       1/1     Running   14         3d      192.168.0.8    k8s-master   <none>           <none>
kube-proxy-wps99                       1/1     Running   0          116m    192.168.80.8   k8s-node05   <none>           <none>
kube-scheduler-k8s-master              1/1     Running   16         3d      192.168.0.8    k8s-master   <none>           <none>
kubernetes-dashboard-b79c76866-2hjcj   1/1     Running   10         27h     172.16.0.64    k8s-master   <none>           <none>
