【Posted】: 2020-11-02 02:32:54
【Problem description】:
I have stateful pods running on 2 different worker nodes, but I cannot ping the pods. Below is the pool file:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack.ippool-1
spec:
  cidr: 192.168.16.0/24
  blockSize: 24
  ipipMode: Never
  natOutgoing: true
  disabled: false
  nodeSelector: all()
IP configuration on the first pod
ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
link/ether 3e:a6:cb:15:cf:1a brd ff:ff:ff:ff:ff:ff
inet 192.168.16.41/32 brd 192.168.16.41 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3ca6:cbff:fe15:cf1a/64 scope link
valid_lft forever preferred_lft forever
IP configuration on the pod on the other node
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
link/ether 1a:3c:c1:1a:fa:03 brd ff:ff:ff:ff:ff:ff
inet 192.168.16.42/32 brd 192.168.16.42 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::183c:c1ff:fe1a:fa03/64 scope link
valid_lft forever preferred_lft forever
Ping status
ping 192.168.16.41
PING 192.168.16.41 (192.168.16.41) 56(84) bytes of data.
It does not work.
I tried ipipMode: Always and CrossSubnet, but nothing helped. I am not sure what I am missing. Also, I don't understand why, when I set a blockSize of 24, the IPs show up with a /32 CIDR. Shouldn't they be within the /24 CIDR range?
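For reference, the containment relationship the question asks about can be checked directly; a minimal sketch with Python's `ipaddress` module, using the pool CIDR from the IPPool spec and the pod addresses from the `ip addr` output above:

```python
import ipaddress

# Pool CIDR from the IPPool spec and pod addresses from `ip addr` above.
pool = ipaddress.ip_network("192.168.16.0/24")
pod1 = ipaddress.ip_interface("192.168.16.41/32")
pod2 = ipaddress.ip_interface("192.168.16.42/32")

# A /32 is a single-host network; it can still sit inside the /24 pool.
print(pod1.network.subnet_of(pool))  # True
print(pod2.network.subnet_of(pool))  # True
```

So both pod addresses do fall inside the /24 pool; the /32 is only the prefix length on the pod's own interface.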
[root@k8master-1 ~]# calicoctl node status
Calico process is running.
None of the BGP backend processes (BIRD or GoBGP) are running.
Calico IPAM results
calicoctl ipam show
+----------+-----------------+-----------+------------+--------------+
| GROUPING | CIDR | IPS TOTAL | IPS IN USE | IPS FREE |
+----------+-----------------+-----------+------------+--------------+
| IP Pool | 10.244.0.0/16 | 65536 | 1 (0%) | 65535 (100%) |
| IP Pool | 192.168.16.0/24 | 256 | 3 (1%) | 253 (99%) |
+----------+-----------------+-----------+------------+--------------+
Calico IPAM blocks
[root@k8master-1 ~]# calicoctl ipam show --show-blocks
+----------+-----------------+-----------+------------+--------------+
| GROUPING | CIDR | IPS TOTAL | IPS IN USE | IPS FREE |
+----------+-----------------+-----------+------------+--------------+
| IP Pool | 10.244.0.0/16 | 65536 | 1 (0%) | 65535 (100%) |
| Block | 10.244.0.0/26 | 64 | 1 (2%) | 63 (98%) |
| IP Pool | 192.168.16.0/24 | 256 | 3 (1%) | 253 (99%) |
| Block | 192.168.16.0/24 | 256 | 3 (1%) | 253 (99%) |
+----------+-----------------+-----------+------------+--------------+
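The block layout in the table above is consistent with the pool settings: a `blockSize` of 24 on a /24 `cidr` yields a single block covering the whole pool, while the default /26 block size on the 10.244.0.0/16 pool yields blocks of 64 addresses. A quick sketch of that arithmetic with Python's `ipaddress` module (the numbers mirror the table; this is not a Calico API):

```python
import ipaddress

def blocks(cidr: str, block_size: int):
    """Split a pool CIDR into allocation blocks of the given prefix length."""
    return list(ipaddress.ip_network(cidr).subnets(new_prefix=block_size))

# blockSize 24 on a /24 pool -> one block of 256 addresses
rack = blocks("192.168.16.0/24", 24)
print(len(rack), rack[0].num_addresses)        # 1 256

# a /26 block size on the /16 pool -> 1024 blocks of 64 addresses
default = blocks("10.244.0.0/16", 26)
print(len(default), default[0].num_addresses)  # 1024 64
```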
Calico borrowed IP list
[root@k8master-1 ~]# calicoctl ipam show --show-borrowed
+---------------+----------------+-----------------+-------------+------+--------------------+
| IP | BORROWING-NODE | BLOCK | BLOCK OWNER | TYPE | ALLOCATED-TO |
+---------------+----------------+-----------------+-------------+------+--------------------+
| 192.168.16.39 | k8worker-2 | 192.168.16.0/24 | | pod | default/racnode1-0 |
| 192.168.16.41 | k8worker-2 | 192.168.16.0/24 | | pod | default/racnode1-0 |
| 192.168.16.42 | k8worker-1 | 192.168.16.0/24 | | pod | default/racnode2-0 |
+---------------+----------------+-----------------+-------------+------+--------------------+
【Discussion】:
-
Where are you trying to ping from? Your machine, another pod, the master node, etc.?
-
Hi, I am trying to ping pod to pod. However, it would also be good to know whether we can ping the pods from the master and worker nodes, whose subnet CIDR is 10.0.1.0/24.
-
No, you can't ping a pod from a node; that's why Kubernetes Services exist.
-
What is the IP address range of your K8s nodes?
-
@Rico, thanks for the input. As I mentioned, I am looking for a way to ping pod to pod when both pods are on the same subnet. In my case, both pods were created with Calico networking, but on different worker nodes. The pod subnet is 192.168.16.0/24.
Tags: kubernetes calico