[Question Title]: Single-node MicroK8s: Multus master interface cannot be reached
[Posted]: 2021-11-17 20:22:42
[Question Description]:

I have a single-node MicroK8s cluster with Calico. I have successfully deployed Multus, and pods are created with a second network interface: the interface is visible inside the pod and the correct IP address is assigned. Pods can reach each other over the second interface, but from inside a pod I cannot reach the host's eno8 (IP address 10.128.1.244), which is the Multus master interface. I also cannot reach the pods from outside the host.

I am new to this kind of deployment and need help figuring out what is wrong.

Thanks.

Here are some details about my environment:

ubuntu@test:$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
test   Ready    <none>   9d    v1.21.4-3+e5758f73ed2a04

ip a on the host:
ubuntu@test:$ ip a
8: eno8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 3c:ec:ef:6c:2c:ff brd ff:ff:ff:ff:ff:ff
    inet 10.128.1.244/24 brd 10.128.1.255 scope global eno8
       valid_lft forever preferred_lft forever
    inet6 fe80::3eec:efff:fe6c:2cff/64 scope link 
       valid_lft forever preferred_lft forever

ubuntu@test:$ kubectl get pods --all-namespaces | grep -i multus
kube-system          kube-multus-ds-amd64-dz42s                1/1     Running   0          175m

NetworkAttachmentDefinition (Helm template):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: test-network
spec:
  config: '{
    "cniVersion": "{{ .Values.Multus_cniVersion}}",
    "name": "test-network",
    "type": "{{ .Values.Multus_driverType}}",
    "master": "{{ .Values.Multus_master_interface}}",
    "mode": "{{ .Values.Multus_interface_mode}}",
    "ipam": {
      "type": "{{ .Values.Multus_ipam_type}}",
      "subnet": "{{ .Values.Multus_ipam_subnet}}",
      "rangeStart": "{{ .Values.Multus_ipam_rangeStart}}",
      "rangeEnd": "{{ .Values.Multus_ipam_rangeStop}}",
      "routes": [
        { "dst": "{{ .Values.Multus_defaultRoute}}" }
      ],
      "dns": {"nameservers": ["{{ .Values.Multus_DNS}}"]},
      "gateway": "{{ .Values.Multus_ipam_gw}}"
    }
  }'

Helm values:

Multus_cniVersion: 0.3.1
Multus_driverType: macvlan
Multus_master_interface: eno8
Multus_interface_mode: bridge
Multus_ipam_type: host-local
Multus_ipam_subnet: 10.128.1.0/24
Multus_ipam_rangeStart: 10.128.1.147
Multus_ipam_rangeStop: 10.128.1.156
Multus_defaultRoute: 0.0.0.0/0
Multus_DNS: 10.128.1.1
Multus_ipam_gw: 10.128.1.1
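
For reference, pods pick up this second interface via the standard Multus annotation on the pod spec. A minimal sketch, assuming a simple test pod (the pod name, image, and command here are placeholders, not from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multus-test                           # placeholder name
  annotations:
    # Attaches the "test-network" NetworkAttachmentDefinition above;
    # Multus adds it as the pod's second interface (eth1)
    k8s.v1.cni.cncf.io/networks: test-network
spec:
  containers:
    - name: app
      image: busybox                          # placeholder image
      command: ["sleep", "infinity"]
```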

ubuntu@test:$ kubectl get network-attachment-definitions
NAME         AGE
test-network   8m39s

Network description:

ubuntu@test:$ kubectl describe network-attachment-definitions.k8s.cni.cncf.io test-network 
Name:         test-network
Namespace:    default
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: test-demo
              meta.helm.sh/release-namespace: default
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2021-09-24T12:15:08Z
  Generation:          1
  Managed Fields:
    API Version:  k8s.cni.cncf.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:meta.helm.sh/release-name:
          f:meta.helm.sh/release-namespace:
        f:labels:
          .:
          f:app.kubernetes.io/managed-by:
      f:spec:
        .:
        f:config:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2021-09-24T12:15:08Z
  Resource Version:  1062851
  Self Link:         /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/test-network
  UID:               c96f3a0f-b30f-4972-9271-6b2871adf299
Spec:
  Config:  { "cniVersion": "0.3.1", "name": "test-network", "type": "macvlan", "master": "eno8", "mode": "bridge", "ipam": { "type": "host-local", "subnet": "10.128.1.0/24", "rangeStart": "10.128.1.147", "rangeEnd": "10.128.1.156", "routes": [ { "dst": "0.0.0.0/0" } ], "dns": {"nameservers": ["10.128.1.1"]}, "gateway": "10.128.1.1" } }
Events:    <none>



ip a in the pod:

root@test-deployment-6465bdfccc-k2sst:# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if505: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default 
    link/ether 22:a8:17:13:35:39 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.19.149/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20a8:17ff:fe13:3539/64 scope link 
       valid_lft forever preferred_lft forever
4: eth1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether de:c1:d7:67:08:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.128.1.149/24 brd 10.128.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::dcc1:d7ff:fe67:893/64 scope link 
       valid_lft forever preferred_lft forever

Ping to eno8 from the pod:
root@test-deployment-6465bdfccc-g8bd4:# ping 10.128.1.244
PING 10.128.1.244 (10.128.1.244) 56(84) bytes of data.
^X^C
--- 10.128.1.244 ping statistics ---
14 packets transmitted, 0 received, 100% packet loss, time 13313ms
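
The 100% loss above is the classic macvlan host-isolation symptom: the kernel does not hairpin traffic between a macvlan child interface and its parent. A commonly cited host-side workaround (not part of the original post; the interface name macvlan-shim and the spare address 10.128.1.200 are assumptions, and these commands must be run as root on the host) is to give the host its own macvlan child on eno8 and route pod addresses through it:

```
# Create a macvlan child on the host, in bridge mode like the pods' interfaces;
# macvlan children of the same parent can reach each other even though
# parent <-> child traffic is blocked.
ip link add macvlan-shim link eno8 type macvlan mode bridge
# Assign a spare address from the same subnet (assumed free, outside the pod range)
ip addr add 10.128.1.200/32 dev macvlan-shim
ip link set macvlan-shim up
# Add a /32 route per pod IP via the shim (repeat for each pod in the range),
# so host->pod traffic leaves via the shim instead of eno8
ip route add 10.128.1.149/32 dev macvlan-shim
```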

Ping to the Multus gateway from the pod:

root@test-deployment-6465bdfccc-k2sst:# ping 10.128.1.1
PING 10.128.1.1 (10.128.1.1) 56(84) bytes of data.
From 10.128.1.149 icmp_seq=1 Destination Host Unreachable
From 10.128.1.149 icmp_seq=2 Destination Host Unreachable
From 10.128.1.149 icmp_seq=3 Destination Host Unreachable
From 10.128.1.149 icmp_seq=4 Destination Host Unreachable
From 10.128.1.149 icmp_seq=5 Destination Host Unreachable
From 10.128.1.149 icmp_seq=6 Destination Host Unreachable
^C
--- 10.128.1.1 ping statistics ---
8 packets transmitted, 0 received, +6 errors, 100% packet loss, time 7164ms
pipe 4


netstat in the pod:
root@test-deployment-6465bdfccc-k2sst:# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG        0 0          0 eth0
10.128.1.0      0.0.0.0         255.255.255.0   U         0 0          0 eth1
169.254.1.1     0.0.0.0         255.255.255.255 UH        0 0          0 eth0

ip r in the pod:
root@test-deployment-6465bdfccc-g8bd4:# ip r
default via 169.254.1.1 dev eth0 
10.128.1.0/24 dev eth1 proto kernel scope link src 10.128.1.149 
169.254.1.1 dev eth0 scope link 

[Question Discussion]:

    Tags: kubernetes, microk8s


    [Solution 1]:

    Your problem most likely stems from the fact that a MACVLAN interface cannot be reached from the default-route interface of the same host. Suppose your machine has interface eth0 with IP 10.0.0.2, and you use MACVLAN to give a container an interface whose parent is eth0 (or a subinterface such as eth0.1) with IP 10.0.0.3. You will not be able to reach a service running on 10.0.0.3 from the same host, but you will be able to reach it from another host. To solve this, use IPVLAN in layer-3 mode to get a fully routable flat network. Note that you cannot use port forwarding to reach the container, because MACVLAN separates the traffic at a lower layer. Alternatively, use subinterfaces in 802.1q trunk mode, but then you need a switch whose port supports promiscuous mode in order to pass VLAN-tagged traffic.
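
The IPVLAN alternative suggested above could look like the following NetworkAttachmentDefinition. This is only a sketch adapted from the question's macvlan config (the name test-network-ipvlan is an assumption); in l3 mode the parent interface handles egress, so the gateway field is dropped:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: test-network-ipvlan      # assumed name; adapt as needed
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "test-network-ipvlan",
    "type": "ipvlan",
    "master": "eno8",
    "mode": "l3",
    "ipam": {
      "type": "host-local",
      "subnet": "10.128.1.0/24",
      "rangeStart": "10.128.1.147",
      "rangeEnd": "10.128.1.156",
      "routes": [ { "dst": "0.0.0.0/0" } ]
    }
  }'
```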

    [Discussion]:
