References: http://docs.kubernetes.org.cn/459.html
https://blog.csdn.net/gui951753/article/details/83316976#_1
http://www.mamicode.com/info-detail-2544943.html
https://blog.csdn.net/nklinsirui/article/details/80581286#debian-ubuntu
https://cloud.tencent.com/info/852b3e3d1ad1bc020eaacd3bef724443.html
1. Planning
| IP | Node role | Responsibility |
|---|---|---|
| 172.10.30.100 | master | Exposes the API externally; handles workload scheduling and configuration internally |
| 172.10.30.101 | node1 | Runs the cluster's actual workloads |
| 172.10.30.102 | node2 | Same as node1 |
2. Prerequisites
- Hostname resolution (add the entries to /etc/hosts):

~~~
172.10.30.100 master
172.10.30.101 node1
172.10.30.102 node2
~~~

Copy this file to every node in the cluster, master and worker nodes alike; a sketch follows below.
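To push the same file to the other machines from the master, a small loop like the following works, assuming root SSH access between the nodes (the host names are the ones from the table above):

~~~
# Copy /etc/hosts from the master to each worker node (assumes root SSH access)
for host in node1 node2; do
    scp /etc/hosts ${host}:/etc/hosts
done
~~~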
- Time synchronization (using the chrony service):

~~~
yum -y install chrony
vim /etc/chrony.conf
~~~

~~~
server master
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
~~~

Comment out the original server entries and use the master node as the time source instead.
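Not part of the original steps, but worth doing: restart chronyd after the edit and confirm the master is actually selected as a time source:

~~~
systemctl enable chronyd
systemctl restart chronyd
chronyc sources   # once synced, the selected source line is marked with '^*'
~~~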
- Disable iptables, firewalld, and SELinux on all nodes:

~~~
iptables -F
systemctl stop firewalld
systemctl disable firewalld
sed -i '/^SELINUX=/s/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
~~~
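A quick sanity check on each node that the firewall is really off and SELinux is no longer enforcing:

~~~
systemctl is-active firewalld   # expect "inactive"
getenforce                      # expect "Permissive" now, "Disabled" after a reboot
~~~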
- Make bridged traffic visible to iptables (so packets crossing the bridge are not exempt from filtering) and enable IP forwarding:

~~~
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.conf       # has no effect on these keys
sysctl -p /etc/sysctl.d/k8s.conf # this is the one that works
~~~
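If `sysctl -p` complains that the `net.bridge.*` keys do not exist, the br_netfilter kernel module is likely not loaded yet; loading it first fixes this (the modules-load.d file name below is just a convention):

~~~
modprobe br_netfilter
# load the module on every boot as well
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
~~~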
- Disable swap:

~~~
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab   # comment out swap entries so they stay off after reboot
~~~
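To confirm swap is really off:

~~~
free -m   # the Swap line should read 0 across the board
~~~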
3. Installing Docker
Reference: https://docs.docker.com/install/linux/docker-ce/centos/#uninstall-old-versions
We install a specific version here: docker-ce 18.06.
Note that Kubernetes 1.13 has been validated against Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, and 18.06; the minimum supported Docker version is 1.11.1 and the highest is 18.06, while the latest Docker release is already 18.09. We therefore pin the install to 18.06.1-ce:
~~~
# Remove old Docker versions
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine

# Dependencies
yum install -y yum-utils \
           device-mapper-persistent-data \
           lvm2

# Official docker-ce yum repo
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Aliyun docker-ce yum repo -- use one or the other
# -P sets the directory the downloaded repo file is saved to
wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# List the available Docker versions
yum list docker-ce --showduplicates | sort -r

# Install
yum install docker-ce-18.06.1.ce-3.el7 -y
systemctl start docker
systemctl enable docker
~~~
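A quick check that the pinned version was installed, and which cgroup driver Docker is using (kubelet must agree with it; both default to cgroupfs in this setup):

~~~
docker --version              # expect 18.06.1-ce
docker info | grep -i cgroup  # expect "Cgroup Driver: cgroupfs"
~~~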
4. Installing kubectl, kubelet, and kubeadm
~~~
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
~~~
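Verify that the three components are installed and agree on a version (with this repo they should all be the current 1.13.x release):

~~~
kubeadm version -o short   # e.g. v1.13.0
kubelet --version          # e.g. Kubernetes v1.13.0
kubectl version --client --short
~~~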
5. Creating a single-master cluster with kubeadm
5.1 Initializing the master node
The Kubernetes control-plane components, including etcd and the API server (which kubectl talks to), run on the master node.
Before running the initialization, there are three points to note:
1. Choose a network plugin and check whether it requires any parameters at init time; for example, the chosen plugin may dictate the value of --pod-network-cidr. See: Installing a pod network add-on.
2. kubeadm uses the default network interface, usually eth0 (typically the internal IP), as the master node's advertise address. To use a different interface, pass --apiserver-advertise-address=<ip-address>. With IPv6 you must supply an IPv6 address, e.g. --apiserver-advertise-address=fd00::101.
3. Run kubeadm config images pull to pre-fetch the images needed for initialization; this also verifies that the Kubernetes registries are reachable.
The default Kubernetes registry is k8s.gcr.io, which is plainly unreachable from mainland China, so installation was very painful with kubeadm versions before v1.13. Version 1.13 finally fixed this pain point by adding an --image-repository parameter (default: k8s.gcr.io); we set it to the domestic mirror registry.aliyuncs.com/google_containers, after which everything else can follow the official documentation.
We also specify --kubernetes-version, because its default, stable-1, makes kubeadm fetch the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (latest at the time: v1.13.0) skips that network request.
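Putting these together, the pre-pull from point 3 can also be aimed at the mirror; a sketch using the same flags as the init command below:

~~~
# Pre-pull the control-plane images from the Aliyun mirror
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.0
~~~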
Now, let's give it a try:
~~~
# Using the Calico network: --pod-network-cidr=192.168.0.0/16
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.0 --pod-network-cidr=192.168.0.0/16
~~~
Output:
~~~
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.10.30.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.10.30.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.10.30.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.004391 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 485yy9.azkhnftmz2mf9me5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.10.30.100:6443 --token 485yy9.azkhnftmz2mf9me5 --discovery-token-ca-cert-hash sha256:165d19adeaac9bd84837367b414a45b01879dbb8f36092a32b957223904e9c30
~~~
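Following the instructions printed at the end of the output, configure kubectl for a regular user and check that the master registered; it will report NotReady until a pod network add-on (Calico here) is applied:

~~~
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # the master stays NotReady until the pod network is deployed
~~~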