1. Configure the Aliyun Docker repo, install the prerequisite packages, and install docker-ce (all nodes)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-19.03.14
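The guide does not list it explicitly, but Docker also has to be started and enabled at boot before kubeadm can use it; a minimal sketch on a systemd-based CentOS host:

systemctl enable docker
systemctl start docker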
2. Configure the Aliyun Kubernetes repo and install kubeadm, kubectl, and kubelet (kubeadm and kubelet are needed on every node; kubectl only on the master)
[root@master manifests]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

yum install -y kubelet
yum install -y kubectl
yum install -y kubeadm
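Similarly, kubelet is normally enabled at boot right after installation so that kubeadm can manage it during init and join; a small, hedged addition to the steps above:

systemctl enable kubelet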
3. After installation, Docker still needs to be configured: the kubelet installed from the yum repo generates a config that sets --cgroup-driver to systemd, while Docker's cgroup driver defaults to cgroupfs, and the two must match. You can check with docker info:
[root@master manifests]# docker info | grep Cgroup
 Cgroup Driver: systemd
Modify /etc/docker/daemon.json so Docker uses the systemd cgroup driver:
[root@master manifests]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
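After editing daemon.json, Docker must be restarted for the new cgroup driver to take effect; a short sketch of that step:

systemctl daemon-reload
systemctl restart docker
docker info | grep Cgroup   # should now report: Cgroup Driver: systemd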
4. Disable swap
swapoff -a
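Note that swapoff -a only disables swap until the next reboot. To keep it off permanently you would also comment out the swap entry in /etc/fstab; one common way (adjust the pattern to your own fstab layout):

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap line, then double-check the file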
5. On the master node, pull the Kubernetes component images in advance and rename them with docker tag. kubeadm pulls images from k8s.gcr.io by default, which I cannot reach from my local network.
List the required images:
[root@master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Pull the images from a mirror on Docker Hub that is reachable:
docker pull aiotceo/kube-apiserver:v1.20.2
docker pull aiotceo/kube-controller-manager:v1.20.2
docker pull aiotceo/kube-proxy:v1.20.2
docker pull aiotceo/kube-scheduler:v1.20.2
docker pull aiotceo/pause:3.2
docker pull aiotceo/coredns:1.7.0
docker pull aiotceo/etcd:3.4.13-alpine
docker pull aiotceo/etcd:3.4.13-ubuntu
Retag them to the names kubeadm expects:
docker tag aiotceo/kube-apiserver:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2
docker tag aiotceo/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2
docker tag aiotceo/kube-proxy:v1.20.2 k8s.gcr.io/kube-proxy:v1.20.2
docker tag aiotceo/kube-scheduler:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2
docker tag aiotceo/pause:3.2 k8s.gcr.io/pause:3.2
docker tag aiotceo/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag aiotceo/etcd:3.4.13-ubuntu k8s.gcr.io/etcd:3.4.13-0
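The pull-and-retag sequence can also be scripted. The loop below is only a sketch based on the commands above, assuming the aiotceo mirror keeps the same image names; etcd is handled separately because its mirror tag (3.4.13-ubuntu) differs from the k8s.gcr.io tag (3.4.13-0):

for img in kube-apiserver:v1.20.2 kube-controller-manager:v1.20.2 \
           kube-scheduler:v1.20.2 kube-proxy:v1.20.2 pause:3.2 coredns:1.7.0; do
    docker pull aiotceo/${img}                     # pull from the Docker Hub mirror
    docker tag aiotceo/${img} k8s.gcr.io/${img}    # retag to the name kubeadm expects
done
docker pull aiotceo/etcd:3.4.13-ubuntu
docker tag aiotceo/etcd:3.4.13-ubuntu k8s.gcr.io/etcd:3.4.13-0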
6. Initialize the cluster

kubeadm init --kubernetes-version=v1.20.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.1.1.206

Here --pod-network-cidr=10.244.0.0/16 matches flannel's default pod network, and --apiserver-advertise-address is the master node's IP.
When initialization finishes you will see a message like this:
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.1.206:6443 --token hr8pxx.x7hnskwkz7dp20tq \
    --discovery-token-ca-cert-hash sha256:a894653ab32c92d89a4a43f6486bbe7cfbbeee1e601b5b3a99ffdcd68367737b
In other words, worker nodes can be added with kubeadm join, but a pod network plugin still has to be installed first.
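The same kubeadm init output also tells you to set up the admin kubeconfig before using kubectl on the master; running as root, that boils down to:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config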
At this point, checking the cluster status with kubectl get cs gives the following:
[root@master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
This is because the default kube-scheduler and kube-controller-manager manifests generated during initialization set --port=0, which disables the insecure health port. Comment out the --port=0 line in /etc/kubernetes/manifests/kube-scheduler.yaml and kube-controller-manager.yaml, then restart kubelet:
...
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
#    - --port=0
    image: k8s.gcr.io/kube-scheduler:v1.20.2
...

systemctl restart kubelet
Check the component statuses again; everything now reports Healthy:
[root@master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
7. Install the network plugin
As before, the kube-proxy image has to be pulled and retagged on the worker nodes ahead of time (a sketch of that follows), then download the flannel manifest.
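A sketch of that worker-node step, reusing the mirror images from step 5 (the pause image is included as well since kubelet needs it on every node):

# Run on each worker node; assumes the aiotceo mirror is reachable from there too
docker pull aiotceo/kube-proxy:v1.20.2
docker tag aiotceo/kube-proxy:v1.20.2 k8s.gcr.io/kube-proxy:v1.20.2
docker pull aiotceo/pause:3.2
docker tag aiotceo/pause:3.2 k8s.gcr.io/pause:3.2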
`wget https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml`
Note: open the link in a browser, copy its contents, and save them locally as kube-flannel.yml; running kubectl apply -f against this link directly fails.
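Alternatively, fetching the raw file instead of the GitHub HTML page should avoid the copy-and-paste workaround (not verified here, so treat it as a suggestion):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml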
kubectl apply -f kube-flannel.yml
Check:
[root@master ~]# kubectl -n kube-system get pod
NAME                             READY   STATUS    RESTARTS   AGE
etcd-master                      1/1     Running   0          71m
kube-apiserver-master            1/1     Running   0          71m
kube-controller-manager-master   1/1     Running   0          71m
kube-flannel-ds-5xsbn            1/1     Running   0          34m
kube-flannel-ds-9dxlm            1/1     Running   0          34m
kube-flannel-ds-xt564            1/1     Running   0          35m
8. Join the worker nodes to the cluster
On each worker node, join the cluster with the token and hash produced by the initialization above:
kubeadm join 10.1.1.206:6443 --token hr8pxx.x7hnskwkz7dp20tq \
    --discovery-token-ca-cert-hash sha256:a894653ab32c92d89a4a43f6486bbe7cfbbeee1e601b5b3a99ffdcd68367737b
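If you join a node later and the original token has expired (bootstrap tokens are valid for 24 hours by default), a fresh join command can be printed on the master:

kubeadm token create --print-join-command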
Check:
[root@master ~]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
csr-c4tqr   78m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:hr8pxx   Approved,Issued
csr-pdsnh   78m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:hr8pxx   Approved,Issued

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   86m   v1.20.2
node01   Ready    <none>                 78m   v1.20.2
node02   Ready    <none>                 78m   v1.20.2
9. Create CoreDNS
Pull the image in advance and retag it; the CoreDNS pods then come up:
docker pull aiotceo/coredns:1.7.0
docker tag aiotceo/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

[root@master ~]# kubectl -n kube-system get pod
NAME                      READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-ckd4v   1/1     Running   0          90m
coredns-74ff55c5b-pgw2h   1/1     Running   0          90m
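As an optional sanity check that CoreDNS is actually resolving cluster names (the busybox image and pod name here are just illustrative, borrowed from the upstream DNS debugging docs):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default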
PS: if anything goes wrong during the installation, reset with the following commands and run the initialization again; the same applies on the worker nodes.
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X