K爷: a DevOps Perspective

Server Deployment

Each server is provisioned with 4 CPU cores, 4 GB of RAM, and a 20 GB disk.
```
# uname -r
4.15.0-112-generic
# cat /etc/issue
Ubuntu 18.04.4 LTS \n \l
```
ha-node1 and ha-node2 run keepalived and haproxy, with the virtual IP 172.16.1.200 serving as the apiserver VIP. ha-node1 acts as the MASTER node and ha-node2 as the BACKUP node.

harbor-node1 runs the Harbor service, which stores the container images.

Detailed deployment of these two services is not covered here.
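Although the keepalived deployment itself is skipped, the VIP side can be sketched. Below is a minimal MASTER-side configuration for ha-node1; the interface name (eth0) and virtual_router_id are assumptions to adapt to the real environment:

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on ha-node2
    interface eth0          # assumption: replace with the actual NIC
    virtual_router_id 51    # must match on both nodes
    priority 100            # use a lower value (e.g. 90) on ha-node2
    advert_int 1
    virtual_ipaddress {
        172.16.1.200        # the apiserver VIP
    }
}
```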
Basic Environment Configuration

Time synchronization

```
# crontab -e
* * * * * /usr/sbin/ntpdate time1.aliyun.com
```
Configure hosts

```
# vim /etc/hosts
172.16.1.30 k8s-master1
172.16.1.31 k8s-master2
172.16.1.32 k8s-master3
172.16.1.33 k8s-node1
172.16.1.34 k8s-node2
172.16.1.35 k8s-node3
172.16.1.38 harbor.kevin.com
```
Enable IPv4 forwarding

```
# cat /proc/sys/net/ipv4/ip_forward
0
# echo 1 > /proc/sys/net/ipv4/ip_forward
```

Make it persistent in /etc/sysctl.conf:

```
# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
# sysctl -p
net.ipv4.ip_forward = 1
```
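Beyond ip_forward, kubeadm's preflight checks also expect bridged traffic to pass through iptables (with the br_netfilter module loaded). A commonly used sysctl fragment (the file path here is an assumption):

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```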
Disable swap

```
# swapoff -a
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3921         451        2436          25        1033        3199
Swap:             0           0           0
```
Disable SELinux and the firewall

```
# getenforce
Disabled
# systemctl stop ufw
# systemctl disable ufw
```
Install Docker

```
# apt-get update
# apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
# add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# apt-get -y install docker-ce docker-ce-cli
# IP=$(/sbin/ifconfig | grep -E "172.16" | grep netmask | awk '{print $2}' | awk '{print $1}')
# sed -i "s#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock#ExecStart=/usr/bin/dockerd -H ${IP} -H unix:///var/run/docker.sock --insecure-registry harbor.kevin.com#g" /lib/systemd/system/docker.service
# systemctl daemon-reload
# systemctl restart docker
# systemctl enable docker
```
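As an alternative to rewriting the systemd unit with sed, the insecure registry can be declared in /etc/docker/daemon.json (a sketch; the custom listening address would still require the unit-file change):

```json
{
  "insecure-registries": ["harbor.kevin.com"]
}
```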
Deploying k8s

Install kubeadm

Use k8s-master1 as the kubeadm control node.
```
# apt-get update && apt-get install -y apt-transport-https
# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y kubelet kubeadm kubectl
```
Install the pinned version on the masters. Run on all three master nodes:

```
# apt install kubeadm=1.17.2-00 kubectl=1.17.2-00 kubelet=1.17.2-00
```
Install the pinned version on the nodes. Run on all three worker nodes:

```
# apt install kubeadm=1.17.2-00 kubectl=1.17.2-00 kubelet=1.17.2-00
```
Initialize the master. Run the cluster initialization on any one of the three masters; it only needs to be done once.
Verify the kubeadm version

```
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:27:49Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
```
Prepare the images

```
# kubeadm config images list --kubernetes-version v1.17.2   # list the required images
```

The images may not be downloadable from k8s.gcr.io directly, so pull them from the Aliyun mirror first. Run the following on all three master nodes:
```
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
```
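The seven pulls above can also be scripted. A small Python sketch that just emits the pull commands (pipe its output to sh to actually run them; the image list is copied from the commands above):

```python
# Control-plane images required by kubeadm v1.17.2, pulled from the
# Aliyun mirror instead of k8s.gcr.io.
REPO = "registry.cn-hangzhou.aliyuncs.com/google_containers"
IMAGES = [
    "kube-apiserver:v1.17.2",
    "kube-controller-manager:v1.17.2",
    "kube-scheduler:v1.17.2",
    "kube-proxy:v1.17.2",
    "pause:3.1",
    "etcd:3.4.3-0",
    "coredns:1.6.5",
]
cmds = [f"docker pull {REPO}/{img}" for img in IMAGES]
print("\n".join(cmds))
```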
High-availability master deployment: the three k8s masters achieve high availability through the VIP.
```
# kubeadm init --apiserver-advertise-address=172.16.1.21 \
    --apiserver-bind-port=6443 \
    --control-plane-endpoint=172.16.1.200 \
    --ignore-preflight-errors=swap \
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version=v1.17.2 \
    --pod-network-cidr=10.10.0.0/16 \
    --service-cidr=192.168.1.0/20 \
    --service-dns-domain=kevin.local
```
Output:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 172.16.1.200:6443 --token jp6ttu.sri5a5404chzx19d \
    --discovery-token-ca-cert-hash sha256:866142437b9f82577b3ddaea5da528220207a94e4cd080f6c30523e31f857f58 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.1.200:6443 --token jp6ttu.sri5a5404chzx19d \
    --discovery-token-ca-cert-hash sha256:866142437b9f82577b3ddaea5da528220207a94e4cd080f6c30523e31f857f58
```
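A quick sanity check on the CIDR flags passed to kubeadm init above: the pod network and service network must not overlap. Note that 192.168.1.0/20 as written has host bits set (its actual network address is 192.168.0.0/20), which this sketch surfaces:

```python
import ipaddress

# CIDRs as passed to kubeadm init above.
pod_cidr = ipaddress.ip_network("10.10.0.0/16")
# strict=False is needed because 192.168.1.0/20 has host bits set;
# the normalized network is 192.168.0.0/20.
svc_cidr = ipaddress.ip_network("192.168.1.0/20", strict=False)

# Overlapping ranges would make pod and service addresses ambiguous.
assert not pod_cidr.overlaps(svc_cidr)
print(svc_cidr)  # -> 192.168.0.0/20
```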
Configure the kube-config file

```
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   master   16m   v1.17.2
```
Generate certificates on the current master, for use when adding new control-plane nodes

```
# kubeadm init phase upload-certs --upload-certs
I0722 15:00:17.683581   39071 version.go:251] remote version is much newer: v1.18.6; falling back to: stable-1.17
W0722 15:00:27.684724   39071 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.17.txt": Get https://dl.k8s.io/release/stable-1.17.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0722 15:00:27.684885   39071 version.go:102] falling back to the local client version: v1.17.2
W0722 15:00:27.685185   39071 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0722 15:00:27.685281   39071 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f77ce75b9aefb2903a617516542f365b99958113f44974e0030ccf2dbfa59307
```
Add a new master node

```
# kubeadm join 172.16.1.200:6443 --token jp6ttu.sri5a5404chzx19d \
    --discovery-token-ca-cert-hash sha256:866142437b9f82577b3ddaea5da528220207a94e4cd080f6c30523e31f857f58 \
    --control-plane --certificate-key f77ce75b9aefb2903a617516542f365b99958113f44974e0030ccf2dbfa59307
```

The token and hash in this command come from the kubeadm init output above. Output:
```
To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
```
Run on the newly joined master node:
```
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   master   31m   v1.17.2
k8s-master2   NotReady   master   91s   v1.17.2
```
Join the remaining master node in the same way.

Check again after a while:
```
# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   52m   v1.17.2
k8s-master2   Ready    master   22m   v1.17.2
k8s-master3   Ready    master   19m   v1.17.2
```
Deploying the Network Add-on

Using flannel as an example.
```
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# vim kube-flannel.yml
  net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
```
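The edit that matters in kube-flannel.yml is that "Network" matches the --pod-network-cidr given to kubeadm init; if they differ, pods receive addresses the rest of the cluster will not route. A tiny check of that invariant:

```python
import json

# The net-conf.json block from kube-flannel.yml, after editing.
net_conf = json.loads('{ "Network": "10.10.0.0/16", "Backend": { "Type": "vxlan" } }')

pod_network_cidr = "10.10.0.0/16"  # the value passed to kubeadm init
assert net_conf["Network"] == pod_network_cidr
print("flannel Network matches pod-network-cidr")
```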
Adding Worker Nodes

Run the following command on each node:
```
# kubeadm join 172.16.1.200:6443 --token jp6ttu.sri5a5404chzx19d \
    --discovery-token-ca-cert-hash sha256:866142437b9f82577b3ddaea5da528220207a94e4cd080f6c30523e31f857f58
```
This command comes from the kubeadm init output above.
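If the hash has been lost, the discovery-token-ca-cert-hash can be recomputed from the cluster CA: it is the SHA-256 of the CA certificate's DER-encoded public key (SubjectPublicKeyInfo). The sketch below shells out to openssl and demonstrates the computation on a throwaway self-signed certificate; on a real master you would point cert_path at /etc/kubernetes/pki/ca.crt:

```python
import hashlib
import os
import subprocess
import tempfile

tmp = tempfile.mkdtemp()
cert_path = os.path.join(tmp, "demo-ca.crt")

# Generate a throwaway self-signed cert as a stand-in for the cluster CA.
subprocess.run(
    ["openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes", "-days", "1",
     "-keyout", os.path.join(tmp, "demo-ca.key"),
     "-out", cert_path, "-subj", "/CN=kubernetes"],
    check=True, capture_output=True)

# Extract the public key (PEM), convert to DER-encoded SubjectPublicKeyInfo.
pub_pem = subprocess.run(
    ["openssl", "x509", "-pubkey", "-noout", "-in", cert_path],
    check=True, capture_output=True).stdout
spki_der = subprocess.run(
    ["openssl", "rsa", "-pubin", "-outform", "der"],
    input=pub_pem, check=True, capture_output=True).stdout

# This hex digest is what kubeadm expects after the "sha256:" prefix.
h = hashlib.sha256(spki_der).hexdigest()
print("sha256:" + h)
```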
The node automatically joins under the masters, pulls the images, and starts flannel; eventually the node shows as Ready on the master.

Check on a master node:
```
# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   70m     v1.17.2
k8s-master2   Ready    master   40m     v1.17.2
k8s-master3   Ready    master   36m     v1.17.2
k8s-node1     Ready    <none>   9m42s   v1.17.2
k8s-node2     Ready    <none>   4m40s   v1.17.2
k8s-node3     Ready    <none>   38s     v1.17.2
```
Verify the network

```
# kubectl run net-test --image=alpine --replicas=1 sleep 3600
# kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
net-test-7bd9fb9d89-g2kc7   1/1     Running   0          9s
# kubectl exec -it net-test-7bd9fb9d89-g2kc7 sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr BA:2E:64:C9:1B:E8
          inet addr:10.10.4.4  Bcast:10.10.4.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:668 (668.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping 172.16.1.29
PING 172.16.1.29 (172.16.1.29): 56 data bytes
64 bytes from 172.16.1.29: seq=0 ttl=63 time=1.397 ms
64 bytes from 172.16.1.29: seq=1 ttl=63 time=0.646 ms
64 bytes from 172.16.1.29: seq=2 ttl=63 time=0.715 ms
^C
--- 172.16.1.29 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.646/0.919/1.397 ms
/ # ping www.baidu.com
PING www.baidu.com (61.135.169.121): 56 data bytes
64 bytes from 61.135.169.121: seq=0 ttl=127 time=4.559 ms
64 bytes from 61.135.169.121: seq=1 ttl=127 time=4.181 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 4.181/4.370/4.559 ms
```
Installing the Dashboard

This can be done on k8s-master1.

- Download the required images
```
# docker pull kubernetesui/dashboard:v2.0.0-rc6
# docker pull kubernetesui/metrics-scraper:v1.0.3
```
- Push them to Harbor
```
# docker tag kubernetesui/dashboard:v2.0.0-rc6 harbor.kevin.com/base/dashboard:v2.0.0-rc6
# docker tag kubernetesui/metrics-scraper:v1.0.3 harbor.kevin.com/base/metrics-scraper:v1.0.3
# docker push harbor.kevin.com/base/dashboard:v2.0.0-rc6
# docker push harbor.kevin.com/base/metrics-scraper:v1.0.3
```
- Deploy the dashboard
```
# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml -O dashboard-2.0.0-rc6.yml
# vim dashboard-2.0.0-rc6.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002
  selector:
    k8s-app: kubernetes-dashboard
......
    spec:
      containers:
        - name: kubernetes-dashboard
          image: harbor.kevin.com/base/dashboard:v2.0.0-rc6
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
......
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: harbor.kevin.com/base/metrics-scraper:v1.0.3
          ports:
            - containerPort: 8000
              protocol: TCP
# vim admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# kubectl apply -f dashboard-2.0.0-rc6.yml
# kubectl apply -f admin-user.yml
# kubectl get service -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   192.168.1.105    <none>   8000/TCP        118s
kubernetes-dashboard   kubernetes-dashboard        NodePort    192.168.15.118   <none>   443:30002/TCP   118s
```
Browsing to http://<nodeip>:30002 now returns "Client sent an HTTP request to an HTTPS server." Access it via https://<nodeip>:30002 instead; the page then asks for a token.

- Get the token
```
# kubectl get secret -A | grep admin-user
kubernetes-dashboard   admin-user-token-f2xv8   kubernetes.io/service-account-token   3   11s
# kubectl describe secret admin-user-token-f2xv8 -n kubernetes-dashboard
Name:         admin-user-token-f2xv8
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 1511a4cb-3b24-4530-b244-3a72a9a943a5

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkVaalJWdlEtMnU2dVI0WVpUY3ZhUUlmbHNWMll1bndmRldNbkFWMVlCYVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWYyeHY4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxNTExYTRjYi0zYjI0LTQ1MzAtYjI0NC0zYTcyYTlhOTQzYTUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.pgd6iBYQqfttCxkiINOJ84TdQlTMZqWBflZDmjX8iqDN3fdb7akMtVTFj5Gl1CPvt9Lyu82XGm2OsZZW-MchN64cSltbCRGZa6YvrMdGlkbQ9ey0-7pEz4G08BALi6ikgawV2dcwDh7mUvns66sHQLcEs1jk6ju9CsMjdz9i5aJ3jqvnA01pInFf1vG9-wvsKVl-nX3JVYW4yenRYGxIiJZwgXIpwIhMlmAg32CqUkZwhSiE23xRFFkmL68EHu8WuuP8lx7hfpFI8-Bd1yml0ugU7uAq0ltInYzmRRer62EU8DRzBi2dPv4kc55bDaYQf7ts-LBJYmydtYmcPznbjg
```
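The token value above is a JWT: three base64url segments joined by dots, where the middle segment is plain JSON identifying the service account. A sketch that decodes such a payload, using a toy token built in place (the payload values are illustrative, not the real secret):

```python
import base64
import json

# Build a toy JWT-shaped token whose payload mimics what Kubernetes issues.
payload = {
    "iss": "kubernetes/serviceaccount",
    "kubernetes.io/serviceaccount/namespace": "kubernetes-dashboard",
    "kubernetes.io/serviceaccount/service-account.name": "admin-user",
}
body = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")
token = b"header." + body + b".signature"  # header/signature are placeholders

def jwt_payload(tok: bytes) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying it."""
    seg = tok.split(b".")[1]
    seg += b"=" * (-len(seg) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

print(jwt_payload(token)["kubernetes.io/serviceaccount/service-account.name"])
# -> admin-user
```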
Paste the token into the page to log in.