K8S Deployment


This article deploys Kubernetes 1.24.6 with kubeadm; a binary-installation guide will be published later.

1. Kubernetes 1.24 Release and Changes

1.1 Kubernetes 1.24 Release

Kubernetes 1.24 was officially released on May 3, 2022. In the new version, 12 enhancements graduated to stable; among other changes, StatefulSets gained support for batch rolling updates, and NetworkPolicy added a NetworkPolicyStatus field to make troubleshooting easier.

1.2 Changes in Kubernetes 1.24

Kubernetes v1.24 removed dockershim. To keep using Docker Engine you must install cri-dockerd, a shim that exposes a CRI-compliant interface on top of Docker Engine so Kubernetes can control Docker through the CRI.

2. Deploying a Kubernetes 1.24.6 Cluster

2.1 Environment Preparation

2.1.1 Host Operating System

This guide uses Ubuntu 18.04.1; upgrading the kernel to 5.4 or later is recommended.

root@k8s-master01:~# uname -a
Linux k8s-master01 5.4.0-112-generic #126~18.04.1-Ubuntu SMP Wed May 11 15:57:56 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux


2.1.2 Host Hardware

Role     IP          Hostname       CPU   Memory   Disk
master   11.0.1.21   k8s-master01   2C    4G       50GB
worker   11.0.1.31   k8s-node01     2C    4G       50GB
worker   11.0.1.32   k8s-node02     2C    4G       50GB

2.1.3 Host Configuration

2.1.3.1 Hostname Configuration

This deployment uses three hosts: one master node, named k8s-master01, and two worker nodes, named k8s-node01 and k8s-node02.

master node:
# hostnamectl set-hostname k8s-master01
worker01 node:
# hostnamectl set-hostname k8s-node01
worker02 node:
# hostnamectl set-hostname k8s-node02

2.1.3.2 Host IP Address Configuration

The k8s-master01 node IP address is 11.0.1.21/24:
root@master01:/opt# vim /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      addresses:
      - 11.0.1.21/24
      gateway4: 11.0.1.2
      nameservers:
        addresses:
        - 223.5.5.5
        search: []
  version: 2
The node01 IP address is 11.0.1.31/24:
# vim /etc/netplan/00-installer-config.yaml 
#  This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      addresses:
      - 11.0.1.31/24
      gateway4: 11.0.1.2
      nameservers:
        addresses:
        - 223.5.5.5
        search: []
  version: 2
The node02 IP address is 11.0.1.32/24:
# vim /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      addresses:
      - 11.0.1.32/24
      gateway4: 11.0.1.2
      nameservers:
        addresses:
        - 223.5.5.5
        search: []
  version: 2
After editing, apply the configuration on each node with: netplan apply

2.1.3.3 Hostname and IP Address Resolution

All cluster hosts require this configuration.

# cat /etc/hosts
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
11.0.1.21 k8s-master01
11.0.1.31 k8s-node01
11.0.1.32 k8s-node02
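Rather than pasting the mappings into each node by hand, the three entries can be appended idempotently. A minimal sketch (the HOSTS_FILE variable is an illustrative parameter so the loop can be rehearsed on a scratch file before pointing it at /etc/hosts):

```shell
# Append each cluster mapping only if it is not already present.
# HOSTS_FILE defaults to a scratch file for a safe dry run; set it to
# /etc/hosts to apply for real.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"
touch "$HOSTS_FILE"
for entry in "11.0.1.21 k8s-master01" \
             "11.0.1.31 k8s-node01" \
             "11.0.1.32 k8s-node02"; do
    # -x matches whole lines, -F disables regex: exact-line deduplication
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Because grep -qxF matches whole lines exactly, running the loop a second time adds nothing.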

2.1.3.4 Time Synchronization

Required on all hosts. On a minimal system install, the ntpdate package must be installed first.

root@k8s-master01:~# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com

2.1.3.5 Kernel Tuning

Required on all hosts.

Create the bridge-filter and IP-forwarding configuration file:
# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
Load the overlay and br_netfilter modules for the current boot:
modprobe overlay
modprobe br_netfilter
Load the modules persistently at boot:
root@k8s-master01:~#cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
Apply the sysctl configuration now that br_netfilter is available:
root@k8s-master01:~#sysctl --system
Verify the module is loaded:
root@k8s-master01:~#lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter

2.1.3.6 Install ipset and ipvsadm

Required on all hosts.

Install ipset and ipvsadm:
root@k8s-master01:~#apt -y install ipset ipvsadm
Configure module loading for ipvsadm:
Add the modules to be loaded:
root@k8s-master01:~#mkdir -p /etc/sysconfig/modules
root@k8s-master01:~#cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and check that the modules are loaded:
root@k8s-master01:~# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

2.1.3.7 Disable Swap

A reboot is required for the /etc/fstab change to take effect; if you cannot reboot, disable swap temporarily with swapoff -a.

root@k8s-master01:~#swapoff -a    # temporarily disable swap
root@k8s-master01:~#sed -i '/swap/s/^/#/' /etc/fstab     # permanently disable the swap partition (requires a reboot)

root@k8s-master01:~# cat /etc/fstab
......
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
A # has been added at the start of the line above.
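Since kubeadm refuses to initialize while swap is active, a quick check before proceeding is worthwhile. A minimal sketch that reads /proc/swaps (its first line is a header, so any extra line means an active swap device):

```shell
# Count the lines in /proc/swaps: 1 line (the header only) means swap is off.
swap_lines=$(wc -l < /proc/swaps)
if [ "$swap_lines" -le 1 ]; then
    echo "swap: off"
else
    echo "swap: still on - run swapoff -a and comment out the swap line in /etc/fstab"
fi
```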

2.1.4 Configure the Docker and Kubernetes Package Sources

Switch both the Docker and the Kubernetes package sources to the Aliyun mirrors.

Kubernetes source:
root@k8s-master01:~#apt-get update && apt-get install -y apt-transport-https
root@k8s-master01:~#curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
root@k8s-master01:~#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
root@k8s-master01:~#apt-get update -y

Docker source:
root@k8s-master01:~#apt-get -y install apt-transport-https ca-certificates curl software-properties-common
root@k8s-master01:~#curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
root@k8s-master01:~#add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
root@k8s-master01:~#apt-get -y update

2.1.5 Install Docker

root@k8s-master01:~#apt install -y docker-ce=5:20.10.18~3-0~ubuntu-bionic

root@k8s-master01:~#systemctl enable --now docker
Add the following to /etc/docker/daemon.json:
root@k8s-master01:~# cat /etc/docker/daemon.json
{
        "exec-opts": ["native.cgroupdriver=systemd"]
}

root@k8s-master01:~#systemctl daemon-reload
root@k8s-master01:~#systemctl restart docker
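dockerd fails to start if daemon.json is malformed, so a syntax check before the restart is cheap insurance. A sketch, with the path parameterized so it can be rehearsed outside /etc/docker (the DAEMON_JSON variable is an illustrative assumption; on a real node set it to /etc/docker/daemon.json):

```shell
# Write daemon.json, then verify it parses as JSON before restarting Docker.
DAEMON_JSON="${DAEMON_JSON:-/tmp/daemon.json}"
cat > "$DAEMON_JSON" <<'EOF'
{
        "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# A malformed daemon.json prevents dockerd from starting; check it first.
python3 -m json.tool "$DAEMON_JSON" > /dev/null && echo "daemon.json: valid JSON"
```

After the real restart, `docker info --format '{{.CgroupDriver}}'` should print `systemd`.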

2.1.5.1 Install cri-dockerd

Download cri-dockerd:
root@k8s-master01:~#curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd_0.2.5.3-0.ubuntu-focal_amd64.deb
Install it:
root@k8s-master01:~#dpkg -i cri-dockerd_0.2.5.3-0.ubuntu-focal_amd64.deb
Edit the service unit file:

root@k8s-master01:~#vim /lib/systemd/system/cri-docker.service
# In the ExecStart line, append: --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.7
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.7
systemctl daemon-reload && systemctl restart cri-docker.service
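Editing /lib/systemd/system directly works, but that file is overwritten when the package is upgraded; a systemd drop-in survives upgrades. A sketch of the equivalent drop-in (same ExecStart as above):

```shell
# Override the unit via a drop-in instead of editing the packaged file.
mkdir -p /etc/systemd/system/cri-docker.service.d
cat > /etc/systemd/system/cri-docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.7
EOF
systemctl daemon-reload && systemctl restart cri-docker.service
```

The empty `ExecStart=` line is required: it clears the unit's original ExecStart before the drop-in sets the new one.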

2.1.6 Install kubeadm, kubelet, and kubectl

root@k8s-master01:~#apt install -y kubeadm=1.24.6-00 kubelet=1.24.6-00 kubectl=1.24.6-00
root@k8s-master01:~#apt-mark hold kubeadm kubelet kubectl    # optional: pin the versions so routine upgrades don't move them

2.1.6.1 Prepare the Images Required for Kubernetes Initialization (uses a China mirror; skip the mirror settings if you can reach the upstream registry directly)

List the images:
root@k8s-master01:~# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.6
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.6
registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.6
registry.aliyuncs.com/google_containers/kube-proxy:v1.24.6
registry.aliyuncs.com/google_containers/pause:3.7
registry.aliyuncs.com/google_containers/etcd:3.5.3-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

Pull the images:
root@k8s-master01:~#kubeadm config images pull --kubernetes-version=v1.24.6 --node-name=k8s-master01 --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///run/cri-dockerd.sock

root@k8s-master01:~# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.24.6   860f263331c9   2 months ago    130MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.24.6   0bb39497ab33   2 months ago    110MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.24.6   c6c20157a423   2 months ago    119MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.24.6   c786c777a4e1   2 months ago    51MB
registry.aliyuncs.com/google_containers/etcd                      3.5.3-0   aebe758cef4c   7 months ago    299MB
registry.aliyuncs.com/google_containers/pause                     3.7       221177c6082a   8 months ago    711kB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   13 months ago   46.8MB

Note: all of the steps above are required on both the master node and the worker nodes.

2.2 Cluster Initialization

root@k8s-master01:~#kubeadm init --kubernetes-version=v1.24.6 --node-name=k8s-master01 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket unix:///run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers --upload-certs
Output like the following indicates success:
.............
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
...................
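The long init command line can also be captured in a kubeadm configuration file, which is easier to version-control and reuse. A sketch of the equivalent config (field names per the kubeadm.k8s.io/v1beta3 API used by 1.24):

```yaml
# kubeadm.yaml - equivalent of the kubeadm init flags above
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: k8s-master01
  criSocket: unix:///run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.6
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
```

Run it with `kubeadm init --config kubeadm.yaml --upload-certs`.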


#To reinitialize: if worker nodes have joined, run the reset on the workers first, then run the following on the control-plane node
kubeadm reset -f --cri-socket unix:///run/cri-dockerd.sock
rm -rf /etc/cni/net.d/  $HOME/.kube/config

2.3 Generate the kubectl Authorization File on the k8s-master01 Node

root@k8s-master01:~#mkdir -p $HOME/.kube
root@k8s-master01:~#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~#chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-master01:~#export KUBECONFIG=/etc/kubernetes/admin.conf

2.4 Enable kubectl Command Completion

kubectl has a rich command set, but command completion is not enabled by default. Enable it as follows:
root@k8s-master01:~#kubectl completion bash > /etc/profile.d/kubectl_completion.sh
. /etc/profile.d/kubectl_completion.sh
exit

root@k8s-master01:~# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   NotReady   control-plane   17m   v1.24.6

2.5 Install a Network Plugin

Calico is used here for the cluster network.

2.5.1 Install Calico

Calico can be installed in two ways:

  • Using the calico.yaml manifest (the method used here)
  • Using the Tigera Calico Operator (the current official recommendation)

2.5.1.1 Install with the calico.yaml Manifest

root@k8s-master01:~#wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
# strip the docker.io/ prefix from the image addresses
root@k8s-master01:~#sed -i 's#docker.io/##g' calico.yaml
root@k8s-master01:~# kubectl apply -f calico.yaml

Watch out for taints:
If the calico-kube-controllers pod stays stuck in Pending, delete the pod and a replacement is created automatically.

A successful deployment looks like this:

root@k8s-master01:~# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-6799f5f4b4-nrc7p   1/1     Running   13 (19h ago)   56d
calico-node-5xllg                          1/1     Running   9 (19h ago)    56d
calico-node-kjrhc                          1/1     Running   8 (19h ago)    56d
calico-node-lrnd8                          1/1     Running   1 (19h ago)    20h
coredns-74586cf9b6-dl8bz                   1/1     Running   9 (19h ago)    56d
coredns-74586cf9b6-rvzlq                   1/1     Running   9 (19h ago)    56d
etcd-k8s-master01                          1/1     Running   9 (19h ago)    56d
kube-apiserver-k8s-master01                1/1     Running   4 (19h ago)    56d
kube-controller-manager-k8s-master01       1/1     Running   15 (19h ago)   56d
kube-proxy-dl7pc                           1/1     Running   1 (19h ago)    20h
kube-proxy-nhlxp                           1/1     Running   8 (19h ago)    56d
kube-proxy-s7jv7                           1/1     Running   9 (19h ago)    56d
kube-scheduler-k8s-master01                1/1     Running   16 (19h ago)   56d

Check the cluster status:

root@k8s-master01:~# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   56d   v1.24.6

2.6 Add the Worker Nodes to the Cluster

root@k8s-node01:~#kubeadm join 11.0.1.21:6443 --token tb80qx.ce0k28l6bhsxcdtl --discovery-token-ca-cert-hash sha256:6ffda531131e163655b68f4b1a09a5d37bc490400fa9cc0f740265283edddeb3 --cri-socket unix:///run/cri-dockerd.sock

root@k8s-node02:~#kubeadm join 11.0.1.21:6443 --token tb80qx.ce0k28l6bhsxcdtl --discovery-token-ca-cert-hash sha256:6ffda531131e163655b68f4b1a09a5d37bc490400fa9cc0f740265283edddeb3 --cri-socket unix:///run/cri-dockerd.sock
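Bootstrap tokens expire after 24 hours by default, so the token printed by kubeadm init will not work for nodes added later. A fresh join command can be generated on the master at any time:

```shell
# On k8s-master01: create a new token and print a complete join command
kubeadm token create --print-join-command
# Append the runtime socket when running the printed command on the worker:
#   ... --cri-socket unix:///run/cri-dockerd.sock
```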

2.7 Verify Cluster Availability

root@k8s-master01:~# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   56d   v1.24.6
k8s-node01     Ready    <none>          56d   v1.24.6
k8s-node02     Ready    <none>          20h   v1.24.6

root@k8s-master01:~# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
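`kubectl get nodes` only proves the kubelets registered; scheduling a pod and resolving a cluster DNS name also exercises the network plugin and CoreDNS. A quick smoke test (a sketch; busybox:1.28 is chosen because its nslookup works reliably against CoreDNS):

```shell
# Run a throwaway pod that resolves the kubernetes service via CoreDNS
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it \
    -- nslookup kubernetes.default
# The lookup should return the kubernetes service IP (10.96.0.1, given the
# service CIDR chosen at init time)
```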
