
Deploying Kubernetes v1.25.3 (k8s) with the containerd Container Runtime








Table of Contents


  • Preface
  • 1. Prerequisites
  • 2. Environment configuration (run on all nodes)
  • 3. Install containerd (run on all nodes)
    • 3.1 Install containerd
    • 3.2 Install runc
    • 3.3 Install the CNI plugins
    • 3.4 Configure a registry mirror

  • 4. cgroup driver (run on all nodes)
  • 5. Install crictl (run on all nodes)
  • 6. Deploy the cluster with kubeadm
    • 6.1 Install kubeadm, kubelet, kubectl (run on all nodes)
    • 6.2 kubeadm init (run on the master node)
    • 6.3 Deploy the network (run on the master node)
      • 6.3.1 Notes
      • 6.3.2 Steps (downloading calico)


  • Summary
  • References



Preface

Hello everyone, I'm 秋意临.

Today I'm sharing how to deploy Kubernetes v1.25.3 (the latest release as of November 2022). Since v1.24, Dockershim has been removed from the Kubernetes project, so the container runtime (the software responsible for running containers) is no longer Docker. This article uses containerd as the container runtime.

Several common container runtimes for Kubernetes (see the official Kubernetes documentation for usage details):


  • containerd
  • CRI-O
  • Docker Engine
  • Mirantis Container Runtime

1. Prerequisites

The setup used in this article is as follows:


OS      CPU   RAM   IP              NIC   Hostname
Linux   2     4G    192.168.200.5   NAT   master
Linux   2     4G    192.168.200.6   NAT   node


Minimum requirements: at least 2 CPU cores and no less than 2 GB of RAM.


Pay attention to which node each command is run on.


2. Environment configuration (run on all nodes)

Set the hostname

# On the master node
hostnamectl set-hostname master
bash
# On the node
hostnamectl set-hostname node
bash

Configure the hosts mapping

cat >> /etc/hosts << EOF
192.168.200.5 master
192.168.200.6 node
EOF
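
A quick optional check, not required by the original steps, to confirm the mapping works on both nodes:

# Optional: confirm that both hostnames resolve via /etc/hosts
getent hosts master node
ping -c 1 node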

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

setenforce 0
sed -i "s/SELINUX&#61;enforcing/SELINUX&#61;disabled/g" /etc/selinux/config

Disable swap



Swap must be disabled for the kubelet to work properly.


swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

Forward IPv4 and let iptables see bridged traffic



For iptables on the Linux nodes to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.


# Forward IPv4 and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | grep br_netfilter  # verify that the br_netfilter module is loaded
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
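
An optional check to confirm the modules are loaded and the three sysctl values are active (both modules should be listed and all three values should equal 1):

# Optional verification of the kernel modules and sysctl settings
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward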

Configure time synchronization

# Remove the default CentOS repo files and configure the Aliyun Centos-7.repo
rm -rf /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# Option 1: install and configure chrony for time synchronization
IP=`ip addr | grep 'state UP' -A2 | grep inet | egrep -v '(127.0.0.1|inet6|docker)' | awk '{print $2}' | tr -d "addr:" | head -n 1 | cut -d / -f1`
yum install -y chrony
sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i "7s|^|server $IP iburst|g" /etc/chrony.conf
echo "allow all" >> /etc/chrony.conf
echo "local stratum 10" >> /etc/chrony.conf
systemctl restart chronyd
systemctl enable chronyd
timedatectl set-ntp true
sleep 5
systemctl restart chronyd
chronyc sources
# Option 2: one-shot time sync (note: the time is not kept in sync after a reboot)
yum install ntpdate -y
ntpdate ntp1.aliyun.com
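
If you used option 1, you can optionally confirm that chrony is actually synchronizing; "Leap status: Normal" and a small system-time offset indicate a healthy state:

# Optional: check time synchronization status
chronyc tracking
timedatectl status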

3. Install containerd (run on all nodes)

3.1 Install containerd

Download the containerd package.
Go to https://github.com/, search for containerd, open the project and find the Releases page, then scroll down to the tar package for the corresponding version.

$ tar Cvzxf /usr/local containerd-1.6.9-linux-amd64.tar.gz
# Run containerd via systemd
$ vi /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target

# Reload the configuration and start containerd
systemctl daemon-reload
systemctl enable --now containerd
# Verify
ctr version
# Generate the default configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd

3.2 Install runc

# Download runc from: https://github.com/opencontainers/runc/releases
# Install
install -m 755 runc.amd64 /usr/local/sbin/runc
# Verify
runc -v

3.3 Install the CNI plugins

# Download the CNI plugins from: https://github.com/containernetworking/plugins/releases
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
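
Optionally verify that the plugin binaries landed in the directory the kubelet and calico will look in later:

# Optional: bridge, loopback, portmap, etc. should now be present
ls /opt/cni/bin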

3.4 Configure a registry mirror

# Reference: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# Set config_path = "/etc/containerd/certs.d"
sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
mkdir /etc/containerd/certs.d/docker.io -p
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF


systemctl daemon-reload && systemctl restart containerd

4. cgroup driver (run on all nodes)

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interface with cgroups to manage resources for Pods and containers, for example to set requests and limits for CPU and memory.
To interface with cgroups, the kubelet and the container runtime each use a cgroup driver. The critical point is that the kubelet and the container runtime must use the same cgroup driver with the same configuration.

# Change SystemdCgroup = false to SystemdCgroup = true
sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/g' /etc/containerd/config.toml
# Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
sed -i 's/sandbox_image\ =.*/sandbox_image\ =\ "registry.aliyuncs.com\/google_containers\/pause:3.8"/g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl restart containerd
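
Before moving on, it is worth confirming that both edits actually landed in the config file:

# Optional check: expect SystemdCgroup = true and the pause:3.8 image from registry.aliyuncs.com
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml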

5. Install crictl (run on all nodes)

Within Kubernetes, containers are managed with crictl, not with ctr.

crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.

# Configure crictl to talk to the containerd runtime.
tar -vzxf crictl-v1.25.0-linux-amd64.tar.gz
mv crictl /usr/local/bin/
cat >> /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
EOF

systemctl restart containerd
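
With the configuration in place, a few basic crictl commands show that it is talking to containerd; at this stage the lists are still mostly empty because no Kubernetes components are running yet:

# Optional crictl smoke test
crictl info      # CRI runtime status
crictl images    # images in the k8s.io namespace
crictl ps -a     # containers (empty before kubeadm init)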

6. Deploy the cluster with kubeadm

6.1 Install kubeadm, kubelet, kubectl (run on all nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install --nogpgcheck kubelet-1.25.3 kubeadm-1.25.3 kubectl-1.25.3 -y
systemctl enable kubelet
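
Before initializing the cluster, optionally confirm that all three components report the expected 1.25.3 version on every node:

# Optional version check
kubeadm version -o short
kubelet --version
kubectl version --client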

An error may occur during the yum install, as shown below:

# See https://blog.csdn.net/Dan1374219106/article/details/112450922 for details
[root@master ~]# yum install --nogpgcheck kubelet
Loaded plugins: fastestmirror
Existing lock /var/run/yum.pid: another copy is running as pid 8721.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 44 M RSS (444 MB VSZ)
Started: Fri Nov 11 20:40:32 2022 - 02:07 ago
State : Traced/Stopped, pid: 8721
# Fix
[root@master ~]# rm -f /var/run/yum.pid

6.2 kubeadm init (run on the master node)


# Check the kubeadm version; here GitVersion is "v1.25.3"
[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
# Generate the default configuration file
$ kubeadm config print init-defaults > kubeadm.yaml
$ vi kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.5 # change to the host IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master # change to the hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change to the Aliyun image repository
kind: ClusterConfiguration
kubernetesVersion: 1.25.3 # set this to match your kubeadm version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 # set the pod network CIDR
scheduler: {}
# Added content: configure the kubelet cgroup driver to systemd
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
# Pull the images
$ kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.25.3
# Initialize
$ kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.200.5:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
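
The join token shown above is valid for 24 hours (the ttl set in kubeadm.yaml). If it expires before a worker joins, a fresh join command can be printed on the master:

# Optional: print a new join command with a fresh token
kubeadm token create --print-join-command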

# Run on the master node
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Run on the node
[root@node ~]# kubeadm join 192.168.200.5:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f

# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   3m25s   v1.25.4
node     NotReady   <none>          118s    v1.25.4

Both nodes report NotReady because no Pod network add-on has been installed yet; this is addressed in the next section.

6.3 Deploy the network (run on the master node)


6.3.1 Notes



When testing for this post, pulling images with ctr was extremely slow, so we pull the images with Docker instead: first install docker-ce on the node, pull the images required by the calico network plugin, package them with docker save, and then upload them to the master node. The steps are as follows:


Note: the calico.yaml downloaded from the address below may not work; the error output is shown further down. Opening the downloaded file shows that it uses image version v3.14.2, which did not work in my testing. Error message: resource mapping not found for name: "bgpconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"

# If you run into the issue above, download and use the calico.yaml file provided by this blog.

yum install -y wget
wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml --no-check-certificate
# Deploying this calico.yaml fails; use the calico file provided by this blog instead
[root@master ~]# kubectl apply -f calico.yaml
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers configured
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
resource mapping not found for name: "bgpconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "bgppeers.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "blockaffinities.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "clusterinformations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "felixconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "globalnetworkpolicies.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "globalnetworksets.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "hostendpoints.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ipamblocks.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ipamconfigs.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ipamhandles.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ippools.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "kubecontrollersconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "networkpolicies.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "networksets.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first

6.3.2 Steps (downloading calico)

Follow the WeChat official account CloudLog无名小歌 or 秋意临 and reply "calico" to get the download.

Install docker-ce on the node and pull the images as follows

# Install docker-ce
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
# Configure a registry mirror
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors":["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
# Pull and save the images
docker pull docker.io/calico/node:v3.24.4
docker save -o calico_node_v3.24.4.tar docker.io/calico/node:v3.24.4
docker pull docker.io/calico/cni:v3.24.4
docker save -o calico_cni_v3.24.4.tar docker.io/calico/cni:v3.24.4
docker pull docker.io/calico/kube-controllers:v3.24.4
docker save -o calico_kube-controllers_v3.24.4.tar docker.io/calico/kube-controllers:v3.24.4
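
The saved tar archives then have to be copied from the node to the master; a plain scp is enough (the target directory /root/ here is just an example):

# Copy the image archives to the master node (run on the node)
scp calico_node_v3.24.4.tar calico_cni_v3.24.4.tar calico_kube-controllers_v3.24.4.tar root@master:/root/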

Run on the master node

Because containerd has the concept of namespaces, and the namespace used by Kubernetes is k8s.io, the images must be imported into that namespace:


# Import the images into the k8s.io namespace
ctr -n k8s.io image import calico_node_v3.24.4.tar
ctr -n k8s.io image import calico_cni_v3.24.4.tar
ctr -n k8s.io image import calico_kube-controllers_v3.24.4.tar
# Check that the images were imported into the k8s.io namespace
[root@master ~]# crictl images
...
...
IMAGE TAG IMAGE ID SIZE
docker.io/calico/cni v3.24.4 0b046c51c02a8 198MB
docker.io/calico/kube-controllers v3.24.4 0830ebe059a9e 71.4MB
docker.io/calico/node v3.24.4 32c45127e587f 226MB
registry.aliyuncs.com/google_containers/coredns v1.9.3 5185b96f0becf 14.8MB
registry.aliyuncs.com/google_containers/etcd 3.5.4-0 a8a176a5d5d69 102MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.25.3 0346dbd74bcb9 34.2MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.25.3 6039992312758 31.3MB
registry.aliyuncs.com/google_containers/kube-proxy v1.25.3 beaaf00edd38a 20.3MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.25.3 6d23ec0e8b87e 15.8MB
registry.aliyuncs.com/google_containers/pause 3.8 4873874c08efc 311kB
# Apply the calico.yaml provided by this blog, then wait a few minutes.
[root@master ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-c676cc86f-ddp44 1/1 Running 0 87m
kube-system coredns-c676cc86f-mg278 1/1 Running 0 87m
kube-system etcd-master 1/1 Running 0 87m
kube-system kube-apiserver-master 1/1 Running 0 87m
kube-system kube-controller-manager-master 1/1 Running 0 87m
kube-system kube-proxy-75svm 1/1 Running 0 87m
kube-system kube-proxy-7bl66 1/1 Running 0 87m
kube-system kube-scheduler-master 1/1 Running 0 87m
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 87m v1.25.3
node Ready <none> 86m v1.25.3

At this point the Kubernetes cluster is fully set up.
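
As an optional smoke test (the deployment name and image below are arbitrary examples), you can schedule a workload and check that it becomes Ready on the node:

# Optional smoke test: run nginx and expose it via a NodePort
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx-test
# Clean up afterwards
kubectl delete svc,deployment nginx-test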


Summary

I'm 秋意临. Feel free to like, comment, and share, and to join the cloud community.

(⊙o⊙) See you next time!


References

containerd: https://github.com/containerd/containerd/blob/main/docs/getting-started.md
Kubernetes: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/






