Table of contents

Preface
Lab architecture overview
How the K8S cluster is deployed
Deploying EC2 instances with CloudFormation
Preparing the cluster environment
  Set hostnames and configure name resolution (ALL nodes)
  Disable the firewall
  Disable SELinux
  Load the br_netfilter module
  Install ipvs
  Install the ipset package
  Synchronize server time
  Turn off the swap partition
  Install Containerd
Initialize the cluster
  Install kubeadm, kubelet and kubectl
  Edit the kubeadm.yaml configuration file
Add the worker nodes
Install flannel
Install the Dashboard
Cleanup

Preface
In this era of rapid digitalization, cloud computing has become a key enabler for businesses and individuals to scale quickly and operate efficiently. Among cloud offerings, AWS instances are a popular choice thanks to their capability and flexibility, while Kubernetes, the leading container orchestrator, makes it easy to manage and deploy containerized applications. This article walks through deploying a Kubernetes 1.25.4 cluster with kubeadm on AWS instances running CentOS 7.9, covering both the underlying concepts and the hands-on steps.

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, management and upgrading of containers. kubeadm is a core Kubernetes tool that bootstraps a cluster quickly on physical or virtual machines, and AWS instances are cloud virtual machines whose configuration can be chosen to match the workload.

The goal of this article is to show how to use kubeadm to deploy Kubernetes 1.25.4 on AWS instances. After reading it, you should know how to prepare CentOS 7.9 hosts for a Kubernetes cluster, how to use the kubeadm commands, and what the whole deployment and configuration process looks like on AWS.

Building a solid Kubernetes cluster requires weighing several factors: hardware resources, network configuration, storage, security and so on. AWS instances give us a fast, flexible and reliable way to create and manage such a cluster. In production, a Kubernetes cluster helps achieve high availability, scalability and automated operations, lowering operating cost and improving efficiency.

Lab architecture overview
This lab uses three cloud instances, one as the master (control-plane) node and two as worker nodes, to build the Kubernetes cluster.
Instance specifications:

Host name   vCPUs   Memory (GB)   Disk (GB)   OS version
k8smaster   4       8             200         CentOS 7.9
k8snode01   4       8             200         CentOS 7.9
k8snode02   4       8             200         CentOS 7.9

How the K8S cluster is deployed
This lab uses the kubeadm tool to deploy the Kubernetes cluster.
Deploying EC2 instances with CloudFormation

Preparing the cluster environment
Set hostnames and configure name resolution (ALL nodes)
[root@ip-172-31-18-204 ~]# hostname
ip-172-31-18-204.ap-northeast-1.compute.internal
[root@ip-172-31-18-204 ~]# hostnamectl set-hostname master
[root@ip-172-31-18-204 ~]#
[root@ip-172-31-18-204 ~]# bash
[root@master ~]# hostname
master
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost6 localhost6.localdomain6
172.31.25.16 master
172.31.25.17 node01
172.31.25.18 node02

Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux
setenforce 0
getenforce

Load the br_netfilter module
modprobe br_netfilter

Note: it is best to load this module automatically at boot, because it is lost after a reboot. To auto-load modules at boot, append the following script to the end of /etc/rc.d/rc.local:

for file in /etc/sysconfig/modules/*.modules ; do
  [ -x $file ] && $file
done

Then create the following file under the /etc/sysconfig/modules/ directory:
mkdir -p /etc/sysconfig/modules/
vim /etc/sysconfig/modules/br_netfilter.modules
modprobe br_netfilter

Make the file executable:

chmod 755 /etc/sysconfig/modules/br_netfilter.modules

After a reboot the module is then loaded automatically:
[root@master ~]# lsmod | grep br_netfilter
br_netfilter 32768 0
bridge 262144 1 br_netfilter

Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# The kernel parameters below work around idle timeouts on long-lived connections in ipvs mode
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.tcp_keepalive_time = 600

Apply the settings:

[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.tcp_keepalive_time = 600

Install ipvs
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm that the kernel modules loaded correctly.
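Eyeballing the lsmod output is easy to get wrong, so the check can also be scripted. The helper below is my own sketch, not part of the original walkthrough; it reads lsmod output from stdin so it is easy to test:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report which of the required ipvs kernel modules are
# missing from `lsmod` output (read from stdin).
check_ipvs_modules() {
  local out m missing=""
  out=$(cat)
  for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    # lsmod prints the module name followed by whitespace
    printf '%s\n' "$out" | grep -Eq "^$m[[:space:]]" || missing="$missing $m"
  done
  if [ -z "$missing" ]; then echo "ok"; else echo "missing:$missing"; fi
}
```

Usage on a node: `lsmod | check_ipvs_modules`.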
Install the ipset package

To make it easier to inspect the ipvs proxy rules, also install the management tool ipvsadm:
yum install -y ipset
yum install -y ipvsadm

Synchronize server time
Use chrony for time synchronization:
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources
date

Turn off the swap partition
swapoff -a

Edit /etc/fstab and comment out the swap mount entry so swap stays off across reboots, then use free -m to confirm that swap is disabled. Also adjust the swappiness kernel parameter by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness = 0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
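Commenting out the swap line in /etc/fstab can also be done non-interactively. This is my own sketch (not from the original post); the fstab path is a parameter so you can try it on a copy first, and on a real node you would run it against /etc/fstab as root:

```shell
#!/usr/bin/env bash
# Hypothetical helper: comment out every swap entry in an fstab-style file so
# swap is not re-enabled on the next boot. A .bak backup copy is kept.
disable_swap_in_fstab() {
  sed -i.bak -E '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' "$1"
}
```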
Install Containerd
First, the libseccomp dependency needs to be installed on each node:
[root@master ~]# rpm -qa | grep libseccomp
libseccomp-2.4.1-1.amzn2.x86_64
[root@master ~]# wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
[root@master ~]# yum remove libseccomp-2.3.1-4.el7.x86_64
[root@master ~]# rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm
[root@master ~]# rpm -qa | grep libseccomp
libseccomp-2.5.1-1.el8.x86_64

Containerd depends on the low-level runc tool, so runc would normally have to be installed first. However, Containerd publishes a bundle, cri-containerd-cni-${VERSION}-${OS}-${ARCH}.tar.gz, that already contains the required dependencies; installing from this package avoids incompatibilities caused by a mismatched runc version.
Note: make sure to pick the package matching your architecture (ARM vs x86_64) to avoid compatibility errors during installation.
# ARM architecture
[root@master ~]# wget https://github.com/containerd/containerd/releases/download/v1.6.24/cri-containerd-1.6.24-linux-arm64.tar.gz

# x86_64 architecture
[root@master ~]# wget https://github.com/containerd/containerd/releases/download/v1.6.24/cri-containerd-1.6.24-linux-amd64.tar.gz

Unpack the archive into the system directories:

[root@master ~]# tar -C / -xzf cri-containerd-1.6.24-linux-amd64.tar.gz

Append /usr/local/bin and /usr/local/sbin to the PATH environment variable:

export PATH=/usr/local/bin:/usr/local/sbin:$PATH

[root@master ~]# echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

Check the containerd version:
[root@master ~]# containerd -v
containerd github.com/containerd/containerd v1.6.24 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523

[root@master ~]# runc -h
NAME:
   runc - Open Container Initiative runtime

runc is a command line client for running applications packaged according to
the Open Container Initiative (OCI) format and is a compliant implementation of the
Open Container Initiative specification.

runc integrates well with existing process supervisors to provide a production
container runtime environment for applications. It can be used with your
existing process monitoring tools and the container will be spawned as a
direct child of the process supervisor.

Containers are configured using bundles. A bundle for a container is a directory
that includes a specification file named "config.json" and a root filesystem.
The root filesystem contains the contents of the container.

To start a new instance of a container:

    # runc run [ -b bundle ] <container-id>

Where "<container-id>" is your name for the instance of the container that you
are starting. The name you provide for the container instance must be unique on
your host. Providing the bundle directory using "-b" is optional. The default
value for "bundle" is the current directory.

USAGE:
   runc [global options] command [command options] [arguments...]

VERSION:
   1.1.9
commit: v1.1.9-0-gccaecfcb
spec: 1.0.2-dev
go: go1.20.8
libseccomp: 2.5.1

COMMANDS:
   checkpoint  checkpoint a running container
   create      create a container
   delete      delete any resources held by the container often used with detached container
   events      display container events such as OOM notifications, cpu, memory, and IO usage statistics
   exec        execute new process inside the container
   kill        kill sends the specified signal (default: SIGTERM) to the container's init process
   list        lists containers started by runc with the given root
   pause       pause suspends all processes inside the container
   ps          ps displays the processes running inside a container
   restore     restore a container from a previous checkpoint
   resume      resumes all processes that have been previously paused
   run         create and run a container
   spec        create a new specification file
   start       executes the user defined process in a created container
   state       output the state of a container
   update      update container resource constraints
   features    show the enabled features
   help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug             enable debug logging
   --log value         set the log file to write runc logs to (default is '/dev/stderr')
   --log-format value  set the log format ('text' (default), or 'json') (default: "text")
   --root value        root directory for storage of container state (this should be located in tmpfs) (default: "/run/runc")
   --criu value        path to the criu binary used for checkpoint and restore (default: "criu")
   --systemd-cgroup    enable systemd cgroup support, expects cgroupsPath to be of form "slice:prefix:name" for e.g. "system.slice:runc:434234"
   --rootless value    ignore cgroup permission errors ('true', 'false', or 'auto') (default: "auto")
   --help, -h          show help
   --version, -v       print the version
[root@master ~]#

The default Containerd configuration file is /etc/containerd/config.toml. A default configuration can be generated with the following command:
[root@master ~]# containerd config default > /etc/containerd/config.toml

The Containerd tarball downloaded above also contains an /etc/systemd/system/containerd.service unit file, so Containerd can run as a daemon managed by systemd. Start it now (for example with systemctl enable --now containerd) and then verify with the command below.
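One detail worth checking in the generated config (an addition of mine, not a step from the original walkthrough): the kubeadm.yaml used later sets cgroupDriver: systemd for the kubelet, so containerd's runc plugin should use the systemd cgroup driver as well. In the default config.toml this option is SystemdCgroup = false; a sketch of flipping it, with the file path as a parameter so it can be tried on a copy first:

```shell
#!/usr/bin/env bash
# Hypothetical helper: switch containerd's runc plugin to the systemd cgroup
# driver, matching the kubelet's cgroupDriver: systemd setting.
enable_systemd_cgroup() {
  sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$1"
}
```

On a real node: `enable_systemd_cgroup /etc/containerd/config.toml && systemctl restart containerd`.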
[root@master ~]# ctr version
Client:
  Version:  v1.6.24
  Revision: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
  Go version: go1.20.8

Server:
  Version:  v1.6.24
  Revision: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
  UUID: 6b5ae56c-4b18-405d-9ac0-2e4de10c3596

Initialize the cluster
With the environment prepared, the next step is to install kubeadm and the related tools from a dedicated yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg \
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, kubectl
## Rebuild the local yum cache
yum makecache fast

## --disableexcludes disables every repository except kubernetes
yum install -y kubelet-1.25.4 kubeadm-1.25.4 kubectl-1.25.4 --disableexcludes=kubernetes

Check the kubeadm version:
[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:35:06Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}

This confirms that v1.25.4 is installed. Next, configure kubelet on the master node to start at boot:
[root@master ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Everything up to this point must be done on every node. Since we are building the cluster on AWS cloud instances, we can snapshot the current environment into an image (AMI) and create the new nodes directly from that image, avoiding the repetitive work.

The image creation steps are as follows. Once the image is built, it can be used to create the remaining nodes without repeating the steps above: create the new instances from the freshly made image, and once they are up, the other two nodes are ready.

Edit the kubeadm.yaml configuration file
The default configuration used for cluster initialization can be dumped on the master node with the following command:

kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml

Then adjust it to our needs: for example, set imageRepository to the registry the cluster should pull its Kubernetes images from during initialization, and set the kube-proxy mode to ipvs. Note that since we plan to install the flannel network plugin, networking.podSubnet must be set to 10.244.0.0/16:
[root@master ~]# vim kubeadm.yaml
[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.31.25.16
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.25.4
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  # Pod subnet
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

Before initializing the cluster, you can pre-pull the container images Kubernetes needs on every node with kubeadm config images pull --config kubeadm.yaml:
[root@master ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.4
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.4
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.4
[config/images] Pulled registry.k8s.io/kube-proxy:v1.25.4
[config/images] Pulled registry.k8s.io/pause:3.8
[config/images] Pulled registry.k8s.io/etcd:3.5.5-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3

Then initialize the master node using the configuration file above:
[root@master ~]# kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.25.16:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:6832d7743d1bbc253e467d3e1c06862e1eb22be5c75fff6f619926f6d0443d33

As the output suggests, copy the kubeconfig file into place:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

Then use kubectl to check whether the master node initialized successfully:
[root@master ~]# kubectl get nodes
NAME   STATUS     ROLES           AGE   VERSION
node   NotReady   control-plane   94s   v1.25.4

The node is still in the NotReady state because no CNI plugin has been installed yet; we can add the worker nodes first and deploy the network plugin afterwards.
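Before joining nodes, one aside I'm adding (not from the original post): the --discovery-token-ca-cert-hash value in the join command is not secret and can be recomputed at any time from the control plane's CA certificate, which kubeadm stores at /etc/kubernetes/pki/ca.crt by default. The hash is SHA-256 over the DER-encoded public key:

```shell
#!/usr/bin/env bash
# Recompute the discovery-token CA certificate hash from a CA certificate
# (defaults to the standard kubeadm PKI path; pass another path as $1).
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}
```

Running `ca_cert_hash` on the master should print the same sha256:… value shown in the join command.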
Add the worker nodes
Run the join command printed at the end of the initialization above.
kubeadm join 172.31.25.16:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:6832d7743d1bbc253e467d3e1c06862e1eb22be5c75fff6f619926f6d0443d33

As shown in the screenshot, the node has joined the cluster. Running kubectl get nodes on the master now shows that the other two nodes have been added to the cluster:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
node     NotReady   control-plane   9m44s   v1.25.4
node01   NotReady   <none>          4m19s   v1.25.4
node02   NotReady   <none>          3m40s   v1.25.4

PS: if you lose the join command above, you can regenerate it with kubeadm token create --print-join-command. The token is valid for 24 hours by default.

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 172.31.25.16:6443 --token g94xpb.jn65v6eux6p1hjpa --discovery-token-ca-cert-hash sha256:6832d7743d1bbc253e467d3e1c06862e1eb22be5c75fff6f619926f6d0443d33

Install flannel

Next install the network plugin; you can pick one that suits you from https://kubernetes.io/docs/concepts/cluster-administration/addons.

[root@master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/v0.20.1/Documentation/kube-flannel.yml

If some nodes have multiple network interfaces, the internal NIC has to be specified in the manifest: find the DaemonSet named kube-flannel-ds and add the --iface argument under the kube-flannel container.

[root@master ~]# vim kube-flannel.yml
[root@master ~]# cat kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0  # with multiple NICs, name the internal NIC here
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
          limits:
            cpu: 100m
            memory: 50Mi
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Install the flannel network plugin:
[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the Pod status:
[root@master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-f6qvb 1/1 Running 0 33s
kube-flannel kube-flannel-ds-twjxp 1/1 Running 0 33s
kube-flannel kube-flannel-ds-xcfb2 1/1 Running 0 33s
kube-system coredns-565d847f94-98zx2 1/1 Running 0 19m
kube-system coredns-565d847f94-v9pxj 1/1 Running 0 19m
kube-system etcd-node 1/1 Running 0 20m
kube-system kube-apiserver-node 1/1 Running 0 20m
kube-system kube-controller-manager-node 1/1 Running 0 20m
kube-system kube-proxy-26wqq 1/1 Running 0 19m
kube-system kube-proxy-bmcrs 1/1 Running 0 14m
kube-system kube-proxy-spp5z 1/1 Running 0 14m
kube-system kube-scheduler-node 1/1 Running 0 20m

All nodes are now in the Ready state:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
node     Ready    control-plane   22m   v1.25.4
node01   Ready    <none>          16m   v1.25.4
node02   Ready    <none>          16m   v1.25.4

Install the Dashboard
The Dashboard can be installed with just a few commands:

# Download the recommended.yaml manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Edit recommended.yaml
vim recommended.yaml
# add "type: NodePort" to the kubernetes-dashboard Service so it becomes a NodePort service

Create the Dashboard:
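For reference, the edited Service section should look roughly like this. This excerpt is quoted from memory of the upstream v2.7.0 manifest (only the type: NodePort line is added by us), so verify it against your downloaded file:

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # added: expose the Dashboard on a node port
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
```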
[root@master ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Newer versions of the Dashboard are installed into the kubernetes-dashboard namespace by default:
[root@master ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-64bcc67c9c-6wnb6 1/1 Running 0 54s
kubernetes-dashboard-5c8bd6b59-ndbxk 1/1 Running 0 54s

Check the Dashboard's NodePort:
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.105.219.37 <none> 8000/TCP 113s
kubernetes-dashboard NodePort 10.98.61.185 <none> 443:30477/TCP 113s

The Dashboard can now be reached on port 30477 over HTTPS. Next, create a user with cluster-wide permissions to log in to the Dashboard:
# admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the admin-user account:
[root@master ~]# kubectl apply -f admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Now we need a token to log in with. Use kubectl create token to request a service account token:

# Request a token for the admin-user ServiceAccount in the kubernetes-dashboard namespace to authenticate against the kube-apiserver
[root@master ~]# kubectl create token admin-user -n kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6InpyT0NzSG5XbkdaQXVjclljVXUxOWt0TF9fOG1XTW15X2hYVmRweTNEU1EifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk2NjkxMDYwLCJpYXQiOjE2OTY2ODc0NjAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiY2Q2MjhmMmUtZmZkMy00MDdmLWI1YWUtYTVkZGYzYjIxYjQyIn19LCJuYmYiOjE2OTY2ODc0NjAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.FwSGY3f7G3rfJozjBEGQ_u6ASv31ZFicYFFjfkBAMdkPgbI9ETPH3ciu3Ma-24bMuPpbJDxgVGAB0D12IzZKJUk_YWyTul258PEzjotUJuU3OUM_ui9Ciuex20rsUv4MrhyfT26XorJT4FhiuLsszKjgqfRfFS-4oVjAbSBoiZj5UJ1yX0wD7Trc22QF-m7CvPZekl83bhzFt6XUX6-y-To6zGfh4pBbI1Q_OVgBrLElSdowG7OI3nLL9zMVQKhKSsCKgUYMbD1DseJfEUozmzL_NXm3e54yYDMS9yyF1dqsMtntwuo6wtZv0nc7POBCAqgYGCIOMRTUpqCwa48sFQ

Cleanup
If anything goes wrong during the installation, the cluster can be reset with the following commands:

kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /var/lib/cni/

This completes the lab: a v1.25.4 Kubernetes cluster deployed with kubeadm on AWS cloud instances.