Steps to use GPUs in a Kubernetes cluster and install Kubeflow 1.0.RC

Kubeflow use cases:
- You want to train TensorFlow models and publish them as services in a k8s environment (e.g. local, on-prem, cloud)
- You want to use Jupyter notebooks to debug code, with a multi-user notebook server
- You need to schedule and orchestrate CPU or GPU resources for training jobs
- You want to combine TensorFlow with other components to publish services
Dependencies:
- ksonnet 0.11.0 or later (can be downloaded directly from GitHub; scp the ks binary to /usr/local/bin)
- kubernetes 1.8 or later; if you use the CCE service directly, create a CCE cluster with several nodes and bind an EIP to one of them
- kubectl tools

1. Install ksonnet

The latest ks release can be found on the ksonnet GitHub releases page.
wget https://github.com/ksonnet/ksonnet/releases/download/v0.13.0/ks_0.13.0_linux_amd64.tar.gz
tar -vxf ks_0.13.0_linux_amd64.tar.gz
cd ks_0.13.0_linux_amd64
sudo cp ks /usr/local/bin

After installation completes, install the GPU driver.

Install CUDA:
sudo yum-config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo
sudo yum clean all
sudo yum -y install nvidia-driver-latest-dkms cuda
sudo yum -y install cuda-drivers

If gcc dependencies are missing, run:

yum install kernel-devel kernel-doc kernel-headers gcc\* glibc\* glibc-\*

Install the NVIDIA driver:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum install -y kmod-nvidia

Disable nouveau:

### Add rdblacklist=nouveau to GRUB_CMDLINE_LINUX
echo -e "blacklist nouveau\noptions nouveau modeset=0" > /etc/modprobe.d/blacklist.conf
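On CentOS 7 the nouveau module is usually also packed into the initramfs, so the blacklist alone may not take effect; rebuilding the initramfs before rebooting is typically required. A sketch (back up the current image first):

mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
dracut /boot/initramfs-$(uname -r).img $(uname -r)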
Reboot, then check whether nouveau was disabled successfully:

lsmod | grep nouveau

No output means nouveau has been disabled. Check the server's GPU information:
[root@master ~]# nvidia-smi
Tue Jan 14 03:46:41 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:18:00.0 Off |                    0 |
| N/A   29C    P8    10W /  70W |      0MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:86:00.0 Off |                    0 |
| N/A   25C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Install nvidia-docker

Download the nvidia-docker.repo file:

curl -s -L https://nvidia.github.io/nvidia-docker/centos7/x86_64/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo

Search the available nvidia-docker versions:

yum search --showduplicates nvidia-docker

Install nvidia-docker. The Docker version here is 18.09.7.ce, so install the matching nvidia-docker version:
yum install -y nvidia-docker2
pkill -SIGHUP dockerd

nvidia-docker version shows the installed nvidia-docker version.

Change the default Docker runtime to nvidia:
[root@ks-allinone ~]# cat /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "registry-mirrors": ["https://o96k4rm0.mirror.aliyuncs.com"]
}
Restart Docker and k8s:
systemctl daemon-reload
systemctl restart docker.service
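With Docker restarted, you can optionally confirm the nvidia runtime took effect before restarting kubelet below; a quick check, where the CUDA image tag is only an assumption:

docker info | grep -i 'default runtime'
# The tag below is illustrative; any pullable CUDA base image works
docker run --rm nvidia/cuda:10.2-base nvidia-smi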
systemctl restart kubelet

Install gpushare-scheduler-extender:
cd /etc/kubernetes/
curl -O https://raw.githubusercontent.com/AliyunContainerService/gpushare-scheduler-extender/master/config/scheduler-policy-config.json
cd /tmp/
curl -O https://raw.githubusercontent.com/AliyunContainerService/gpushare-scheduler-extender/master/config/gpushare-schd-extender.yaml
kubectl create -f gpushare-schd-extender.yaml
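Downloading scheduler-policy-config.json by itself changes nothing: per the gpushare-scheduler-extender README (linked in the references below), the kube-scheduler must be pointed at it. On a kubeadm cluster that roughly means editing the static pod manifest; a sketch, where the exact flag placement is an assumption based on the README:

# /etc/kubernetes/manifests/kube-scheduler.yaml — kube-scheduler restarts automatically after saving
spec:
  containers:
  - command:
    - kube-scheduler
    - --policy-config-file=/etc/kubernetes/scheduler-policy-config.json   # add this flag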
Install device-plugin-rbac:

kubectl create -f device-plugin-rbac.yaml
# rbac.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gpushare-device-plugin
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - update
  - patch
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gpushare-device-plugin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gpushare-device-plugin
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gpushare-device-plugin
subjects:
- kind: ServiceAccount
  name: gpushare-device-plugin
  namespace: kube-system

Install the device-plugin-ds plugin:
kubectl create -f device-plugin-ds.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: gpushare-device-plugin-ds
  namespace: kube-system
spec:
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        component: gpushare-device-plugin
        app: gpushare
        name: gpushare-device-plugin-ds
    spec:
      serviceAccount: gpushare-device-plugin
      hostNetwork: true
      nodeSelector:
        gpushare: "true"
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/acs/k8s-gpushare-plugin:v2-1.11-35eccab
        name: gpushare
        # Make this pod a Guaranteed pod which will never be evicted because of node resource consumption.
        command:
        - gpushare-device-plugin-v2
        - -logtostderr
        - --v=5
        #- --memory-unit=Mi
        resources:
          limits:
            memory: "300Mi"
            cpu: "1"
          requests:
            memory: "300Mi"
            cpu: "1"
        env:
        - name: KUBECONFIG
          value: /etc/kubernetes/kubelet.conf
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
References:
https://github.com/AliyunContainerService/gpushare-scheduler-extender
https://github.com/AliyunContainerService/gpushare-device-plugin

Label the nodes that will share GPUs with the gpushare label:
kubectl label node mynode gpushare=true
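Once the label is on, the DaemonSet above should schedule a plugin pod onto each labeled node; a quick sanity check using the component label from that manifest:

kubectl get pods -n kube-system -l component=gpushare-device-plugin -o wide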
Install the kubectl inspect extension:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.1/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/bin/kubectl
cd /usr/bin/
wget https://github.com/AliyunContainerService/gpushare-device-plugin/releases/download/v0.3.0/kubectl-inspect-gpushare
chmod u+x /usr/bin/kubectl-inspect-gpushare

kubectl inspect gpushare   ## check cluster GPU usage

Install the k8s load balancer MetalLB (v0.8.2) (optional):
wget https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
kubectl apply -f metallb.yaml

metallb-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.18.5.30-10.18.5.50

kubectl apply -f metallb-config.yaml

Test TensorFlow:
kubectl apply -f tensorflow.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tensorflow-gpu
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: tensorflow-gpu
    spec:
      containers:
      - name: tensorflow-gpu
        image: tensorflow/tensorflow:1.15.0-py3-jupyter
        imagePullPolicy: Never
        resources:
          limits:
            aliyun.com/gpu-mem: 1024
        ports:
        - containerPort: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: tensorflow-gpu
spec:
  ports:
  - port: 8888
    targetPort: 8888
    nodePort: 30888
    name: jupyter
  selector:
    name: tensorflow-gpu
  type: NodePort
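Once a pod is running, you can optionally confirm that TensorFlow inside the container sees the GPU; a sketch, where <pod> is a placeholder for a real pod name:

kubectl get pods -l name=tensorflow-gpu
kubectl exec -it <pod> -- python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"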
Check cluster GPU usage:

[root@master ~]# kubectl inspect gpushare
NAME    IPADDRESS   GPU0(Allocated/Total)  GPU1(Allocated/Total)  GPU Memory(MiB)
master  10.18.5.20  1024/15109             0/15109                1024/30218
node    10.18.5.21  0/15109                0/15109                0/30218
------------------------------------------------------------------
Allocated/Total GPU Memory In Cluster:
1024/60436 (1%)
[root@master ~]#

You can test GPU usage by dynamically scaling the number of tensorflow-gpu replicas and by changing the GPU memory requested per pod:

kubectl scale --current-replicas=1 --replicas=100 deployment/tensorflow-gpu

Testing produced the following results.
Environment:

Node    GPU count  GPU memory
master  2          15109MiB * 2 = 30218MiB
node    2          15109MiB * 2 = 30218MiB

Test results:

GPU memory per pod  Pod count  GPU utilization
256M                183        77%
512M                116        98%
1024M               56         94%
Install Kubeflow v1.0.RC
Install ks:

tar -vxf ks_0.12.0_linux_amd64.tar.gz
cp ks_0.12.0_linux_amd64/* /usr/local/bin/

Install Kubeflow.
Download the installation package:
kfctl_v1.0-rc.3-1-g24b60e8_linux.tar.gz
tar -zxvf kfctl_v1.0-rc.3-1-g24b60e8_linux.tar.gz
cp kfctl /usr/bin/

Preparation: create PVs and PVCs, using NFS as the file storage.
Create a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  namespace: kubeflow
#provisioner: example.com/nfs
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

kubectl create -f storage.yml

yum install nfs-utils rpcbind
# Create the NFS export directories (at least four are needed)
mkdir -p /data/nfs

vim /etc/exports
# Add the export directory created above
/data/nfs 192.168.122.0/24(rw,sync)

systemctl restart nfs-server.service

Create the PVs. Because multiple pods may mount files with the same names, it is best to create several PVs for the pods to choose from: at least 4, to be mounted by katib-mysql, metadata-mysql, minio, and mysql respectively.
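Since at least four exports are needed, one way is a subdirectory per component; a sketch, assuming this article's paths and subnet:

for d in katib-mysql metadata-mysql minio mysql; do
  mkdir -p /data/nfs/$d
  echo "/data/nfs/$d 192.168.122.0/24(rw,sync)" >> /etc/exports
done
exportfs -ra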
[root@master pv]# cat mysql-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-path        # change for each PV
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-path
  nfs:
    path: /data/nfs       # change for each PV
    server: 10.18.5.20
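To avoid hand-editing the file four times, the example above can be stamped out once per component; a rough sketch, assuming the subdirectories from the export loop above:

for d in katib-mysql metadata-mysql minio mysql; do
  sed "s|name: local-path|name: pv-$d|; s|path: /data/nfs|path: /data/nfs/$d|" mysql-pv.yml | kubectl create -f -
done

Note that storageClassName stays local-path so the PVCs can still bind.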
Create the namespace kubeflow-anonymous:

kubectl create namespace kubeflow-anonymous

Download the Kubeflow 1.0.RC yml file: https://github.com/kubeflow/manifests/blob/v1.0-branch/kfdef/kfctl_k8s_istio.yaml
[root@master 2020-0219]# cat kfctl_k8s_istio.yaml
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  clusterName: kubernetes
  creationTimestamp: null
  name: 2020-0219
  namespace: kubeflow
spec:
  applications:
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/istio-crds
    name: istio-crds
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/istio-install
    name: istio-install
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/cluster-local-gateway
    name: cluster-local-gateway
  - kustomizeConfig:
      parameters:
      - name: clusterRbacConfig
        value: "OFF"
      repoRef:
        name: manifests
        path: istio/istio
    name: istio
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: istio-system
      repoRef:
        name: manifests
        path: istio/add-anonymous-user-filter
    name: add-anonymous-user-filter
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: application/application-crds
    name: application-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: application/application
    name: application
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: cert-manager
      repoRef:
        name: manifests
        path: cert-manager/cert-manager-crds
    name: cert-manager-crds
  - kustomizeConfig:
      parameters:
      - name: namespace
        value: kube-system
      repoRef:
        name: manifests
        path: cert-manager/cert-manager-kube-system-resources
    name: cert-manager-kube-system-resources
  - kustomizeConfig:
      overlays:
      - self-signed
      - application
      parameters:
      - name: namespace
        value: cert-manager
      repoRef:
        name: manifests
        path: cert-manager/cert-manager
    name: cert-manager
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: metacontroller
    name: metacontroller
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: argo
    name: argo
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: kubeflow-roles
    name: kubeflow-roles
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: common/centraldashboard
    name: centraldashboard
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: admission-webhook/bootstrap
    name: bootstrap
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: admission-webhook/webhook
    name: webhook
  - kustomizeConfig:
      overlays:
      - istio
      - application
      parameters:
      - name: userid-header
        value: kubeflow-userid
      repoRef:
        name: manifests
        path: jupyter/jupyter-web-app
    name: jupyter-web-app
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: spark/spark-operator
    name: spark-operator
  - kustomizeConfig:
      overlays:
      - istio
      - application
      - db
      repoRef:
        name: manifests
        path: metadata
    name: metadata
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: jupyter/notebook-controller
    name: notebook-controller
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pytorch-job/pytorch-job-crds
    name: pytorch-job-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pytorch-job/pytorch-operator
    name: pytorch-operator
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: usageId
        value: randomly-generated-id
      - name: reportUsage
        value: "true"
      repoRef:
        name: manifests
        path: common/spartakus
    name: spartakus
  - kustomizeConfig:
      overlays:
      - istio
      repoRef:
        name: manifests
        path: tensorboard
    name: tensorboard
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: tf-training/tf-job-crds
    name: tf-job-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: tf-training/tf-job-operator
    name: tf-job-operator
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: katib/katib-crds
    name: katib-crds
  - kustomizeConfig:
      overlays:
      - application
      - istio
      repoRef:
        name: manifests
        path: katib/katib-controller
    name: katib-controller
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/api-service
    name: api-service
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: minioPvcName
        value: minio-pv-claim
      repoRef:
        name: manifests
        path: pipeline/minio
    name: minio
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: mysqlPvcName
        value: mysql-pv-claim
      repoRef:
        name: manifests
        path: pipeline/mysql
    name: mysql
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/persistent-agent
    name: persistent-agent
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/pipelines-runner
    name: pipelines-runner
  - kustomizeConfig:
      overlays:
      - istio
      - application
      repoRef:
        name: manifests
        path: pipeline/pipelines-ui
    name: pipelines-ui
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/pipelines-viewer
    name: pipelines-viewer
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/scheduledworkflow
    name: scheduledworkflow
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: pipeline/pipeline-visualization-service
    name: pipeline-visualization-service
  - kustomizeConfig:
      overlays:
      - application
      - istio
      parameters:
      - name: admin
        value: johnDoe@acme.com
      repoRef:
        name: manifests
        path: profiles
    name: profiles
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: seldon/seldon-core-operator
    name: seldon-core-operator
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: namespace
        value: knative-serving
      repoRef:
        name: manifests
        path: knative/knative-serving-crds
    name: knative-crds
  - kustomizeConfig:
      overlays:
      - application
      parameters:
      - name: namespace
        value: knative-serving
      repoRef:
        name: manifests
        path: knative/knative-serving-install
    name: knative-install
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: kfserving/kfserving-crds
    name: kfserving-crds
  - kustomizeConfig:
      overlays:
      - application
      repoRef:
        name: manifests
        path: kfserving/kfserving-install
    name: kfserving-install
  repos:
  - name: manifests
    uri: https://github.com/kubeflow/manifests/archive/master.tar.gz
    version: master
status:
  reposCache:
  - localPath: ../.cache/manifests/manifests-master
    name: manifests
[root@master 2020-0219]#
# Go into your kubeflow app directory and run
kfctl apply -V -f kfctl_k8s_istio.yaml
# During installation, configuration files are downloaded from GitHub; this may fail, so retry on failure. A kustomize folder is generated at the same level as the kubeflow app directory. To prevent image pulls from failing after a restart, change the pull policy of all images to IfNotPresent, then run kfctl apply -V -f kfctl_k8s_istio.yaml again.
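One way to flip the pull policy in bulk across the generated kustomize tree (back it up first); this sketch only rewrites policies that are explicitly Always:

grep -rl 'imagePullPolicy: Always' kustomize/ | xargs -r sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/g'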
Check the running status:
kubectl get all -n kubeflow

Access the Kubeflow UI through the Istio ingress.
# Change the ingress-gateway service type to LoadBalancer
kubectl -n istio-system edit svc istio-ingressgateway
# Change this to LoadBalancer
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
    release: istio
  sessionAffinity: None
  type: LoadBalancer

Save, then check the svc again:
[root@master 2020-0219]# kubectl -n istio-system get svc istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.98.19.247 10.18.5.30 15020:32230/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31908/TCP,15030:31864/TCP,15031:31315/TCP,15032:30372/TCP,15443:32631/TCP 42h
[root@master 2020-0219]#

EXTERNAL-IP is the external access address. Open http://10.18.5.30 to reach the Kubeflow home page.
About image pulls: gcr images cannot be pulled from inside China; they can be fetched as follows:
curl -s https://zhangguanzhang.github.io/bash/pull.sh | bash -s -- <image>

If that also fails, you can build the image manually with Alibaba Cloud's image-build service, using an overseas build machine.
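For example, pulling one Kubeflow image through the script might look like this; the image name and tag are illustrative only:

curl -s https://zhangguanzhang.github.io/bash/pull.sh | bash -s -- gcr.io/kubeflow-images-public/centraldashboard:v1.0.0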