1. Introduction to Volumes

A pod has a lifecycle; when it ends, the data inside the pod (configuration files, business data, and so on) is lost. Solution: separate the data from the pod and keep it on a dedicated storage volume.

Pods can be scheduled onto any node of the k8s cluster. If a pod dies and is recreated on another node, the link between the pod and its data is broken. Solution: use a storage system that lives outside the cluster nodes, so the data can be truly persistent.

In short, a volume gives a container the ability to mount external storage.
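The general pattern is the same for every volume type: spec.volumes declares the volume, and each container mounts it with volumeMounts. A minimal sketch (the pod name, container name and mount path here are only placeholders for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-volume          # hypothetical example name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache                # must match a volume declared below
      mountPath: /cache          # where the volume appears inside the container
  volumes:
  - name: cache
    emptyDir: {}                 # the volume type plugs in here (emptyDir, hostPath, nfs, ...)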
2. Volume Types

Kubernetes supports a very rich set of volume types. They can be listed with the command below:
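One common way to list them from the cluster itself is kubectl's built-in documentation (this assumes kubectl is already configured against your cluster; the exact command originally shown here was not preserved):

kubectl explain pod.spec.volumes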
Or refer to: https://kubernetes.io/docs/concepts/storage/

The full list of volume types supported by Kubernetes can be found in that command's output or in the documentation above. They can be roughly grouped as follows:
Local volumes
  emptyDir: the data is removed when the pod is deleted; used for temporary data
  hostPath: maps a directory of the host node into the pod (local volume)
Network volumes
  NAS: nfs, etc.
  SAN: iscsi, FC, etc.
  Distributed storage: glusterfs, cephfs, rbd, cinder, etc.
  Cloud storage: aws, azurefile, etc.
3. Choosing a Volume

There are many storage products on the market, but from an application point of view they fall into three main categories:
File storage, e.g. nfs, glusterfs, cephfs
  Pros: data sharing (multiple pods can mount the same volume and read/write it at the same time)
  Cons: relatively poor performance
Block storage, e.g. iscsi, rbd
  Pros: better performance than file storage
  Cons: (mostly) cannot be shared between pods
Object storage, e.g. Ceph object storage
  Pros: good performance, data sharing
  Cons: accessed through a special API, less widely supported
With so many volume types supported by Kubernetes, choosing one can be difficult. When selecting storage, focus on the core requirements:
  Does the data need to be persistent?
  Data reliability: does the storage cluster have single points of failure, does the data have replicas, and so on
  Performance
  Scalability: can capacity be expanded easily to keep up with data growth
  Operational complexity: storage is hard to operate well; prefer stable open-source solutions or commercial products
  Cost
In short, many factors go into choosing storage. Only by knowing the various products and their strengths and weaknesses, and matching them against your own requirements, can you pick the right one.
4. Local Volume: emptyDir

Use case
  Sharing data between containers inside the same pod
Characteristics
  The volume is deleted together with the pod
1. Create the YAML file (saved here as volume-emptydir.yml)

[root@k8s-master1 ~]# cat volume-emptydir.yml
apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
spec:
  containers:
  - name: write
    image: centos
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo haha > /data/1.txt ; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data
  - name: read
    image: centos
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","cat /data/1.txt; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
2. Create the pod from the YAML file

[root@k8s-master1 ~]# kubectl apply -f volume-emptydir.yml
pod/volume-emptydir created

3. Check that the pod started

[root@k8s-master1 ~]# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
volume-emptydir   2/2     Running   0          15s

4. View the pod's description

[root@k8s-master1 ~]# kubectl describe pod volume-emptydir
......
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  50s   default-scheduler  Successfully assigned default/volume-emptydir to k8s-worker1
  Normal  Pulling    50s   kubelet            Pulling image "centos:centos7"
  Normal  Pulled     28s   kubelet            Successfully pulled image "centos:centos7" in 21.544912361s
  Normal  Created    28s   kubelet            Created container write
  Normal  Started    28s   kubelet            Started container write
  Normal  Pulled     28s   kubelet            Container image "centos:centos7" already present on machine
  Normal  Created    28s   kubelet            Created container read
  Normal  Started    28s   kubelet            Started container read
5. Verify

[root@k8s-master1 ~]# kubectl logs volume-emptydir -c write
[root@k8s-master1 ~]# kubectl logs volume-emptydir -c read
haha
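The commands above may differ slightly from what the original author ran; an equivalent check is to read the file directly from the read container (a sketch, reusing the pod and container names from the manifest above):

kubectl exec volume-emptydir -c read -- cat /data/1.txt
# expected output: haha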
5. Local Volume: hostPath

1. Create the YAML file (saved here as volume-hostpath.yml)

[root@k8s-master1 ~]# cat volume-hostpath.yml
apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","echo haha > /data/1.txt ; sleep 600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /opt
      type: Directory
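Note that type: Directory requires /opt to already exist on the node, otherwise the pod fails to start. If that is not guaranteed, DirectoryOrCreate is a safer variant (an alternative, not part of the original example):

  volumes:
  - name: data
    hostPath:
      path: /opt
      type: DirectoryOrCreate    # creates /opt on the node if it does not already exist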
2. Create the pod from the YAML file

[root@k8s-master1 ~]# kubectl apply -f volume-hostpath.yml
pod/volume-hostpath created

3. Check the pod status

[root@k8s-master1 ~]# kubectl get pods -o wide |grep volume-hostpath
volume-hostpath   1/1     Running   0          29s   10.224.194.120   k8s-worker1   <none>   <none>

The pod is running on the k8s-worker1 node.

4. Verify the file on the node where the pod runs

[root@k8s-worker1 ~]# cat /opt/1.txt
haha
6. Network Volume: nfs

1. Set up the NFS server

[root@nfsserver ~]# cat /etc/exports
/data/nfs    *(rw,no_root_squash,sync)
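The installation and service commands for the NFS server are not shown above; on a CentOS-style system the setup of such an export typically looks roughly like this (a sketch, assuming the nfs-utils package and the /data/nfs directory):

yum install -y nfs-utils                                        # NFS server packages
mkdir -p /data/nfs                                              # directory to export
echo '/data/nfs    *(rw,no_root_squash,sync)' >> /etc/exports   # publish the export
systemctl enable --now nfs-server                               # start the NFS service
exportfs -arv                                                   # re-export and list /etc/exports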
2. Install the NFS client packages on all worker nodes

[root@k8s-worker1 ~]# yum install -y nfs-utils
[root@k8s-worker2 ~]# yum install -y nfs-utils

3. Verify that the NFS export is reachable from the nodes

[root@node1 ~]# showmount -e 192.168.10.129
Export list for 192.168.10.129:
/data/nfs *
[root@node2 ~]# showmount -e 192.168.10.129
Export list for 192.168.10.129:
/data/nfs *
4. On the master node, create the YAML file (saved here as volume-nfs.yml)

[root@k8s-master1 ~]# cat volume-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: volume-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: documentroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: documentroot
        nfs:
          server: 192.168.10.129
          path: /data/nfs

5. Apply the YAML to create the deployment

[root@k8s-master1 ~]# kubectl apply -f volume-nfs.yml
deployment.apps/volume-nfs created
6. Create a test file in the shared directory on the NFS server
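The exact command is missing from the original; based on the verification output below, it was something along these lines, run on the NFS server (the content 'volume-nfs' is just the marker string checked later):

echo 'volume-nfs' > /data/nfs/index.html    # becomes nginx's index page in every pod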
7. Verify the pods

[root@k8s-master1 ~]# kubectl get pods |grep volume-nfs
volume-nfs-649d848b57-qg4bz   1/1     Running   0          10s
volume-nfs-649d848b57-wrnpn   1/1     Running   0          10s
[root@k8s-master1 ~]# kubectl exec -it volume-nfs-649d848b57-qg4bz -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs
/ # exit
[root@k8s-master1 ~]# kubectl exec -it volume-nfs-649d848b57-wrnpn -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs
/ # exit

Both pods see the same index.html from the NFS share, so the data is shared across pods.
7. PV (PersistentVolume) and PVC (PersistentVolumeClaim)

7.1 What PV and PVC are

Kubernetes supports a great many volume types, and each type needs its own configuration and parameters, which makes maintenance and management harder.

A PersistentVolume (PV) is a piece of storage that has already been provisioned and configured (it can be backed by any volume type).
A PersistentVolumeClaim (PVC) is a request by a user's pod to use a PV.
Users do not need to care how the volume is implemented; they only describe what they need.

7.2 The relationship between PV and PVC

A PV provides the storage resource (producer).
A PVC consumes the storage resource (consumer).
A PVC is bound to a PV.
7.3 An NFS-backed PV and PVC

1. Write the YAML file for the PV (saved here as pv-nfs.yml)

[root@k8s-master1 ~]# cat pv-nfs.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/nfs
    server: 192.168.10.129
There are three access modes; see: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

ReadWriteOnce: read-write, can be mounted by a single node
ReadOnlyMany: read-only, can be mounted by many nodes
ReadWriteMany: read-write, can be mounted by many nodes

NFS volumes support all three modes. Since we want several nginx pods on different nodes to share the same data, we choose ReadWriteMany.
2. Create the PV and verify it

[root@k8s-master1 ~]# kubectl apply -f pv-nfs.yml
persistentvolume/pv-nfs created
[root@k8s-master1 ~]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs   1Gi        RWX            Retain           Available                                   81s
Notes:
RWX is short for ReadWriteMany.
Retain is the reclaim policy: when the claim is released, the PV and its data are kept and must be cleaned up manually.
3. Write the YAML file for the PVC (saved here as pvc-nfs.yml)

[root@k8s-master1 ~]# cat pvc-nfs.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

4. Create the PVC and verify it

[root@k8s-master1 ~]# kubectl apply -f pvc-nfs.yml
persistentvolumeclaim/pvc-nfs created
[root@k8s-master1 ~]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   1Gi        RWX                           38s

Note: STATUS must be Bound (Bound means the PVC has been successfully bound to the PV).
5. Write the YAML for the deployment (saved here as deploy-nginx-nfs.yml)

[root@k8s-master1 ~]# cat deploy-nginx-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs

6. Apply the YAML to create the deployment

[root@k8s-master1 ~]# kubectl apply -f deploy-nginx-nfs.yml
deployment.apps/deploy-nginx-nfs created
7. Verify the pods

[root@k8s-master1 ~]# kubectl get pods |grep deploy-nginx-nfs
deploy-nginx-nfs-6f9bc4546c-gbzcl   1/1     Running   0          1m46s
deploy-nginx-nfs-6f9bc4546c-hp4cv   1/1     Running   0          1m46s
8. Verify the data in the pods' volume

[root@k8s-master1 ~]# kubectl exec -it deploy-nginx-nfs-6f9bc4546c-gbzcl -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs
/ # exit
[root@k8s-master1 ~]# kubectl exec -it deploy-nginx-nfs-6f9bc4546c-hp4cv -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs
/ # exit

Both pods read the same file, because the PV is backed by the same NFS share used earlier.
7.4 Using subPath

subPath lets you mount different subdirectories of the same volume to different paths inside a container. The following example demonstrates it.
Edit the pod manifest so that it contains the following (any file name will do):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: c1
    image: busybox
    command: ["/bin/sleep","100000"]
    volumeMounts:
    - name: data
      mountPath: /opt/data1
      subPath: data1
    - name: data
      mountPath: /opt/data2
      subPath: data2
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs

Apply the file to create the pod:

pod/pod1 created

Edit the PVC manifest so that it contains the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Apply the file to create the PVC:

persistentvolumeclaim/pvc-nfs created

Edit the PV manifest so that it contains the following (change the NFS server address and exported directory to match your environment):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /sdb
    server: 192.168.10.214

Apply the file to create the PV:

persistentvolume/pv-nfs created

On the NFS server, check that the pod's subdirectories were created automatically under /sdb:

[root@nfsserver ~]# ls /sdb
data1  data2
8. Dynamic Provisioning

8.1 What is dynamic provisioning?

Creating a PV and then a PVC every time you need storage gets tedious, so instead we can use dynamic provisioning.

With static provisioning, the capacity and access mode requested in a PVC must exactly match a pre-created PV; dynamic provisioning removes that requirement.

The administrator no longer needs to create large numbers of PVs in advance as storage resources.

Since version 1.4, Kubernetes has provided the StorageClass resource, which describes storage as a named class with certain characteristics rather than as concrete PVs. A user's PVC simply names the desired class; the claim is then either matched against PVs the administrator created in advance, or a PV is created for it on demand, so the manual PV-creation step disappears.
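Once a StorageClass exists (for example the nfs-client class created below), asking for storage is reduced to a PVC that names the class; the PV is then provisioned automatically. A minimal sketch (the claim name test-claim is only an example):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim                # hypothetical name
spec:
  storageClassName: nfs-client    # the StorageClass defined later in this section
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi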
8.2 Dynamic provisioning with an NFS file system

PV support for a given storage system is implemented through provisioner plugins, and Kubernetes ships with a number of internal provisioners.

Official list: https://kubernetes.io/docs/concepts/storage/storage-classes/

There is no built-in provisioner for NFS dynamic provisioning, but a third-party provisioner can be used instead.

Third-party plugins: https://github.com/kubernetes-retired/external-storage

1. Download and create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
[root@k8s-master1 ~]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master1 ~]# kubectl get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  10s
2. Download and create the RBAC resources

The provisioner creates PVs through the kube-apiserver, so it must be authorized first.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master1 ~]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
3. Create the provisioner deployment

A dedicated deployment runs the provisioner that creates the PV for each PVC automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.10.129
        - name: NFS_PATH
          value: /data/nfs
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.10.129
          path: /data/nfs
[root@k8s-master1 ~]# kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
[root@k8s-master1 ~]# kubectl get pods |grep nfs-client-provisioner
nfs-client-provisioner-5b5ddcd6c8-b6zbq   1/1     Running   0          34s
Test whether dynamic provisioning works with the following manifest (a headless Service plus a StatefulSet whose volumeClaimTemplates use the nfs-client class):

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: huoban-harbor
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master1 nfs]# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-9c988bc46-pr55n   1/1     Running   0          95s
web-0                                    1/1     Running   0          95s
web-1                                    1/1     Running   0          61s
[root@nfsserver ~]# ls /data/nfs
default-www-web-0-pvc-c4f7aeb0-6ee9-447f-a893-821774b8d11f
default-www-web-1-pvc-8b8a4d3d-f75f-43af-8387-b7073d07ec01

On the NFS share, the provisioner has automatically created one directory per PVC of the StatefulSet.
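On the cluster side, the same thing can be confirmed by listing the claims and the dynamically created volumes (a quick check; the outputs are not reproduced from the original):

kubectl get pvc    # expect www-web-0 and www-web-1 with STATUS Bound and STORAGECLASS nfs-client
kubectl get pv     # one dynamically provisioned PV per claim, RECLAIM POLICY Delete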