PV/PVC

  • PV: PersistentVolume (interfaces with the storage backend, e.g. NFS, Ceph)

A PV is a piece of network storage in the cluster that has already been provisioned by a Kubernetes administrator. Storage resources are cluster-scoped, i.e. a PV does not belong to any namespace. The data in a PV ultimately lives on the backing storage. A Pod cannot mount a PV directly: the PV must first be bound to a PVC, and the Pod then mounts the PVC. PVs support NFS, Ceph, commercial storage, cloud-provider-specific storage, and so on. You can define whether a PV is block or file storage, its capacity, its access modes, etc. A PV's lifecycle is independent of any Pod: deleting a Pod that uses the PV need not affect the PV's data.

  • PVC: PersistentVolumeClaim (used by Pods; binds to a PV)

A PVC is a request for storage. A Pod mounts a PVC and stores its data through it, and the PVC must be bound to a PV before it can be used. Unlike PVs, a PVC is created in a specific namespace, and the Pod that uses it must be in the same namespace. A PVC can request a specific capacity and access modes. Deleting a Pod that uses a PVC likewise need not affect the data in the PVC.


PV/PVC decouples Pods from storage: when we change the storage backend we do not need to modify the Pod.

Compared with mounting NFS directly, the PV/PVC layer lets you manage the storage server's capacity allocation, access permissions, and so on.

https://v1-22.docs.kubernetes.io/zh/docs/concepts/storage/persistent-volumes/ #access modes supported by the different volume plugins


Summary:

A PV is an abstraction over the underlying network storage: it defines the network storage as a storage resource, so that one large pool of storage can be split into several pieces for different workloads.

A PVC is a claim on (a request for) PV resources: a Pod saves its data through the PVC to the PV, and the PV in turn persists the data to the actual backing storage.


PV parameters:
[root@haproxy1 case7-nfs]# kubectl explain pv
1. capacity: #size of the PV, see kubectl explain pv.spec.capacity
2. accessModes: #access modes, see kubectl explain pv.spec.accessModes
   ReadWriteOnce #————————the volume can be mounted read-write by a single node (RWO)
   ReadOnlyMany  #————————the volume can be mounted read-only by many nodes (ROX)
   ReadWriteMany #————————the volume can be mounted read-write by many nodes (RWX)
3. persistentVolumeReclaimPolicy #reclaim policy
   Retain  #———————— keep the volume as-is after release; an administrator must delete it manually
   Recycle #———————— basic scrub: delete all data on the volume (including hidden files); currently supported only by NFS and hostPath
   Delete  #———————— delete the backing volume automatically
4. volumeMode #volume mode
   #whether the volume is consumed as a block device or a filesystem; the default is Filesystem
5. mountOptions #additional mount options (e.g. ro) for finer-grained control
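A minimal PV manifest combining these parameters might look like this (the name, NFS server address, and export path are hypothetical placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                        # hypothetical name
spec:
  capacity:
    storage: 5Gi                          # 1. size of the PV
  accessModes:
    - ReadWriteMany                       # 2. access mode (RWX)
  persistentVolumeReclaimPolicy: Retain   # 3. reclaim policy
  volumeMode: Filesystem                  # 4. volume mode (the default)
  mountOptions:                           # 5. extra mount options
    - ro
  nfs:
    server: 192.0.2.10                    # placeholder NFS server
    path: /exports/example                # placeholder export path
```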

#PV phases after creation
Available -- the volume is a free resource, not yet bound to any claim
Bound -- the volume is bound to a claim
Released -- the bound claim has been deleted, but the resource has not yet been reclaimed by the cluster
Failed -- the volume's automatic reclamation has failed

PVC parameters:

[root@haproxy1 case7-nfs]# kubectl explain pvc
1. accessModes: #access modes, see kubectl explain pvc.spec.accessModes
   ReadWriteOnce #————————the volume can be mounted read-write by a single node (RWO)
   ReadOnlyMany  #————————the volume can be mounted read-only by many nodes (ROX)
   ReadWriteMany #————————the volume can be mounted read-write by many nodes (RWX)
2. resources: #size of the storage requested by the PVC
3. selector: #label selector used to pick the PV to bind
     matchLabels: #match by label name
     matchExpressions: #match by set-based expressions
4. volumeMode #volume mode
   #whether the volume is consumed as a block device or a filesystem; the default is Filesystem
5. volumeName #name of the PV to bind
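A PVC manifest using these parameters, including a label selector, might look like this (the names and the label are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # hypothetical name
  namespace: myserver      # PVCs are namespaced
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi         # requested size
  selector:                # bind only to a PV carrying this label
    matchLabels:
      app: example         # hypothetical label
```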

Volume provisioning types:

static: statically provisioned volumes. The PV must be created manually in advance; a PVC is then created and bound to it, and finally mounted by a Pod. Suitable for scenarios where the PVs and PVCs are relatively fixed.

dynamic: dynamically provisioned volumes. A StorageClass is created first; when a Pod later uses a PVC, the storage can be provisioned dynamically through the StorageClass. Suitable for stateful service clusters such as a MySQL primary with multiple replicas, a ZooKeeper cluster, etc.


I. Statically provisioned volume example

#prepare the NFS export
[root@haproxy1 case8-pv-static]# mkdir /data/k8sdata/myserver/myappdata -p
[root@haproxy1 case8-pv-static]# cat /etc/exports
/data/k8sdata 172.16.92.0/24(rw,no_root_squash)
/data/k8sdata/myserver/myappdata 172.16.92.0/24(rw,no_root_squash)
[root@haproxy1 case8-pv-static]# systemctl restart nfs
#——————create the PV
[root@haproxy1 case8-pv-static]# cat 1-myapp-persistentvolume.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv
spec:
  #size of the volume
  capacity:
    storage: 10Gi
  #access mode: read-write by a single node (RWO)
  accessModes:
    - ReadWriteOnce
  #storage type: NFS
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 172.16.92.160
[root@haproxy1 case8-pv-static]# kubectl apply -f  1-myapp-persistentvolume.yaml 
persistentvolume/myserver-myapp-static-pv created
#STATUS Available means the PV has been created and is backed by the NFS export, but is not yet bound
[root@haproxy1 case8-pv-static]# kubectl get pv 
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myserver-myapp-static-pv   10Gi       RWO            Retain           Available                                   3s
#——————create the PVC
[root@haproxy1 case8-pv-static]# cat 2-myapp-persistentvolumeclaim.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc
  namespace: myserver
spec:
  volumeName: myserver-myapp-static-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
[root@haproxy1 case8-pv-static]# kubectl  apply -f 2-myapp-persistentvolumeclaim.yaml 
persistentvolumeclaim/myserver-myapp-static-pvc created
#STATUS Bound means the PVC has been bound to the PV
[root@haproxy1 case8-pv-static]# kubectl get pvc -n myserver 
NAME                        STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myserver-myapp-static-pvc   Bound    myserver-myapp-static-pv   10Gi       RWO                           8s
#create the Deployment and Service
[root@haproxy1 case8-pv-static]# cat 3-myapp-webserver.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
#name of the PVC to mount
            claimName: myserver-myapp-static-pvc 

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend
[root@haproxy1 case8-pv-static]# kubectl get pods -n myserver -o wide 
NAME                                             READY   STATUS    RESTARTS   AGE    IP               NODE            NOMINATED NODE   READINESS GATES
myserver-myapp-deployment-name-fb44b4447-bg54x   1/1     Running   0          3d1h   10.200.36.83     172.16.92.140   <none>           <none>
myserver-myapp-deployment-name-fb44b4447-s7cdg   1/1     Running   0          3d1h   10.200.107.251   172.16.92.142   <none>           <none>
myserver-myapp-deployment-name-fb44b4447-xhlgp   1/1     Running   0          3d1h   10.200.169.172   172.16.92.141   <none>           <none>
#once the pods are running, exec into one to verify the mount
[root@haproxy1 case8-pv-static]# kubectl exec -it myserver-myapp-deployment-name-fb44b4447-bg54x bash  -n myserver 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@myserver-myapp-deployment-name-fb44b4447-bg54x:/# 
root@myserver-myapp-deployment-name-fb44b4447-bg54x:/# df -h 
Filesystem                                      Size  Used Avail Use% Mounted on
overlay                                         100G  6.4G   94G   7% /
tmpfs                                            64M     0   64M   0% /dev
tmpfs                                           2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/sda1                                       100G  6.4G   94G   7% /etc/hosts
shm                                              64M     0   64M   0% /dev/shm
tmpfs                                           5.3G   12K  5.3G   1% /run/secrets/kubernetes.io/serviceaccount
172.16.92.160:/data/k8sdata/myserver/myappdata   70G  8.1G   62G  12% /usr/share/nginx/html/statics
tmpfs                                           2.8G     0  2.8G   0% /proc/acpi
tmpfs                                           2.8G     0  2.8G   0% /proc/scsi
tmpfs                                           2.8G     0  2.8G   0% /sys/firmware


#test———————— upload an image into the NFS-exported directory and check that the pod can serve it

II. Dynamically provisioned volume example

https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

1. Create the service account and RBAC rules

[root@haproxy1 case9-pv-dynamic-nfs]# cat 1-rbac.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

2. Create the StorageClass

[root@haproxy1 case9-pv-dynamic-nfs]# cat 2-storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
reclaimPolicy: Retain #PV reclaim policy; the default is Delete, which removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  - noresvport #have the NFS client use a new TCP source port when re-establishing the network connection
  - noatime #do not update the inode access timestamp on reads; improves performance under high concurrency
parameters:
  mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  #archive the data when the PVC is deleted; with the default false the data is removed

3. Create the NFS provisioner

[root@haproxy1 case9-pv-dynamic-nfs]# cat 3-nfs-provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: #deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 
          #image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2 
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.16.92.160
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.92.160
            path: /data/volumes

4. Create the PVC

[root@haproxy1 case9-pv-dynamic-nfs]# cat 4-create-pvc.yaml 
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage #name of the StorageClass to use
  accessModes:
    - ReadWriteMany #access mode
  resources:
    requests:
      storage: 500Mi #requested size

5. Create the web service

[root@haproxy1 case9-pv-dynamic-nfs]# cat 5-myapp-webserver.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc 

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend

#verify on the NFS storage server
[root@haproxy1 case9-pv-dynamic-nfs]# ll /data/volumes/
drwxrwxrwx 2 root root 6 May 18 16:35 myserver-myserver-myapp-dynamic-pvc-pvc-d4c2d565-cbea-4b19-91df-38c7ca1a3717
#exec into the pod and create nginx's index page
[root@haproxy1 case9-pv-dynamic-nfs]# kubectl exec -it myserver-myapp-deployment-name-7c855dc86d-sbb6g -n myserver  bash 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@myserver-myapp-deployment-name-7c855dc86d-sbb6g:/# 
root@myserver-myapp-deployment-name-7c855dc86d-sbb6g:/# df -h 
Filesystem                                                                                                Size  Used Avail Use% Mounted on
overlay                                                                                                   100G  2.0G   99G   2% /
tmpfs                                                                                                      64M     0   64M   0% /dev
tmpfs                                                                                                     2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/sda1                                                                                                 100G  2.0G   99G   2% /etc/hosts
shm                                                                                                        64M     0   64M   0% /dev/shm
172.16.92.160:/data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-d4c2d565-cbea-4b19-91df-38c7ca1a3717   70G  8.1G   62G  12% /usr/share/nginx/html/statics
tmpfs                                                                                                     5.3G   12K  5.3G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                     2.8G     0  2.8G   0% /proc/acpi
tmpfs                                                                                                     2.8G     0  2.8G   0% /proc/scsi
tmpfs                                                                                                     2.8G     0  2.8G   0% /sys/firmware
root@myserver-myapp-deployment-name-7c855dc86d-sbb6g:/# 
root@myserver-myapp-deployment-name-7c855dc86d-sbb6g:/# cd /usr/share/nginx/html/statics/
root@myserver-myapp-deployment-name-7c855dc86d-sbb6g:/usr/share/nginx/html/statics# ls
root@myserver-myapp-deployment-name-7c855dc86d-sbb6g:/usr/share/nginx/html/statics# 
root@myserver-myapp-deployment-name-7c855dc86d-sbb6g:/usr/share/nginx/html/statics# echo nihao >index.html
