K8S Storage Volumes: Dynamically Provisioning PVs with NFS
1. NFS Server
$ showmount -e 192.168.1.120
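If the export is not yet configured on the NFS server, a minimal /etc/exports entry might look like the following sketch. The path /data/nfs_data matches the deployment below; the 192.168.1.0/24 client range is an assumption, so adjust it to your network:

```
# /etc/exports — export /data/nfs_data to the cluster subnet (assumed 192.168.1.0/24)
/data/nfs_data 192.168.1.0/24(rw,sync,no_root_squash)
```

After editing the file, reload the export table with `exportfs -rv` and re-check with `showmount -e`.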
2. Deployment Files
$ ls -hl
3. Authorization
Role-Based Access Control (RBAC) is a method of regulating access to resources based on the roles of users. RBAC uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing administrators to configure policies dynamically through the Kubernetes API.
File: 1.rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
4. Resource Provisioning
Kubernetes supports two provisioning modes:
- Static: the cluster administrator manually creates a number of PVs, setting the characteristics of the backing storage in each PV definition.
- Dynamic: the administrator does not create PVs by hand. Instead, a StorageClass describes the backing storage and tags it as a certain class. A PVC then requests a class, and the system automatically creates a PV and binds it to the PVC. A PVC may declare its class as "" to explicitly disable dynamic provisioning for itself.
nfs-client-provisioner is a simple external provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server must supply the storage.
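For example, a claim that opts out of dynamic provisioning entirely looks like this sketch (the claim name is hypothetical; the key point is the empty storageClassName):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: static-only-claim        # hypothetical name, for illustration only
spec:
  storageClassName: ""           # empty class: only statically created PVs can bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi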
File: 2.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.120
            - name: NFS_PATH
              value: /data/nfs_data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.120
            path: /data/nfs_data
5. Storage Class
The provisioner name can be changed, but it must match the PROVISIONER_NAME environment variable in the deployment above.
File: 3.class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
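With archiveOnDelete set to "false", the backing directory is deleted when the claim is released. If you would rather keep the data, a second class can set it to "true" so the provisioner renames the directory instead of deleting it. A sketch (the class name here is hypothetical; the provisioner name must still match the deployment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive   # hypothetical name
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "true"             # keep an archived copy of the directory on delete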
6. Storage Claim (PVC)
File: 4.test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
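The volume.beta.kubernetes.io/storage-class annotation is the legacy form. On newer clusters the same claim can be written with the spec.storageClassName field instead; a sketch equivalent to the claim above:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # replaces the legacy annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi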
7. Using the PVC
File: 5.test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: linuxhub/nginx:1.15.5
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
8. Create the Resources
$ kubectl create -f .
9. View the Automatically Created PV
$ kubectl get pvc,pv
10. Verify the NFS Share Inside the Pod
Create a test file in the Pod:
$ kubectl exec -it test-pod -- bash
Check the shared directory on the NFS server:
- On the NFS server, a provisioned PV appears as a directory named ${namespace}-${pvcName}-${pvName}.
- When a PV is reclaimed, the directory is renamed to archived-${namespace}-${pvcName}-${pvName} (on the NFS server).
# ls -hl /data/nfs_data/
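As a quick illustration of the naming scheme, the snippet below builds both directory names for the test claim in this article. The PV name shown is a made-up example; real PV names are generated by the provisioner:

```shell
# Illustrate the directory naming the provisioner uses on the NFS server.
namespace=default
pvcName=test-claim
pvName=pvc-25b53ff9   # hypothetical generated PV name

# Directory created when the PV is provisioned:
echo "${namespace}-${pvcName}-${pvName}"            # default-test-claim-pvc-25b53ff9
# Directory left behind after reclaim when archiveOnDelete is "true":
echo "archived-${namespace}-${pvcName}-${pvName}"   # archived-default-test-claim-pvc-25b53ff9
```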
Reference: https://github.com/kubernetes-incubator/external-storage
Author: 泽泽
Original link: http://www.linuxhub.cn/2019/03/17/k8s-pvc-nfs-dynamic.html
Copyright: Unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit the source when reposting!