# Disk Usage
## Errors
Once the flood-stage disk watermark (95% by default) is exceeded, Elasticsearch puts the index into a read-only-allow-delete state and writes fail with:

```txt
[TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block]
```
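Assuming the node crossed the 95% flood-stage watermark, the usual remediation is to free disk space and then clear the `read_only_allow_delete` block (Elasticsearch 7.4+ releases the block automatically once usage drops below flood stage; older versions need a manual reset). A dry-run sketch; the endpoint URL is an assumption, adjust for your cluster:

```shell
# Assumed endpoint for the cluster; adjust as needed.
ES_URL="http://localhost:9200"

# Dry run: the command is only printed here; run it directly
# (without the echo) to actually clear the block.
CMD="curl -X PUT $ES_URL/_all/_settings -H 'Content-Type: application/json' -d '{\"index.blocks.read_only_allow_delete\": null}'"
echo "$CMD"
```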
The node log shows the 90% high watermark being exceeded:

```json
{
  "type": "server",
  "timestamp": "2025-02-11T19:04:17,088Z",
  "level": "WARN",
  "component": "o.e.c.r.a.DiskThresholdMonitor",
  "cluster.name": "elasticsearch",
  "node.name": "foobar-proxima-elasticsearch-0",
  "message": "high disk watermark [90%] exceeded on [1kwvRCVETLaI8TyIZB0BBA][foobar-proxima-elasticsearch-2][/usr/share/elasticsearch/data/nodes/0] free: 100.1gb[6.7%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete",
  "cluster.uuid": "SyioRY9kSMmhIKLUG68jFg",
  "node.id": "kviU-O3ySpeLjEo-gLlibw"
}
```
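For context, Elasticsearch's default disk watermarks are 85% (low: no new shards allocated), 90% (high: shards relocated away, as in the log above), and 95% (flood stage: indices made read-only), settable via `cluster.routing.allocation.disk.watermark.*`. A sketch of the absolute thresholds these imply for the ~1.5T data volume used here (sizes approximated in GB):

```shell
# Stock Elasticsearch watermark fractions, configurable via
# cluster.routing.allocation.disk.watermark.{low,high,flood_stage}.
TOTAL_GB=1536   # the ~1.5T data volume seen in `df -h` below

LOW=$(awk -v t="$TOTAL_GB" 'BEGIN { printf "%.0f", t * 0.85 }')
HIGH=$(awk -v t="$TOTAL_GB" 'BEGIN { printf "%.0f", t * 0.90 }')
FLOOD=$(awk -v t="$TOTAL_GB" 'BEGIN { printf "%.0f", t * 0.95 }')

echo "low=${LOW}G high=${HIGH}G flood_stage=${FLOOD}G"
# → low=1306G high=1382G flood_stage=1459G
```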
## Get StatefulSet
```bash
kubectl get sts | grep elastic
```
## Get Volume Mounts
```bash
kubectl get sts xxx -o yaml | grep -A 5 volumeMounts
```
```yaml
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
  name: foobar-elasticsearch
dnsPolicy: ClusterFirst
enableServiceLinks: true
restartPolicy: Always
```
## Enter Pod
```bash
kubectl exec -it xxx -- bash
```
## Show Disk Usage
```bash
df -h
```
```txt
Filesystem                        Size  Used  Avail  Use%  Mounted on
apple.nfs.foobar.work:/apple/xxx  1.5T  1.2T  197G   87%   /usr/share/elasticsearch/data
```
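A sketch of checking a mount against the 90% high watermark by parsing the `Use%` column of `df` output; the sample line above is used as stand-in input:

```shell
# Stand-in input: the df line from above.
DF_LINE='apple.nfs.foobar.work:/apple/xxx 1.5T 1.2T 197G 87% /usr/share/elasticsearch/data'

# Extract the Use% column (5th field) and strip the trailing %.
USED_PCT=$(echo "$DF_LINE" | awk '{ gsub(/%/, "", $5); print $5 }')

if [ "$USED_PCT" -ge 90 ]; then
  STATUS="above high watermark"
else
  STATUS="below high watermark"
fi
echo "$USED_PCT% used: $STATUS"
# → 87% used: below high watermark
```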
## NFS Server
```bash
ping apple.nfs.foobar.work
```

On the NFS server itself, check the exports:

```bash
cat /etc/exports
```
```txt
/data/nfs-fileshare 192.168.192.0/24(rw,async,insecure,fsid=0,no_auth_nlm,no_subtree_check,no_root_squash,no_all_squash)
/data/nfs-fileshare 192.168.81.39/32(rw,async,insecure,fsid=0,no_auth_nlm,no_subtree_check,no_root_squash,no_all_squash)
```
```bash
cd /data/nfs-fileshare/apple
du -sh * | tee 2025-05-23.log
```
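To decide what to clean up, sorting the `du` output numerically surfaces the largest subdirectories first (`-h` sizes don't sort reliably). A self-contained sketch on a throwaway directory; the paths are illustrative, not the real share:

```shell
# Build a throwaway directory tree to demonstrate on (not the real share).
WORK=$(mktemp -d)
mkdir -p "$WORK/big" "$WORK/small"
dd if=/dev/zero of="$WORK/big/blob" bs=1024 count=300 2>/dev/null
dd if=/dev/zero of="$WORK/small/blob" bs=1024 count=10 2>/dev/null

# Numeric block counts sort correctly; largest entries come first.
cd "$WORK"
LARGEST=$(du -s -- */ | sort -rn | head -n 1 | awk '{ print $2 }')
echo "largest: $LARGEST"
# → largest: big/

cd - >/dev/null
rm -rf "$WORK"
```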
## Get PVC Storage Class
```bash
kubectl get pvc | grep elastic
```
```txt
NAME     STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
es-xxx   Bound    pvc-xxx   30Gi       RWO            nfs-client     370d
```
```yaml
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: nfs-client
  volumeMode: Filesystem
  volumeName: pvc-3b5c85fb-b2a8-419b-aa69-47803c340e58
```
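Since the StorageClass reports `allowVolumeExpansion: true`, the PVC can be grown in place by patching `spec.resources.requests.storage`. A dry-run sketch; the PVC name and target size are placeholders, and the command is printed rather than executed:

```shell
# Placeholders: substitute the real PVC name and desired size.
PVC_NAME="es-xxx"
NEW_SIZE="60Gi"

# Dry run: drop the `echo` to actually apply the patch.
PATCH="{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"$NEW_SIZE\"}}}}"
echo kubectl patch pvc "$PVC_NAME" -p "$PATCH"
```

Note that with nfs-subdir-external-provisioner the requested capacity is not enforced by the provisioner: NFS imposes no per-directory quota, so the effective limit is the export's free space, as the `df -h` output above shows.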
## Get Storage Class
```bash
kubectl get sc
```
```txt
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage    cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   376d
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   376d
```
```bash
kubectl get sc nfs-client -o yaml
```
```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: nfs-subdir-external-provisioner
    meta.helm.sh/release-namespace: nfs
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2024-05-11T03:05:00Z"
  labels:
    app: nfs-subdir-external-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: nfs-subdir-external-provisioner-4.0.18
    heritage: Helm
    release: nfs-subdir-external-provisioner
  name: nfs-client
  resourceVersion: "240649"
  uid: b4c7c01f-5d29-47db-b2db-edd4d37354f8
parameters:
  archiveOnDelete: "true"
provisioner: cluster.local/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
| Parameter | Description |
|---|---|
| allowVolumeExpansion | The PVC field `spec.resources.requests.storage` can be edited for online expansion |
| provisioner | The storage provisioner; "subdir" means each PVC's data is stored in its own subdirectory |
| archiveOnDelete | When a PVC is deleted, the data in its NFS subdirectory is archived rather than removed outright |
| reclaimPolicy | Volume reclaim policy; `Delete` removes the associated volume when the PVC is deleted |
| volumeBindingMode | `Immediate` binds a PV as soon as the PVC is created |
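As I understand nfs-subdir-external-provisioner's behavior, with `archiveOnDelete: "true"` the subdirectory backing a deleted PVC is renamed with an `archived-` prefix on the share rather than removed. A minimal local simulation; the directory names are illustrative:

```shell
# Simulate archive-on-delete in a throwaway directory.
SHARE=$(mktemp -d)                # stands in for the NFS export root
SUBDIR="default-es-xxx-pvc-1234"  # provisioner-style subdir name (illustrative)
mkdir -p "$SHARE/$SUBDIR"

# On PVC deletion with archiveOnDelete=true, the provisioner renames the
# subdirectory instead of deleting its contents.
mv "$SHARE/$SUBDIR" "$SHARE/archived-$SUBDIR"

RESULT=$(ls "$SHARE")
echo "$RESULT"
# → archived-default-es-xxx-pvc-1234

rm -rf "$SHARE"
```

This is why deleting PVCs alone may not free NFS disk space: the archived copies still occupy the export until removed by hand.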