Kubernetes: How do I delete a PV in the correct manner?

The stateful set es-data was failing in our test environment and I was asked to delete the corresponding PV.

So I deleted the following for es-data: 1) the PVC, 2) the PV. Both showed as Terminating and were left over the weekend. On arriving this morning they still showed as Terminating, so I deleted both the PVC and the PV forcefully. No joy. To fix the whole thing I had to delete the stateful set.

Is this the correct way to delete a PV?



Solution 1:[1]

You can delete the PV using the following two commands:

kubectl delete pv <pv_name> --grace-period=0 --force

And then delete the finalizers using:

kubectl patch pv <pv_name> -p '{"metadata": {"finalizers": null}}'
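If you are unsure whether finalizers are what is blocking the delete, you can inspect them first; a minimal check, assuming a PV named <pv_name>:

kubectl get pv <pv_name> -o jsonpath='{.metadata.finalizers}'

This typically prints ["kubernetes.io/pv-protection"], the finalizer that keeps a PV from being removed while Kubernetes still considers it in use.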

Solution 2:[2]

First run kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'

then run kubectl delete pv {PV_NAME}

Solution 3:[3]

It worked for me when I deleted the PVC first, then the PV:

kubectl delete pvc data-p-0
kubectl delete pv  <pv-name>  --grace-period=0 --force

This assumes one wants to delete the PVC as well; the PV deletion seems to hang otherwise.
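If it is the PVC that gets stuck in Terminating, describing it usually shows why; a quick check, using the claim name from the example above:

kubectl describe pvc data-p-0

The Finalizers field will typically list kubernetes.io/pvc-protection, which is only cleared once no pod is still using the claim.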

Solution 4:[4]

To begin with, make sure your reclaim policy is set to Delete. Once the PVC is deleted, the PV should then be deleted automatically.

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
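You can inspect and, if needed, change the reclaim policy on a live PV; a short sketch, assuming a PV named <pv_name>:

kubectl get pv <pv_name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
kubectl patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'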

If that doesn't help, please check this (closed) Kubernetes PV issue: https://github.com/kubernetes/kubernetes/issues/69697 and try deleting the PV finalizers.

Solution 5:[5]

HINT: PVs may be named like pvc-name-of-volume, which can be confusing! See the claimRef check below.

  • PV: Persistent Volume
  • PVC: Persistent Volume Claim
  • Pod -> PVC -> PV -> Host Machine
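Because of that naming overlap, it helps to check which claim each PV is actually bound to before deleting anything; a minimal sketch reading the claimRef field:

kubectl get pv -o custom-columns=PV:.metadata.name,CLAIM:.spec.claimRef.name,NAMESPACE:.spec.claimRef.namespace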

  1. First find the PVs: kubectl get pv (PVs are cluster-scoped, so no -n flag is needed)

  2. Then delete the PV, which sets its status to Terminating:

kubectl delete pv {PV_NAME}

  3. Then patch it to remove the finalizers, which will set the status of the bound PVC to Lost: kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'

  4. Then get the PVCs: kubectl get pvc -n {namespace}

  5. Then you can delete the PVC: kubectl delete pvc {PVC_NAME} -n {namespace}


Theoretical example:

Let's say we have Kafka installed in the storage namespace:

$ kubectl get pv

$ kubectl delete pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1

$ kubectl get pv (the delete above hangs, but the PV status turns to Terminating)

$ kubectl patch pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1 -p '{"metadata":{"finalizers":null}}'

$ kubectl get pvc -n storage

$ kubectl delete pvc data-kafka-0 -n storage
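As a final check that the cleanup worked (storage being the namespace from this example), both lists should come back without the volumes you removed:

$ kubectl get pvc -n storage
$ kubectl get pv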

Solution 6:[6]

kubectl delete pv <pv_name>

Also, check the reclaim policy of the PV: it should not be set to Retain, otherwise the PV is kept rather than removed when its claim is deleted.

Solution 7:[7]

The command below worked for me:

kubectl delete pv <pv_name> --grace-period=0 --force

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Prafull Ladha
Solution 2: (unattributed)
Solution 3: Mz A
Solution 4: Tim Abell
Solution 5: (unattributed)
Solution 6: Harsh Manvar
Solution 7: kumar