WaitForFirstConsumer PersistentVolumeClaim waiting for first consumer to be created before binding
I set up a new single-node Kubernetes cluster, whose node is tainted. But the PersistentVolume cannot be created successfully when I try to deploy a simple PostgreSQL instance. Detailed information is below.
The StorageClass is copied from the official page: https://kubernetes.io/docs/concepts/storage/storage-classes/#local
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The StatefulSet is:
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  ...
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      storageClassName: local-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
About the running StorageClass:
$ kubectl describe storageclasses.storage.k8s.io
Name: local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
About the running PersistentVolumeClaim:
$ kubectl describe pvc
Name: postgres-data-postgres-0
Namespace: default
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=postgres
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer <invalid> (x2 over <invalid>) persistentvolume-controller waiting for first consumer to be created before binding
K8s versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Solution 1:[1]
The app is waiting for the Pod, while the Pod is waiting for a PersistentVolume to be bound through its PersistentVolumeClaim.
However, with the kubernetes.io/no-provisioner provisioner, the PersistentVolume must be prepared by the user before it can be used.
My previous YAMLs lacked a PersistentVolume like this:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-data
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  local:
    path: /data/postgres
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: app
          operator: In
          values:
          - postgres
The local path /data/postgres must exist on the node before it is used; Kubernetes will not create it automatically.
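For example, on a single-node cluster the directory can be created on the host first (ownership may need adjusting to whatever user the PostgreSQL image runs as):
$ sudo mkdir -p /data/postgres
And since the PV above selects nodes by the app label, the node also needs that label (node name is a placeholder):
$ kubectl label node <node-name> app=postgres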
Solution 2:[2]
I just ran into this myself and was completely thrown for a loop until I realized that the StorageClass's VolumeBindingMode was set to WaitForFirstConsumer instead of my intended value of Immediate. This value is immutable, so you will have to:
Get the StorageClass YAML:
$ kubectl get storageclasses.storage.k8s.io gp2 -o yaml > gp2.yaml
Or you can just copy the example from the docs (make sure the metadata names match). Here is what I have configured:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
Then delete the old StorageClass before recreating it with the new volumeBindingMode set to Immediate.
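A minimal sketch of that delete-and-recreate step, assuming the edited manifest was saved as gp2.yaml as above:
$ kubectl delete storageclass gp2
$ kubectl apply -f gp2.yaml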
Note: the EKS cluster may need permissions to create cloud resources like EBS or EFS. Assuming EBS, you should be good with arn:aws:iam::aws:policy/AmazonEKSClusterPolicy.
After doing this you should have no problem creating and using dynamically provisioned PVs.
Solution 3:[3]
For me the problem was mismatched accessModes fields in the PV and PVC: the PVC was requesting RWX/ReadWriteMany while the PV was offering RWO/ReadWriteOnce.
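A quick way to compare the two sides (object names here are hypothetical):
$ kubectl get pv my-pv -o jsonpath='{.spec.accessModes}'
$ kubectl get pvc my-claim -o jsonpath='{.spec.accessModes}'
A PV can only bind a PVC whose requested modes are a subset of the modes the PV offers.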
Solution 4:[4]
The accepted answer didn't work for me. I think it's because the app key won't be set before the StatefulSet's Pods are deployed, preventing the PersistentVolume's node affinity from matching (and preventing the Pods from starting, with the error didn't find available persistent volumes to bind). To break this deadlock, I defined one PersistentVolume for each node, matching on the node's hostname instead (this may not be ideal, but it worked):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-node1
  labels:
    type: local
spec:
  […]
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
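The values to plug in come straight from each node's well-known hostname label, which can be listed with:
$ kubectl get nodes -L kubernetes.io/hostname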
Solution 5:[5]
In my case, I had a claimRef without a namespace specified.
The correct syntax is:
claimRef:
  namespace: default
  name: my-claim
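For context, claimRef sits under the PV's spec and pre-binds that volume to one specific claim. A minimal sketch with hypothetical names:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/my-pv
  claimRef:
    namespace: default
    name: my-claim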
A StatefulSet also prevented initialization; I had to replace it with a Deployment.
This was a huge headache.
Solution 6:[6]
I'm stuck in this vicious loop myself.
I'm trying to create a Kubegres cluster (which relies on dynamic provisioning, as per my understanding). I'm using RKE on a local-servers-like setup, and I have the same scheduling issue as the one initially mentioned.
Note that the access mode of the PVCs (created by Kubegres) is empty, as per the output below.
[rke@rke-1 manifests]$ kubectl get pv,pvc
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
persistentvolume/local-vol   20Gi       RWO            Delete           Available           local-storage            40s

NAME                                               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
persistentvolumeclaim/local-vol-mypostgres-1-0     Pending                                      local-storage   6m42s
persistentvolumeclaim/postgres-db-mypostgres-1-0   Pending                                      local-storage   6m42s
As an update: the issue in my case was that the PVC was not finding a proper PV, which was supposed to be dynamically provisioned. Dynamic provisioning is not supported for local storage classes, so I had to use a third-party provisioner, which solved my issue: https://github.com/rancher/local-path-provisioner
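For reference, that provisioner is installed from a single manifest per the project's README (verify the URL against the repo, as it may change):
$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
It creates a StorageClass named local-path that supports dynamic provisioning with WaitForFirstConsumer.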
Solution 7:[7]
This issue mainly happens with WaitForFirstConsumer when you define nodeName in the Deployment/Pod specification. Make sure you don't define nodeName to hard-bind the pod to a node. The issue should be resolved once you remove nodeName, because setting it bypasses the scheduler, and WaitForFirstConsumer binding is performed during scheduling.
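For illustration, a spec like the following (hypothetical) would reproduce the problem:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  nodeName: node1   # remove this; use nodeSelector or node affinity instead, which still go through the scheduler
  containers:
  - name: app
    image: postgres:13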
Solution 8:[8]
I believe this can be a valid message, meaning that no container that has volumes bound to the PersistentVolumeClaim has started yet.
I experienced this issue on Rancher Desktop. It turned out the problem was caused by Rancher not running properly after a macOS upgrade: the containers were not starting and would stay in a pending state.
After resetting Rancher Desktop (using the UI), the containers started fine and the message disappeared.
Solution 9:[9]
waiting for first consumer to be created before binding on a PersistentVolumeClaim means the Pod that requires this PVC has not been scheduled. kubectl describe pods may give some more clues. In my case, the node was not able to schedule the Pod since the node's pod limit was 110 and the deployment exceeded it. Increasing the pod limit and restarting the kubelet on the node solved it. Hope this helps identify the issue faster.
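Two standard commands for checking that limit (the node name is a placeholder):
$ kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'
$ kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> --no-headers | wc -l
The limit itself is the kubelet's maxPods setting, which defaults to 110.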
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | |
| Solution 3 | vladimirror |
| Solution 4 | pedroapero |
| Solution 5 | |
| Solution 6 | |
| Solution 7 | ouflak |
| Solution 8 | xilef |
| Solution 9 | TheFixer |