I am using k8s in minikube under Ubuntu and have deployed an nginx server, which I want to access at different levels, e.g. via the service IP, node IP, or pod IP, and none of them works.
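For reference, a minimal sketch of a NodePort Service exposing such an nginx Deployment; all names and ports here are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        # hypothetical name
spec:
  type: NodePort         # reachable via <node-ip>:<nodePort> as well as the cluster IP
  selector:
    app: nginx           # must match the Deployment's pod labels
  ports:
    - port: 80           # service (cluster IP) port
      targetPort: 80     # container port
      nodePort: 30080    # node port (30000-32767 range)
```

With this in place, the pod IP and cluster IP are only reachable from inside the cluster (or via `minikube ssh`), while the node port works from the host at `$(minikube ip):30080`.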
I'm creating a K8s cluster and I want to display the cluster information in Grafana using Prometheus (as usual). I've followed various documentation that has been…
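Assuming the Prometheus Operator (e.g. kube-prometheus-stack) is in use, most Grafana cluster dashboards read object-level state from kube-state-metrics; a hedged sketch of a ServiceMonitor that scrapes it:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-state-metrics
  namespace: monitoring              # assumes Prometheus watches this namespace
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  endpoints:
    - port: http-metrics             # metrics port name on the kube-state-metrics Service
      interval: 30s
```

kube-state-metrics is what exposes deployment, node, and pod state; node-level resource usage comes from node-exporter instead.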
I have started a dramatiq worker to do some tasks, and after a point it just gets stuck and throws the below-mentioned error after some time: [MainThread] [dramat…
I currently have elastic-operator version 1.4.0 running on an AKS cluster. It has a high restart count of 50 in 95 days for a prod env. Is this normal behavior?
I've been using K8S for a year or so and continue to revisit a problem. My app is running in K8S and I now need to debug it. I have a NodeJS app that I'm asking…
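One common approach (a sketch, assuming the image's entrypoint can be overridden) is to start Node with the inspector enabled and expose the debug port:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app                      # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: {app: node-app}
  template:
    metadata:
      labels: {app: node-app}
    spec:
      containers:
        - name: app
          image: my-node-app:dev      # hypothetical image
          command: ["node", "--inspect=0.0.0.0:9229", "server.js"]
          ports:
            - containerPort: 9229     # V8 inspector port
```

`kubectl port-forward deploy/node-app 9229:9229` then lets a local debugger (Chrome DevTools, VS Code) attach.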
I want to allow a ServiceAccount in namespace A to access a resource in namespace B. To achieve this I connect the ServiceAccount to a ClusterRole via a ClusterRoleBinding.
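A ClusterRoleBinding grants the ClusterRole in every namespace; if access should be limited to namespace B only, the usual pattern is a RoleBinding in B that references the ClusterRole. A minimal sketch, with hypothetical names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader                 # hypothetical
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: b                     # grants access only inside namespace B
subjects:
  - kind: ServiceAccount
    name: my-sa                    # hypothetical
    namespace: a                   # the ServiceAccount lives in namespace A
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```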
I have an up-and-running SolrCloud v8.11 cluster on Kubernetes, with solr-operator. Backup to an S3 bucket is enabled. How can I correctly write the request to…
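With solr-operator, backups are usually requested through a SolrBackup resource; a rough sketch follows, where every name is hypothetical and `s3-backups` assumes a repository of that name is already declared under the SolrCloud's backupRepositories. Field names reflect recent solr-operator versions and should be checked against the docs for the version in use:

```yaml
apiVersion: solr.apache.org/v1beta1
kind: SolrBackup
metadata:
  name: nightly-backup              # hypothetical
spec:
  solrCloud: my-solrcloud           # the SolrCloud resource to back up (assumption)
  repositoryName: s3-backups        # must match a backupRepository on the SolrCloud (assumption)
  collections:
    - my-collection                 # hypothetical collection name
```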
I'm running minikube on my local machine, and I can easily connect to my MongoDB pod from both external/internal sources through the setting below WITHOUT the…
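For comparison, one way to get both in-cluster and external access on minikube is a LoadBalancer Service (names and ports hypothetical); unlike a pod IP or plain ClusterIP, it becomes reachable from the host once `minikube tunnel` is running:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo                    # hypothetical
spec:
  type: LoadBalancer             # on minikube, needs `minikube tunnel` for an external IP
  selector:
    app: mongo                   # must match the MongoDB pod labels
  ports:
    - port: 27017
      targetPort: 27017
```

On minikube the external IP stays `<pending>` until `minikube tunnel` runs in a separate terminal.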
After watching a few videos on RBAC (role-based access control) on Kubernetes (of which this one was the most transparent for me), I've followed the steps, however…
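For a namespaced setup like the one most of those tutorials build, the pieces are a Role and a RoleBinding (hypothetical names below), and `kubectl auth can-i --as` is handy for verifying the result:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-viewer
  namespace: dev                    # hypothetical namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-viewer-binding
  namespace: dev
subjects:
  - kind: User
    name: jane                      # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-viewer
  apiGroup: rbac.authorization.k8s.io
```

`kubectl auth can-i list deployments -n dev --as jane` should then answer `yes`.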
I have been using kubebuilder for writing a custom controller, and I am aware of the Get(), Update(), and Delete() methods that it provides. But now I am looking for a method…
I am getting the error: dry-run failed, reason: Invalid, error: Deployment.apps "server" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value…
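This error usually means the valueFrom stanza is empty, malformed, or combined with a literal value. For reference, a valid shape with hypothetical names; valueFrom must contain exactly one source (configMapKeyRef, secretKeyRef, fieldRef, or resourceFieldRef) and cannot be set together with value on the same variable:

```yaml
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:               # exactly one source per valueFrom
        name: db-credentials      # hypothetical Secret name
        key: password
```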
I am playing around with the Horizontal Pod Autoscaler in Kubernetes. I've set the HPA to start up new instances once the average CPU utilization passes 35%. However…
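For reference, a sketch of such an HPA using the autoscaling/v2 API (target names are hypothetical); note that Utilization is computed against the containers' CPU requests, so the target Deployment must set them:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 35   # scale out above 35% of requested CPU, on average
```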
My pod metrics stopped working on a local minikube deployment. It appears similar to an issue reported a while back, but I don't see the same error messages in…
Amazon EKS requires subnets in at least two Availability Zones. Does this redundancy apply to all the nodes or only to the control plane? If all the nodes are r…
My application uses the apache2 web server. Due to restrictions in the Kubernetes cluster, I do not have root privileges inside the pod, so I have changed the default port…
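Ports below 1024 need root (or CAP_NET_BIND_SERVICE) to bind, which is why moving apache to a high port works for non-root pods. Alongside the apache config change (e.g. Listen 8080 instead of 80), the pod spec should expose the new port; a sketch with assumed values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache                     # hypothetical
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000                # assumed unprivileged UID
  containers:
    - name: apache2
      image: my-apache2:latest     # hypothetical image with Listen 8080 configured
      ports:
        - containerPort: 8080      # unprivileged port (>1024), no root needed
```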
In our project, which also uses Kustomize, our base deployment.yaml file looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
# …
```
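Given such a base, a typical overlay patches the Deployment rather than copying it; a minimal sketch, with the directory layout assumed:

```yaml
# overlays/prod/kustomization.yaml (hypothetical path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                     # where the base deployment.yaml lives (assumption)
patches:
  - target:
      kind: Deployment
      name: nginx-deployment
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        replicas: 3                # example override applied on top of the base
```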
So I have two APIs running on Kubernetes. One has a controller function as such:

```csharp
string filePath = "/blobs/data/runsession/" + folderName;
if (!Directory.Exists…
```
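If the intent is for the second API to read what this one writes under /blobs, the usual route is a volume shared by both pods; a sketch assuming a PVC (name hypothetical, and ReadWriteMany requires a storage class that supports it):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blobs-pvc                 # hypothetical
spec:
  accessModes: [ReadWriteMany]    # needed if the two APIs run in separate pods
  resources:
    requests:
      storage: 5Gi
```

Each Deployment's pod template would then reference it with a persistentVolumeClaim volume mounted at mountPath: /blobs.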
We are reorganising our namespaces in Kubernetes. We want to move our PersistentVolumeClaims created by a StorageClass from one namespace to another. (Our backup t…
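One known approach, sketched here with assumed names (and worth testing against a backup first): set the backing PV's persistentVolumeReclaimPolicy to Retain, delete the old PVC, clear the PV's claimRef, then recreate the claim in the target namespace pinned to that PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                      # hypothetical
  namespace: new-namespace        # the target namespace (assumption)
spec:
  storageClassName: standard      # must match the PV's storageClassName (assumption)
  volumeName: pvc-0a1b2c3d        # the retained PV's name (assumption)
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi               # should match the PV's capacity
```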
We are using the Curator service discovery in Docker and Kubernetes environments. We set up the connection string using the DNS names of the containers/pods.
Kubernetes has CronJobs, which can be used to schedule jobs periodically (https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/). Is there a way to r…
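For reference, a minimal CronJob along the lines of that page (schedule and image are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello                     # hypothetical
spec:
  schedule: "*/5 * * * *"         # every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: hello
              image: busybox:1.36
              command: ["sh", "-c", "date; echo hello"]
```

A one-off run can also be triggered on demand with `kubectl create job --from=cronjob/hello hello-once`.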