How can I use rkt as the container runtime instead of Docker for Kubernetes?
I tried using rktlet (https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md), but when I run:

kubelet --cgroup-driver=systemd \
  --container-runtime=remote \
  --container-runtime-endpoint=/var/run/rktlet.sock \
  --image-service-endpoint=/var/run/rktlet.sock
I get the following errors:
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0320 13:10:21.661373 3116 server.go:407] Version: v1.13.4
I0320 13:10:21.663411 3116 plugins.go:103] No cloud provider specified.
W0320 13:10:21.664635 3116 server.go:552] standalone mode, no API client
W0320 13:10:21.669757 3116 server.go:464] No api server defined - no events will be sent to API server.
I0320 13:10:21.669791 3116 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0320 13:10:21.670018 3116 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0320 13:10:21.670038 3116 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0320 13:10:21.670125 3116 container_manager_linux.go:272] Creating device plugin manager: true
I0320 13:10:21.670151 3116 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0320 13:10:21.670254 3116 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0320 13:10:21.670271 3116 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
W0320 13:10:21.672059 3116 util_unix.go:77] Using "/var/run/rktlet.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/rktlet.sock".
W0320 13:10:21.672124 3116 util_unix.go:77] Using "/var/run/rktlet.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/rktlet.sock".
E0320 13:10:21.673168 3116 remote_runtime.go:72] Version from runtime service failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
E0320 13:10:21.673228 3116 kuberuntime_manager.go:184] Get runtime version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
F0320 13:10:21.673249 3116 server.go:261] failed to run Kubelet: failed to create kubelet: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
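As the first warning in the log notes, --cgroup-driver is deprecated as a flag and should be set via a kubelet config file instead. A minimal sketch of such a file (the path /var/lib/kubelet/config.yaml is an assumption; any path passed to --config works):

```yaml
# Hypothetical /var/lib/kubelet/config.yaml -- replaces the deprecated
# --cgroup-driver flag. Pass it with: kubelet --config=/var/lib/kubelet/config.yaml
# The runtime endpoint flags stay on the command line, ideally in the
# full URL form the log asks for: unix:///var/run/rktlet.sock
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```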
How do I create a Kubernetes cluster using rkt? Please help.
Solution 1:[1]
That's the way to run rktlet. However, rktlet is still fairly experimental, and I believe it's no longer actively developed either; as of this writing, the last commit was in May 2018.
You can try running it the other way, as described here or here. Basically, pass --container-runtime=rkt, --rkt-path=PATH_TO_RKT_BINARY, etc. to the kubelet.
Is there a reason why you need rkt? Note that --container-runtime=rkt is deprecated in the latest Kubernetes (1.13 as of this writing) but should still work.
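A sketch of that legacy invocation, assuming a kubelet at or below 1.13 with built-in rkt support; the rkt path and API endpoint below are assumptions for your host, and the guard is only there so the snippet degrades gracefully where kubelet is absent:

```shell
# Legacy rkt integration (deprecated; removed in later Kubernetes releases).
# /usr/bin/rkt and localhost:15441 are assumed defaults -- adjust for your host.
if command -v kubelet >/dev/null 2>&1; then
  kubelet --container-runtime=rkt \
    --rkt-path=/usr/bin/rkt \
    --rkt-api-endpoint=localhost:15441 \
    --cgroup-driver=systemd
else
  echo "kubelet not installed"
fi
```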
Solution 2:[2]
I'm not sure about unknown service runtime.v1alpha2.RuntimeService, but for unknown service runtime.v1alpha2.ImageService what helped in my case was removing "cri" from disabled_plugins in /etc/containerd/config.toml:

#disabled_plugins = ["cri"]
disabled_plugins = []

and then restarting the containerd service: systemctl restart containerd.service
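The edit above can be sketched as a one-liner. For illustration this works on a local copy of the file; on a real host the path is /etc/containerd/config.toml and you would follow up with the service restart:

```shell
# Clear disabled_plugins so containerd loads the cri plugin.
# A local copy is used here; the real file is /etc/containerd/config.toml.
CONF=./config.toml
printf 'disabled_plugins = ["cri"]\n' > "$CONF"   # example starting state
sed -i 's/^disabled_plugins = \["cri"\]/disabled_plugins = []/' "$CONF"
cat "$CONF"
# On the real host: systemctl restart containerd.service
```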
Solution 3:[3]
You can check the ctr plugin ls output for any plugins in an error state:
ctr plugin ls
TYPE ID PLATFORMS STATUS
io.containerd.content.v1 content - ok
io.containerd.snapshotter.v1 aufs linux/amd64 skip
io.containerd.snapshotter.v1 btrfs linux/amd64 skip
io.containerd.snapshotter.v1 devmapper linux/amd64 error
io.containerd.snapshotter.v1 native linux/amd64 ok
io.containerd.snapshotter.v1 overlayfs linux/amd64 ok
io.containerd.snapshotter.v1 zfs linux/amd64 skip
io.containerd.metadata.v1 bolt - ok
io.containerd.differ.v1 walking linux/amd64 ok
io.containerd.gc.v1 scheduler - ok
io.containerd.service.v1 introspection-service - ok
io.containerd.service.v1 containers-service - ok
io.containerd.service.v1 content-service - ok
io.containerd.service.v1 diff-service - ok
io.containerd.service.v1 images-service - ok
io.containerd.service.v1 leases-service - ok
io.containerd.service.v1 namespaces-service - ok
io.containerd.service.v1 snapshots-service - ok
io.containerd.runtime.v1 linux linux/amd64 ok
io.containerd.runtime.v2 task linux/amd64 ok
io.containerd.monitor.v1 cgroups linux/amd64 ok
io.containerd.service.v1 tasks-service - ok
io.containerd.internal.v1 restart - ok
io.containerd.grpc.v1 containers - ok
io.containerd.grpc.v1 content - ok
io.containerd.grpc.v1 diff - ok
io.containerd.grpc.v1 events - ok
io.containerd.grpc.v1 healthcheck - ok
io.containerd.grpc.v1 images - ok
io.containerd.grpc.v1 leases - ok
io.containerd.grpc.v1 namespaces - ok
io.containerd.internal.v1 opt - ok
io.containerd.grpc.v1 snapshots - ok
io.containerd.grpc.v1 tasks - ok
io.containerd.grpc.v1 version - ok
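In a long listing like this, the non-"ok" rows are easy to miss; they can be filtered with awk. A saved listing is piped in here so the snippet is self-contained; on a live host you would pipe ctr plugin ls directly:

```shell
# Filter a `ctr plugin ls` listing for plugins whose STATUS is not "ok".
# On a live host: ctr plugin ls | awk 'NR > 1 && $NF != "ok"'
awk 'NR > 1 && $NF != "ok" {print $2, $NF}' <<'EOF'
TYPE                          ID         PLATFORMS    STATUS
io.containerd.snapshotter.v1  aufs       linux/amd64  skip
io.containerd.snapshotter.v1  devmapper  linux/amd64  error
io.containerd.snapshotter.v1  overlayfs  linux/amd64  ok
EOF
```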
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 | Rico
Solution 2 | Oleg Neumyvakin
Solution 3 | Oleg Neumyvakin