Ceph cluster installed in K8s: admin password is incorrect
I've installed a Ceph cluster in Kubernetes using Rook; the service is running fine and PV/PVC provisioning works as expected.
I was able to log in to the dashboard once, but after a while the password is incorrect.
I used the command below to display the password, but it is still rejected:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
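(Aside: the command above reads the Kubernetes dashboard ServiceAccount token, not the Ceph dashboard credential. In a default Rook deployment, the generated password for the Ceph dashboard's admin user is stored in its own secret; a minimal sketch, assuming the default rook-ceph namespace and secret name:)
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo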
There are no obvious error messages from the mgr pod:
k logs -n rook-ceph rook-ceph-mgr-a-547f75956-c5f9t
debug 2022-02-05T00:09:14.144+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367973: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 767 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:16.144+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367974: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 1.2 KiB/s rd, 2 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:16.784+0000 ffff53657400 0 [progress INFO root] Processing OSDMap change 83..83
debug 2022-02-05T00:09:17.684+0000 ffff44bba400 0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.684+0000 ffff44bba400 0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:17.860+0000 ffff3da6c400 0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.860+0000 ffff3da6c400 0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:17.988+0000 ffff40b72400 0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.988+0000 ffff40b72400 0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:18.148+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367975: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 767 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:20.148+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367976: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 1.2 KiB/s rd, 2 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:21.788+0000 ffff53657400 0 [progress INFO root] Processing OSDMap change 83..83
debug 2022-02-05T00:09:22.144+0000 ffff58661400 0 log_channel(cluster) log [DBG] : pgmap v367977: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 853 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:23.188+0000 ffff5765f400 0 [balancer INFO root] Optimize plan auto_2022-02-05_00:09:23
debug 2022-02-05T00:09:23.188+0000 ffff5765f400 0 [balancer INFO root] Mode upmap, max misplaced 0.050000
debug 2022-02-05T00:09:23.188+0000 ffff5765f400 0 [balancer INFO root] Some objects (0.333333) are degraded; try again later
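(Aside: the logs themselves show all 96 PGs as active+undersized, with a third of objects degraded, which points at too few OSDs for the configured replica count rather than at a dashboard problem. Cluster health can be inspected from the toolbox pod; a sketch, assuming the standard rook-ceph-tools deployment:)
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph health detail
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree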
There are also no events in the namespace:
ubuntu@df1:~$ k get events -n rook-ceph
No resources found in rook-ceph namespace.
It seems one can use cephadm to reset the password with the command below, but how can I log in to the pod as the root user?
ceph dashboard ac-user-set-password USERNAME PASSWORD
The cephadm command can't be executed as a non-root user:
ubuntu@:~$ k exec -it rook-ceph-tools-7884798859-7vcnz -n rook-ceph -- bash
[rook@rook-ceph-tools-7884798859-7vcnz /]$ cephadm
ERROR: cephadm should be run as root
[rook@rook-ceph-tools-7884798859-7vcnz /]$
Solution 1
I just ran ceph dashboard ac-user-set-password admin -i 'file with password' and my password changed. I don't think cephadm works when exec'ing into the pod.
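(For completeness, a minimal sketch of applying this from the Rook toolbox, assuming the standard rook-ceph-tools deployment and a placeholder password; recent Ceph releases require the password to be supplied via a file with -i rather than on the command line:)
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
# inside the toolbox pod; echo -n avoids a trailing newline in the password file
echo -n 'NewStrongPassword1!' > /tmp/dashboard-pass   # placeholder password
ceph dashboard ac-user-set-password admin -i /tmp/dashboard-pass
rm /tmp/dashboard-pass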
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | JustSomeRandom |