Kibana server is not ready yet
I have just installed Kibana 7.3 on RHEL 8. The Kibana service is active (running), but I receive a "Kibana server is not ready yet" message when I curl http://localhost:5601.
My Elasticsearch instance is on another server and responds successfully to my requests. I have updated kibana.yml with:
elasticsearch.hosts: ["http://EXTERNAL-IP-ADDRESS-OF-ES:9200"]
I can reach Elasticsearch from the internet, with this response:
{
  "name" : "ip-172-31-21-240.ec2.internal",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "y4UjlddiQimGRh29TVZoeA",
  "version" : {
    "number" : "7.3.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "4749ba6",
    "build_date" : "2019-08-19T20:19:25.651794Z",
    "build_snapshot" : false,
    "lucene_version" : "8.1.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
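Reachability from the internet does not guarantee reachability from the Kibana host itself (EC2 security groups can allow different sources). A quick check from the Kibana server can be sketched as a small helper; the host below is a placeholder for the external IP used in kibana.yml:

```shell
# check_es: print "up" if Elasticsearch answers on the given host:port,
# "down" otherwise. Run this on the Kibana server itself.
check_es() {
  if curl -fsS --max-time 5 "http://$1/" >/dev/null 2>&1; then
    echo up
  else
    echo down
  fi
}

# Placeholder: substitute the address from elasticsearch.hosts in kibana.yml.
check_es "EXTERNAL-IP-ADDRESS-OF-ES:9200"
```

If this prints "down" while the same curl succeeds from your own machine, the problem is network-level (security group or firewall), not Kibana itself.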
The result of sudo systemctl status kibana:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2019-09-19 12:22:34 UTC; 24min ago
Main PID: 4912 (node)
Tasks: 21 (limit: 4998)
Memory: 368.8M
CGroup: /system.slice/kibana.service
└─4912 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size>
Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:44 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0
The result of "sudo journalctl --unit kibana":
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","task_manager"],"pid":1356,"message":"PollError No Living connec>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
Do you have any idea where the problem is?
Solution 1:[1]
I faced the same issue once when I upgraded Elasticsearch from v6 to v7.
Deleting the .kibana* indices fixed the problem:
curl --request DELETE 'http://elastic-search-host:9200/.kibana*'
Solution 2:[2]
The error might be related to the elasticsearch.hosts setting. The following steps worked for me:
1. Open the /etc/elasticsearch/elasticsearch.yml file and check the setting:
#network.host: localhost
2. Open the /etc/kibana/kibana.yml file and check the setting:
#elasticsearch.hosts: ["http://localhost:9200"]
3. Check whether both lines have the same setting. If you are using an IP address for the Elasticsearch network host, you need to apply the same for Kibana.
The issue was kibana was unable to access elasticsearch locally.
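The comparison in the steps above can be sketched as a small helper (the paths are the standard RPM/DEB locations; adjust them if yours differ):

```shell
# show_setting: print the uncommented value of a key in a YAML config
# file, or a note if the key is missing or still commented out.
show_setting() {  # usage: show_setting <file> <key>
  grep -E "^[[:space:]]*$2[[:space:]]*:" "$1" 2>/dev/null \
    || echo "$2: (not set or commented out)"
}

show_setting /etc/elasticsearch/elasticsearch.yml network.host
show_setting /etc/kibana/kibana.yml elasticsearch.hosts
```

If one side shows localhost and the other an IP address (or one is still commented out), that mismatch is the thing to fix.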
Solution 3:[3]
This is probably not the solution for this question, but in my case the Kibana and Elasticsearch versions were not compatible.
Since I was using Docker, I simply recreated both containers using the same version (7.5.1).
Solution 4:[4]
The issue was that Kibana was unable to access Elasticsearch locally. I think you have enabled the xpack.security plugin in elasticsearch.yml by adding a new line:
xpack.security.enabled: true
If so, you need to uncomment these two lines in kibana.yml:
elasticsearch.username: "kibana"
elasticsearch.password: "your-password"
After that, save the changes and restart the Kibana service: sudo systemctl restart kibana.service
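You can verify those credentials independently of Kibana against Elasticsearch's _authenticate endpoint. A sketch with placeholder credentials: 200 means the login works, 401 means it is rejected, 000 means the host is unreachable.

```shell
# auth_check: print the HTTP status code of an authenticated request
# to Elasticsearch's _security/_authenticate endpoint.
auth_check() {  # usage: auth_check <user> <password> <host:port>
  curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
    -u "$1:$2" "http://$3/_security/_authenticate"
  echo
}

# Placeholders: substitute your real password and host.
auth_check kibana your-password localhost:9200
```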
Solution 5:[5]
Execute:
curl -XDELETE http://localhost:9200/*kibana*
and restart kibana service
service kibana restart
Solution 6:[6]
Refer to the discussion on Kibana unable to connect to Elasticsearch on Windows.
Deleting the .kibana_task_manager_1 index on Elasticsearch solved the issue for me!
Solution 7:[7]
There can be multiple reasons for this. A few things to try:
- Verify the version compatibility between Kibana and Elasticsearch according to https://www.elastic.co/support/matrix#matrix_compatibility
- Verify that Kibana is not trying to load plugins that are not installed on the master node
- Delete the .kibana* indices, as Karthik pointed out above
If these don't work, turn on verbose logging in kibana.yml and restart Kibana to get more insight into the cause.
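For reference, in Kibana 7.x verbose logging is a single setting in kibana.yml (legacy logging; remember to set it back to false once you are done debugging):

```yaml
logging.verbose: true
```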
Solution 8:[8]
In my case, the changes below fixed the problem.
In /etc/elasticsearch/elasticsearch.yml, uncomment:
#network.host: localhost
And in /etc/kibana/kibana.yml, uncomment:
#elasticsearch.hosts: ["http://localhost:9200"]
Solution 9:[9]
The reason may be this (for Linux Docker hosts only): by default the virtual memory limit is not high enough, so run the following command as root:
sysctl -w vm.max_map_count=262144
To keep the setting even after the VM reloads, please check this comment: https://stackoverflow.com/a/50371108/1151741
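If you prefer not to follow the link, the usual way to persist the setting is a sysctl drop-in file. A sketch (the file name 99-elasticsearch.conf is an arbitrary choice; the commented commands need root):

```shell
# Read the current limit (no root needed):
cat /proc/sys/vm/max_map_count

# Persist the higher value across reboots (run as root):
# echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-elasticsearch.conf
# sysctl --system
```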
Solution 10:[10]
To overcome this incident, I deleted and recreated both servers. I installed ES and Kibana 7.4, and also increased the instance size of the ES server from t1.micro to t2.small. All worked well. The previous ES instance sometimes stopped itself; its RAM was 1 GB, so I had to limit the JVM heap size, and maybe that is why the whole problem occurred.
Solution 11:[11]
My scenario ended up with the same issue but resulted from using the official Docker containers for both Elasticsearch and Kibana. In particular, the documentation on the Kibana image incorrectly assumes you will have at least one piece of critical knowledge.
In my case, the solution was to be sure that:
- The network tags matched.
- The link to the Elasticsearch Docker container uses the :elasticsearch tag, not the version tag.
I had made the mistake of using the Elasticsearch container version tag. Here is the corrected format of the docker run command I needed:
docker run -d --name {Kibana container name to set} --net {network name known to Elasticsearch container} --link {name of Elasticsearch container}:elasticsearch -p 5601:5601 kibana:7.10.1
Considering the command above, if we substitute...
- lookyHere as the Kibana container name
- myNet as the network name
- myPersistence as the Elasticsearch container name
Then we get the following:
docker run -d --name lookyHere --net myNet --link myPersistence:elasticsearch -p 5601:5601 kibana:7.10.1
That :elasticsearch right there is critical to getting this working, as it sets the elasticsearch.hosts value in the /etc/kibana/kibana.yml file... which you will not be able to easily modify if you are using the official Docker images. @user8832381's answer above gave me the direction I needed to figure this out.
Hopefully, this will save someone a few hours.
Solution 12:[12]
One possible issue is that you are running a Kibana version that is not compatible with Elasticsearch.
Check the bottom of the log file using sudo tail /var/log/kibana/kibana.log
I am using Ubuntu. I can see below message in the log file:
{"type":"log","@timestamp":"2021-11-02T15:46:07+04:00","tags":["error","savedobjects-service"],"pid":3801445,"message":"This version of Kibana (v7.15.1) is incompatible with the following Elasticsearch nodes in your cluster: v7.9.3 @ localhost/127.0.0.1:9200 (127.0.0.1)"}
Now you need to install the same version of Kibana as Elasticsearch. For example, on my system Elasticsearch 7.9.3 was installed but Kibana was 7.15.1.
How did I resolve this?
- Removed Kibana using sudo apt-get remove kibana
- Installed Kibana 7.9.3 using the commands below:
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.3-amd64.deb
shasum -a 512 kibana-7.9.3-amd64.deb
sudo dpkg -i kibana-7.9.3-amd64.deb
sudo service kibana start
curl --request DELETE 'http://localhost:9200/.kibana*'
Modify the /etc/kibana/kibana.yml file and uncomment the lines below:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
Then open this URL in your browser: http://localhost:5601/app/home
Similarly, you can check your Elasticsearch version and install the matching version of Kibana.
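The version comparison above can be scripted; a sketch, where the kibana binary path is the standard package location and the grep-based extractor is just a convenience for when jq is not installed:

```shell
# es_version: pull the version "number" field out of the Elasticsearch
# root response read from stdin, e.g. '{"version":{"number":"7.9.3"}}'.
es_version() {
  grep -o '"number"[[:space:]]*:[[:space:]]*"[^"]*"' | head -n1 | cut -d'"' -f4
}

# Compare the two versions (host and path may differ on your system):
curl -s --max-time 5 http://localhost:9200/ | es_version
[ -x /usr/share/kibana/bin/kibana ] && /usr/share/kibana/bin/kibana --version || true
```

If the two printed versions differ, install the Kibana package that matches the Elasticsearch version, as described above.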
Solution 13:[13]
- In my case the IP address was the cause. I used Docker to start Elasticsearch and Kibana and a bridge network to connect them. Finally I changed the IP address and restarted the Docker containers, and it worked for me.
Solution 14:[14]
In my case the server had been updated and SELinux was blocking the localhost:9200 connection with a connection-refused message.
You can check whether it is enabled in /etc/selinux/config.
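A quick way to test the SELinux theory without editing the config file is to check the current mode and temporarily switch to permissive mode. A sketch; getenforce and setenforce come with the SELinux userland utilities:

```shell
# selinux_mode: print the current SELinux mode (Enforcing, Permissive,
# or Disabled), or a note when the SELinux tools are not installed.
selinux_mode() {
  if command -v getenforce >/dev/null 2>&1; then
    getenforce
  else
    echo "SELinux tools not installed"
  fi
}

selinux_mode
# To test temporarily (run as root), switch to permissive mode:
# setenforce 0
```

If Kibana connects while in permissive mode, SELinux was the blocker; fix the policy rather than leaving enforcement off permanently.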
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow