Timeout in Logstash Elasticsearch Filter and Elasticsearch Output
I am using the http_poller input plugin, scheduled to run every 15 minutes. Based on the http_poller API response, I need to execute an Elasticsearch query.
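For reference, a minimal sketch of that input plus the lookup filter (the URL, host, index name, query string, and copied field here are placeholders, not my real values):

```
input {
  http_poller {
    urls => {
      api => "https://example.com/api/status"   # placeholder API endpoint
    }
    schedule => { "every" => "15m" }            # poll every 15 minutes
    codec => "json"
  }
}

filter {
  elasticsearch {
    hosts => ["https://elastic.example.com:9243"]   # placeholder cluster
    index => "logs-xx-prod_xx"
    query => "status:%{[status]}"                   # placeholder lookup query
    fields => { "@timestamp" => "previous_run" }    # copy a field from the matched doc
  }
}
```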
To execute the Elasticsearch query I am using the Elasticsearch filter plugin. It runs the first time without issue, but from the second run onwards it throws the error below:
[2022-05-09T11:34:46,738][WARN ][logstash.filters.elasticsearch][logs][9c5fb8a0078cad1be396fedd387eb8680d72086b85be9efe15e6893ce2e73332] Failed to query elasticsearch for previous event {:index=>"logs-xx-prod_xx", :error=>"Read timed out"}
It also throws the error below for the Elasticsearch output plugin from the second run onwards:
[2022-05-09T11:35:17,236][WARN ][logstash.outputs.elasticsearch][logs][8850a096b09c55eca7744c74cb4821d3f6e42a3e87a464228013b22ea1f0d576] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [https://elastic:[email protected]:9243/][Manticore::SocketException] Connection reset by peer: socket write error {:url=>https://elastic:[email protected]:9243/, :error_message=>"Elasticsearch Unreachable: [https://elastic:[email protected]:9243/][Manticore::SocketException] Connection reset by peer: socket write error", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2022-05-09T11:35:17,236][ERROR][logstash.outputs.elasticsearch][logs][8850a096b09c55eca7744c74cb4821d3f6e42a3e87a464228013b22ea1f0d576] Attempted to send a bulk request but Elasticsearch appears to be unreachable or down {:message=>"Elasticsearch Unreachable: [https://elastic:[email protected]:9243/][Manticore::SocketException] Connection reset by peer: socket write error", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :will_retry_in_seconds=>2}
[2022-05-09T11:35:19,236][ERROR][logstash.outputs.elasticsearch][logs][8850a096b09c55eca7744c74cb4821d3f6e42a3e87a464228013b22ea1f0d576] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>4}
[2022-05-09T11:35:19,377][WARN ][logstash.outputs.elasticsearch][logs] Restored connection to ES instance {:url=>"https://elastic:[email protected]:9243/"}
I have configured the Logstash pipeline from Kibana using the centralized pipeline management feature of the ES 7.16 version.
I have tried the configurations below, but none of them has worked:
- Changed the pipeline batch size to 100, then 50, then 25, with pipeline workers set to 1.
- Set validate_after_inactivity to 0 (and tried different values as well) in the Elasticsearch output plugin.
- Tried various timeout values such as 100, 180, 200, 600, etc.
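Roughly, the output section with those settings looked like this (host, credentials, and index are placeholders; the values shown are one of the combinations I tried):

```
output {
  elasticsearch {
    hosts => ["https://elastic.example.com:9243"]   # placeholder cluster
    user => "elastic"
    password => "${ES_PWD}"
    index => "logs-xx-prod_xx"
    validate_after_inactivity => 0    # also tried other values
    timeout => 600                    # also tried 100, 180, 200
  }
}
# pipeline.workers: 1 and pipeline.batch.size: 25 (also tried 100 and 50)
# were set via the pipeline settings in Kibana's centralized pipeline management.
```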
Previously I was setting a custom document ID using the document_id parameter; that is also disabled now.
One strange behavior I have noticed is that the document count in the ES index still increases even after the errors above.
Also, there seems to be no option to set a timeout in the Elasticsearch filter plugin: when I tried to set timeout, it threw an error saying "timeout param is not supported".
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow