Multiple Targets on Prometheus
I've configured Prometheus on CentOS; the version details are as follows.
prometheus-2.5.0.linux-386
I've added two targets in the prometheus.yml configuration file, and the node exporters on all the servers are running. The config is as follows:
scrape_configs:
  - job_name: "node"
    scrape_interval: "15s"
    target_groups:
      - targets: ['192.168.x.x:9100', '192.168.x.y:9100']
But in the Prometheus UI, the Targets page only shows a single node; the other one is not listed. If I remove one node, the remaining node shows up. How can I monitor multiple nodes? The Grafana dashboard also shows a Multiple Series error.
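For reference, older Prometheus releases used the `target_groups` key shown above; Prometheus 2.x expects `static_configs` instead. A minimal sketch of the same two targets with the current key (addresses unchanged from the question):

scrape_configs:
  - job_name: "node"
    scrape_interval: "15s"
    static_configs:
      - targets: ['192.168.x.x:9100', '192.168.x.y:9100']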
Solution 1:[1]
I've used the following configuration in prometheus.yml:
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'node'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['192.168.x.x:9100']
      - targets: ['192.168.x.y:9100']
      - targets: ['192.168.x.z:9100']
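The three single-target groups above can also be written as one static_configs group holding a list of targets; a minimal equivalent sketch with the same addresses:

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['192.168.x.x:9100', '192.168.x.y:9100', '192.168.x.z:9100']

Either form works; keeping separate groups is mainly useful when each group should carry its own labels.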
Solution 2:[2]
For later reference, the following config works well on Prometheus v2.3.1.
- prometheus.yml config:
- job_name: 'etcd-stats'
  static_configs:
    - targets: ['10.18.210.2:2379', '10.18.210.199:2379', '10.18.210.16:2379']
.......
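etcd usually serves its metrics endpoint over TLS on port 2379. If that is the case in your environment (an assumption, not stated above), the job also needs a scheme and a tls_config; a sketch with placeholder certificate paths:

- job_name: 'etcd-stats'
  scheme: https
  tls_config:
    ca_file: /etc/prometheus/etcd-ca.crt        # placeholder path
    cert_file: /etc/prometheus/etcd-client.crt  # placeholder path
    key_file: /etc/prometheus/etcd-client.key   # placeholder path
  static_configs:
    - targets: ['10.18.210.2:2379', '10.18.210.199:2379', '10.18.210.16:2379']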
Solution 3:[3]
You can scrape multiple targets in Prometheus. Try it this way:
global:
  scrape_interval: 15s   # Scrape targets every 15 seconds
  scrape_timeout: 15s    # Timeout after 15 seconds

  # Attach the label monitor=dev-monitor to any time series or alerts
  # when communicating with external systems (federation, remote storage, Alertmanager)
  external_labels:
    monitor: 'dev-monitor'

scrape_configs:
  - job_name: "job-name"
    scrape_interval: 10s  # Override the default global interval for this job
    scrape_timeout: 10s   # Override the default global timeout for this job
    static_configs:
      # First group of scrape targets
      - targets: ['localhost:9100', 'localhost:9101']
        labels:
          group: 'first-group'

      # Second group of scrape targets
      - targets: ['localhost:9200', 'localhost:9201']
        labels:
          group: 'second-group'
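The per-group labels above can then be used to scope queries and alerting rules. Below is a minimal sketch of a rule file that keys off the `group` label; the file name, alert name, and the 5m threshold are assumptions, and the file would be referenced from `rule_files` in prometheus.yml:

# node_rules.yml (file name assumed)
groups:
  - name: node-alerts
    rules:
      - alert: FirstGroupNodeDown
        # `up` carries the target labels from static_configs, including `group`
        expr: up{group="first-group"} == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Node exporter {{ $labels.instance }} is down"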
Hope this helps.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | vish |
| Solution 2 | Stuck |
| Solution 3 | Tchevass |