Kubernetes Prometheus CrashLoopBackOff / OOMKilled Puzzle

Periodically I see the container status terminated - OOMKilled (exit code: 137).

But the pod is scheduled to a node with plenty of free memory.

$ k get statefulset -n metrics 
NAME                      READY   AGE
prometheus                0/1     232d


$ k get po -n metrics
NAME           READY   STATUS             RESTARTS   AGE
prometheus-0   1/2     CrashLoopBackOff   147        12h

$ k get events  -n metrics
LAST SEEN   TYPE      REASON    OBJECT             MESSAGE
10m         Normal    Pulled    pod/prometheus-0   Container image "prom/prometheus:v2.11.1" already present on machine
51s         Warning   BackOff   pod/prometheus-0   Back-off restarting failed container


$ k logs -f prometheus-0 -n metrics --all-containers=true

level=warn ts=2020-08-22T20:48:02.302Z caller=main.go:282 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2020-08-22T20:48:02.302Z caller=main.go:329 msg="Starting Prometheus" version="(version=2.11.1, branch=HEAD, revision=e5b22494857deca4b806f74f6e3a6ee30c251763)"
level=info ts=2020-08-22T20:48:02.302Z caller=main.go:330 build_context="(go=go1.12.7, user=root@d94406f2bb6f, date=20190710-13:51:17)"
level=info ts=2020-08-22T20:48:02.302Z caller=main.go:331 host_details="(Linux 4.14.186-146.268.amzn2.x86_64 #1 SMP Tue Jul 14 18:16:52 UTC 2020 x86_64 prometheus-0 (none))"
level=info ts=2020-08-22T20:48:02.302Z caller=main.go:332 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-08-22T20:48:02.303Z caller=main.go:333 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2020-08-22T20:48:02.307Z caller=main.go:652 msg="Starting TSDB ..."
level=info ts=2020-08-22T20:48:02.307Z caller=web.go:448 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2020-08-22T20:48:02.311Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1597968000000 maxt=1597975200000 ulid=01EG7FAW5PE9ARVHJNKW1SJXRK
level=info ts=2020-08-22T20:48:02.312Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1597975200000 maxt=1597982400000 ulid=01EG7P6KDPXPFVPSMBXBDF48FQ
level=info ts=2020-08-22T20:48:02.313Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1597982400000 maxt=1597989600000 ulid=01EG7X2ANPN30M8ET2S8EPGKEA
level=info ts=2020-08-22T20:48:02.314Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1597989600000 maxt=1597996800000 ulid=01EG83Y1XPXRWRRR2VQRNFB37F
level=info ts=2020-08-22T20:48:02.314Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1597996800000 maxt=1598004000000 ulid=01EG8ASS5P9J1TBZW2P4B2GV7P
level=info ts=2020-08-22T20:48:02.315Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598004000000 maxt=1598011200000 ulid=01EG8HNGDXMYRH0CGWNHKECCPR
level=info ts=2020-08-22T20:48:02.316Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598011200000 maxt=1598018400000 ulid=01EG8RH7NPHSC5PAGXCMN8K9HE
level=info ts=2020-08-22T20:48:02.317Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598018400000 maxt=1598025600000 ulid=01EG8ZCYXNABK8FD3ZGFSQ9NGQ
level=info ts=2020-08-22T20:48:02.317Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598025600000 maxt=1598032800000 ulid=01EG968P5T7SJTVDCZGN6D5YW2
level=info ts=2020-08-22T20:48:02.317Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598032800000 maxt=1598040000000 ulid=01EG9D4DDPR9SE62C0XNE0Z64C
level=info ts=2020-08-22T20:48:02.318Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598040000000 maxt=1598047200000 ulid=01EG9M04NYMAFACVCMDD2RF11W
level=info ts=2020-08-22T20:48:02.319Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598047200000 maxt=1598054400000 ulid=01EG9TVVXNJ7VCDXQNNK2BTZAE
level=info ts=2020-08-22T20:48:02.320Z caller=repair.go:59 component=tsdb msg="found healthy block" mint=1598054400000 maxt=1598061600000 ulid=01EGA1QK5PHHZ6P6TNPHDWSD81

$ k describe statefulset prometheus -n metrics
Name:               prometheus
Namespace:          metrics
CreationTimestamp:  Fri, 03 Jan 2020 04:33:58 -0800
Selector:           app=prometheus
Labels:             <none>
Annotations:        <none>
Replicas:           1 desired | 1 total
Update Strategy:    RollingUpdate
  Partition:        824644121032
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=prometheus
  Annotations:      checksum/config: 6982e2d83da89ab6fa57e1c2c8a217bb5c1f5abe13052a171cd8d5e238a40646
  Service Account:  prometheus
  Containers:
   prometheus-configmap-reloader:
    Image:      jimmidyson/configmap-reload:v0.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --volume-dir=/etc/prometheus
      --webhook-url=http://localhost:9090/-/reload
    Environment:  <none>
    Mounts:
      /etc/prometheus from prometheus (ro)
   prometheus:
    Image:      prom/prometheus:v2.11.1
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --config.file=/etc/prometheus/prometheus.yml
      --web.enable-lifecycle
      --web.enable-admin-api
      --storage.tsdb.path=/prometheus/data
      --storage.tsdb.retention=1d
    Limits:
      memory:     1Gi
    Liveness:     http-get http://:9090/-/healthy delay=180s timeout=1s period=120s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/prometheus from prometheus (rw)
      /etc/prometheus-alert-rules from prometheus-alert-rules (rw)
      /prometheus/data from prometheus-data-storage (rw)
  Volumes:
   prometheus:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus
    Optional:  false
   prometheus-alert-rules:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-alert-rules
    Optional:  false
Volume Claims:
  Name:          prometheus-data-storage
  StorageClass:  prometheus
  Labels:        <none>
  Annotations:   <none>
  Capacity:      20Gi
  Access Modes:  [ReadWriteOnce]
Events:          <none>

What could be the reason?



Solution 1:[1]

Periodically I see the container status terminated - OOMKilled (exit code: 137).

But the pod is scheduled to a node with plenty of free memory.

As you may have already seen, it's evident you are exceeding the 1Gi memory limit configured on the container (the node's free memory doesn't matter once a container limit is set - the container is OOM-killed as soon as it goes past its own limit). The answer probably lies in how you are using Prometheus and what is pushing it past 1Gi. Some of the things you can look at:

  • Number of Time Series
  • Average Labels Per Time Series
  • Number of Unique Label Pairs
  • Scrape Interval (s)
  • Bytes per Sample

You can find a memory calculator for the usage above here.
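If the pod stays up for a while between restarts, rough numbers for these inputs can be pulled from the pod itself. A minimal sketch, assuming metrics-server is installed (for k top) and a local port-forward; the metric names are Prometheus' standard self-metrics:

$ k top pod prometheus-0 -n metrics --containers      # actual usage vs the 1Gi limit
$ k port-forward -n metrics prometheus-0 9090:9090 &

# number of active time series currently held in memory (TSDB head)
$ curl -s 'http://localhost:9090/api/v1/query?query=prometheus_tsdb_head_series'

# ingestion rate: samples appended per second over the last 5 minutes
$ curl -s 'http://localhost:9090/api/v1/query' \
    --data-urlencode 'query=rate(prometheus_tsdb_head_samples_appended_total[5m])'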


Solution 2:[2]

The 1Gi memory limit for the Prometheus pod is quite low for Kubernetes monitoring, where millions of metrics are scraped from thousands of targets (pods, nodes, endpoints, etc.).

The recommendation is to raise the memory limit for the Prometheus pod until it stops crashing with an out-of-memory error.
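One way to do that in place is a strategic merge patch against the StatefulSet; a minimal sketch, where the container name comes from the describe output above and the 4Gi value is just an arbitrary starting point to iterate on:

$ k -n metrics patch statefulset prometheus --patch '
spec:
  template:
    spec:
      containers:
      - name: prometheus            # matched by name; the reloader container is left untouched
        resources:
          limits:
            memory: 4Gi             # arbitrary starting point - raise until the OOMKills stop
'

With the RollingUpdate strategy shown above, the controller then recreates prometheus-0 with the new limit.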

It is recommended to set up monitoring for Prometheus itself - it exports its own metrics at the http://prometheus-host:9090/metrics URL - see, for example, http://demo.robustperception.io:9090/metrics.
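Even before a full self-monitoring setup, two of those self-metrics give a quick read on where the memory goes; a sketch, assuming the same port-forward to the pod as above:

# resident memory of the Prometheus process
$ curl -s http://localhost:9090/metrics | grep '^process_resident_memory_bytes'

# Go heap currently in use
$ curl -s http://localhost:9090/metrics | grep '^go_memstats_heap_inuse_bytes'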

Prometheus memory usage can be decreased in the following ways:

  • Reducing the number of scraped targets and/or collected time series, for example by dropping high-cardinality metrics with relabeling (a query for finding them is sketched below)
  • Increasing the scrape interval, so fewer samples are ingested per second
  • Lowering the data retention period and avoiding heavy queries over long time ranges
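To see which metric names contribute the most series before dropping or relabeling anything, a topk query over all series works; a sketch, again assuming the port-forward from above (note it can be expensive on a large instance):

# top 10 metric names by number of active series
$ curl -s 'http://localhost:9090/api/v1/query' \
    --data-urlencode 'query=topk(10, count by (__name__) ({__name__=~".+"}))'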

P.S. There are alternative Prometheus-like solutions that can use less memory when scraping the same set of targets. See, for example, vmagent and VictoriaMetrics.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Rico
Solution 2: valyala