How to analyze golang memory?

I wrote a Go program that uses 1.2GB of memory at runtime.

Calling go tool pprof http://10.10.58.118:8601/debug/pprof/heap results in a dump with only 323.4MB heap usage.

  • What about the rest of the memory usage?
  • Is there any better tool to explain golang runtime memory?

Using gcvis I get this:

(screenshot: gcvis output)

...and this heap profile:

(screenshot: heap profile)

Here is my code: https://github.com/sharewind/push-server/blob/v3/broker



Solution 1:[1]

The heap profile shows active memory, i.e. memory the runtime believes is in use by the Go program (memory that hasn't been collected by the garbage collector). When the GC does collect memory the profile shrinks, but no memory is returned to the system. Future allocations will try to use memory from the pool of previously collected objects before asking the system for more.

From the outside, this means that your program's memory use will either be increasing or staying level. What the outside system presents as the "Resident Size" of your program is the number of bytes of RAM assigned to your program, whether it's holding in-use Go values or collected ones.

These two numbers are often quite different because:

  1. The GC collecting memory has no effect on the outside view of the program
  2. Memory fragmentation
  3. The GC runs only when the memory in use doubles relative to the memory in use after the previous GC (by default; see: http://golang.org/pkg/runtime/#pkg-overview)
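
That doubling threshold mentioned in point 3 is the GOGC target, and it can be tuned at runtime. A minimal sketch (the value 50 here is just an illustration, not something from the question):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// setGCPercent adjusts the GOGC target and returns the previous value.
// GOGC=100 (the default) means a collection triggers once the heap has
// doubled since the previous GC; lower values collect more aggressively,
// trading CPU time for a smaller resident heap.
func setGCPercent(pct int) int {
	return debug.SetGCPercent(pct)
}

func main() {
	old := setGCPercent(50) // trigger GC at +50% heap growth instead of +100%
	fmt.Println("previous GOGC:", old)
}
```

The same knob is available without code changes via the GOGC environment variable.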

If you want an accurate breakdown of how Go sees the memory you can use the runtime.ReadMemStats call: http://golang.org/pkg/runtime/#ReadMemStats

Alternatively, since you are using web-based profiling, you can access the profiling data through your browser at http://10.10.58.118:8601/debug/pprof/ ; clicking the heap link will show you the debugging view of the heap profile, which has a printout of a runtime.MemStats structure at the bottom.

The runtime.MemStats documentation (http://golang.org/pkg/runtime/#MemStats) has the explanation of all the fields, but the interesting ones for this discussion are:

  • HeapAlloc: essentially what the profiler is giving you (active heap memory)
  • Alloc: similar to HeapAlloc, but for all Go-managed memory
  • Sys: the total amount of memory (address space) requested from the OS

There will still be discrepancies between Sys and what the OS reports, because what Go asks of the system and what the OS gives it are not always the same. Memory allocated through cgo or syscalls (e.g. malloc / mmap) is also not tracked by Go.
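
Reading those fields in code is a one-call affair; a minimal sketch:

```go
package main

import (
	"fmt"
	"runtime"
)

// memSummary reads the runtime's own view of memory via runtime.ReadMemStats
// and returns the three fields discussed above.
func memSummary() (heapAlloc, alloc, sys uint64) {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	return ms.HeapAlloc, ms.Alloc, ms.Sys
}

func main() {
	heapAlloc, alloc, sys := memSummary()
	fmt.Printf("HeapAlloc: %d B  Alloc: %d B  Sys: %d B\n", heapAlloc, alloc, sys)
}
```

Note that runtime.ReadMemStats stops the world briefly, so it's fine for occasional diagnostics but not something to call in a hot loop.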

Solution 2:[2]

As an addition to @Cookie of Nine's answer, in short: you can try the --alloc_space option.

go tool pprof uses --inuse_space by default. It samples memory usage, so the result is a subset of the real one.
With --alloc_space, pprof returns all memory allocated since the program started.

Solution 3:[3]

I was always confused about the growing resident memory of my Go applications, and finally I had to learn the profiling tools present in the Go ecosystem. The runtime provides many metrics within a runtime.MemStats structure, but it may be hard to understand which of them can help you find the reasons for memory growth, so some additional tools are needed.

Profiling environment

Use https://github.com/tevjef/go-runtime-metrics in your application. For instance, you can put this in your main:

import (
    "time"

    metrics "github.com/tevjef/go-runtime-metrics"
)

func main() {
    //...
    metrics.DefaultConfig.CollectionInterval = time.Second
    if err := metrics.RunCollector(metrics.DefaultConfig); err != nil {
        // handle error
    }
}

Run InfluxDB and Grafana within Docker containers:

docker run --name influxdb -d -p 8086:8086 influxdb
docker run -d -p 9090:3000/tcp --link influxdb --name=grafana grafana/grafana:4.1.0

Set up the interaction between Grafana and InfluxDB (Grafana main page -> Top left corner -> Datasources -> Add new datasource):

(screenshot: adding the InfluxDB datasource in Grafana)

Import dashboard #3242 from https://grafana.com (Grafana main page -> Top left corner -> Dashboard -> Import):

(screenshot: importing dashboard #3242)

Finally, launch your application: it will transmit runtime metrics to the containerized InfluxDB. Put your application under a reasonable load (in my case it was quite small: 5 RPS for several hours).

Memory consumption analysis

  1. The Sys curve (roughly the equivalent of RSS) is quite similar to the HeapSys curve. It turns out that dynamic memory allocation was the main factor in overall memory growth, so the small amount of memory consumed by stack variables seems to be constant and can be ignored;
  2. The constant number of goroutines guarantees the absence of goroutine leaks and stack variable leaks;
  3. The total number of allocated objects remains the same (there is no point in taking the fluctuations into account) during the lifetime of the process.
  4. The most surprising fact: HeapIdle grows at the same rate as Sys, while HeapReleased is always zero. Obviously the runtime doesn't return memory to the OS at all, at least under the conditions of this test:
HeapIdle minus HeapReleased estimates the amount of memory    
that could be returned to the OS, but is being retained by
the runtime so it can grow the heap without requesting more
memory from the OS.

(screenshots: memory metrics graphs)

For those who are trying to investigate a memory consumption problem, I would recommend following the described steps in order to exclude trivial errors (like a goroutine leak).

Freeing memory explicitly

It's interesting that one can significantly decrease memory consumption with explicit calls to debug.FreeOSMemory():

// in the top-level package

import (
    "runtime/debug"
    "time"
)

func init() {
    go func() {
        for range time.Tick(time.Second) {
            debug.FreeOSMemory()
        }
    }()
}

(screenshot: memory consumption comparison)

In fact, this approach saved about 35% of memory as compared with default conditions.

Solution 4:[4]

You can also use StackImpact, which automatically records and reports regular and anomaly-triggered memory allocation profiles to the dashboard, where they are available in a historical and comparable form. See the blog post Memory Leak Detection in Production Go Applications for more details.

(screenshot: StackImpact dashboard)

Disclaimer: I work for StackImpact

Solution 5:[5]

Attempting to answer the following original question:

Is there any better tool to explain golang runtime memory?

I find the following tools useful:

Statsview https://github.com/go-echarts/statsview Statsview is integrated with the standard net/http/pprof

Statsviz https://github.com/arl/statsviz

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Cookie of Nine
Solution 2 menghan
Solution 3
Solution 4 logix
Solution 5 Madhu Tomy