How to test the problem size scaling performance of code

I'm running a simple kernel which adds two streams of double-precision complex values. I've parallelized it using OpenMP with custom scheduling: the slice_indices container holds a different set of indices for each thread.

    // Each thread iterates over its own set of slice indices (custom scheduling).
    for (const auto& index : slice_indices)
    {
        auto* tens1_data_stream = tens1.get_slice_data(index);
        const auto* tens2_data_stream = tens2.get_slice_data(index);

        // Vectorized element-wise complex addition: tens1 += tens2.
        #pragma omp simd safelen(8)
        for (auto d_index = std::size_t{}; d_index < tens1.get_slice_size(); ++d_index)
        {
            tens1_data_stream[d_index].real += tens2_data_stream[d_index].real;
            tens1_data_stream[d_index].imag += tens2_data_stream[d_index].imag;
        }
    }
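
For reference, here is a self-contained analogue of how this kernel is driven. The Complex struct, the contiguous slice layout, and the round-robin assignment of slice indices to threads are simplified stand-ins for my actual tensor class and custom scheduling:

    // Simplified, self-contained analogue of the kernel above (illustrative only).
    #include <cstddef>
    #include <vector>
    #include <omp.h>

    struct Complex { double real; double imag; };

    int main()
    {
        const std::size_t n_slices   = 24;                   // e.g. one slice per core
        const std::size_t slice_size = std::size_t{1} << 20; // elements per slice

        std::vector<Complex> tens1(n_slices * slice_size, Complex{1.0, 2.0});
        std::vector<Complex> tens2(n_slices * slice_size, Complex{3.0, 4.0});

        #pragma omp parallel
        {
            // "Custom scheduling": each thread owns a fixed set of slice indices.
            const auto tid      = static_cast<std::size_t>(omp_get_thread_num());
            const auto nthreads = static_cast<std::size_t>(omp_get_num_threads());

            for (std::size_t index = tid; index < n_slices; index += nthreads)
            {
                auto*       tens1_data_stream = tens1.data() + index * slice_size;
                const auto* tens2_data_stream = tens2.data() + index * slice_size;

                #pragma omp simd safelen(8)
                for (std::size_t d_index = 0; d_index < slice_size; ++d_index)
                {
                    tens1_data_stream[d_index].real += tens2_data_stream[d_index].real;
                    tens1_data_stream[d_index].imag += tens2_data_stream[d_index].imag;
                }
            }
        }
        return 0;
    }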

The target computer has an Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz with 24 cores, 32 kB of L1 cache, 1 MB of L2 cache and 33 MB of L3 cache. The total memory bandwidth is 115 GB/s.

The following is how my code scales with problem size S = N x N x N:

[plot: runtime vs. problem size S]

Given the information I've provided, can anybody tell me:

  1. whether it's scaling well, and/or
  2. how I could go about finding out whether it's utilizing all the resources available to it?

Thanks in advance.

EDIT:

Now I've plotted the performance in GFLOP/s with 24 cores and with 48 cores (two NUMA nodes of the same processor). It looks like this:

[plot: GFLOP/s vs. problem size, 24 vs. 48 cores]

And now the strong and weak scaling plots:

[strong scaling plot] [weak scaling plot]

Note: I've measured the bandwidth and it turns out to be 105 GB/s.

Question: the meaning of the weird peak at 6 threads / problem size 90x90x90x16 B in the weak scaling plot is not obvious to me. Can anybody clarify this?



Solution 1:[1]

Your graph has roughly the right shape: tiny arrays fit in the L1 cache and therefore get very high performance; arrays of a megabyte or so fit in L2 and get lower performance; beyond that you stream from main memory and get low performance. So the relation between problem size and runtime should indeed get steeper with increasing size. However, the resulting graph (by the way, plotting ops/sec is more common than plotting raw runtime) should have a stepwise structure as you hit successive cache boundaries. I'd say you don't have enough data points to demonstrate this.

Also, typically you would repeat each "experiment" several times to (1) even out statistical hiccups and (2) make sure the data is indeed in cache.
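
A minimal harness along these lines (the sizes, repetition count and reported metric are just illustrative) would do all of this: sweep the working-set size across the 32 kB / 1 MB / 33 MB cache boundaries, touch the data once to warm the caches, and keep the best of several timed repetitions:

    // Size sweep with warm-up and repetition (sketch; single-threaded on purpose).
    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Complex { double real; double imag; };

    static void kernel(Complex* a, const Complex* b, std::size_t n)
    {
        #pragma omp simd safelen(8)
        for (std::size_t i = 0; i < n; ++i)
        {
            a[i].real += b[i].real;
            a[i].imag += b[i].imag;
        }
    }

    int main()
    {
        // Working-set sizes from 16 kB to 256 MB straddle L1, L2 and L3.
        for (std::size_t bytes = std::size_t{16} << 10; bytes <= std::size_t{256} << 20; bytes *= 2)
        {
            const std::size_t n = bytes / (2 * sizeof(Complex)); // two arrays in play
            std::vector<Complex> a(n, Complex{1.0, 2.0});
            std::vector<Complex> b(n, Complex{3.0, 4.0});

            kernel(a.data(), b.data(), n); // warm-up pass: pull the data into cache

            double best = 1e30;
            for (int rep = 0; rep < 10; ++rep) // repeat; keep the fastest run
            {
                const auto t0 = std::chrono::steady_clock::now();
                kernel(a.data(), b.data(), n);
                const auto t1 = std::chrono::steady_clock::now();
                best = std::min(best, std::chrono::duration<double>(t1 - t0).count());
            }
            // Two additions (real + imaginary) per element.
            std::printf("%10zu bytes  %8.2f GFLOP/s\n", bytes, 2.0 * n / best / 1e9);
        }
        return 0;
    }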

Since you tagged this "openmp", you should also explore fixing the array size and varying the core count. You should then get a more or less linear increase in performance, until the processor no longer has enough bandwidth to sustain all the cores.
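
For instance (again just a sketch, with a placeholder size chosen so the arrays do not fit in any cache), fix one array size and sweep the thread count, reporting the achieved bandwidth:

    // Fixed problem size, increasing thread count (sketch).
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>
    #include <omp.h>

    struct Complex { double real; double imag; };

    int main()
    {
        const std::size_t n = std::size_t{1} << 25;  // ~1 GiB total: well beyond L3
        std::vector<Complex> a(n, Complex{1.0, 2.0});
        std::vector<Complex> b(n, Complex{3.0, 4.0});

        for (int threads = 1; threads <= omp_get_max_threads(); ++threads)
        {
            omp_set_num_threads(threads);
            const auto t0 = std::chrono::steady_clock::now();
            #pragma omp parallel for simd safelen(8)
            for (std::size_t i = 0; i < n; ++i)
            {
                a[i].real += b[i].real;
                a[i].imag += b[i].imag;
            }
            const auto t1 = std::chrono::steady_clock::now();
            const double t = std::chrono::duration<double>(t1 - t0).count();
            // 48 bytes move per element: read a, read b, write a.
            std::printf("%2d threads: %6.1f GB/s\n", threads, 48.0 * n / t / 1e9);
        }
        return 0;
    }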

A commenter brought up the concepts of strong and weak scaling. Strong scaling means: for a fixed problem size, use more and more cores. That should give you increasing performance, but with diminishing returns as overhead starts to dominate. Weak scaling means: keep the problem size per process/thread/whatever constant and increase the number of processing elements. That should give you an almost linear increase in performance, until -- as I indicated -- you run out of bandwidth. What you seem to be doing is actually neither of these: you're doing "optimistic scaling", increasing the problem size with everything else held constant. That should give you better and better performance, except for the cache effects I indicated.
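
In code terms the difference is only in how the problem size for each run is chosen; roughly (the sizes here are hypothetical):

    #include <cstddef>

    // Strong scaling: the total problem size stays fixed as threads are added.
    // Weak scaling:   the problem size per thread stays fixed, so the total grows.
    std::size_t elements_for_run(int nthreads, bool weak_scaling)
    {
        const std::size_t total_fixed      = std::size_t{1} << 27; // strong-scaling total
        const std::size_t per_thread_fixed = std::size_t{1} << 22; // weak-scaling share
        return weak_scaling ? per_thread_fixed * static_cast<std::size_t>(nthreads)
                            : total_fixed;
    }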

So if you want to say "this code scales", you have to decide under which scenario. For what it's worth, your figure of 200 Gb/sec is plausible. It depends on the details of your architecture, but for a fairly recent Intel node it sounds reasonable.
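
As a sanity check against the memory bottleneck: this kernel does 2 flops per element but moves 3 x 16 = 48 bytes per element (read both inputs, write one output), so once the arrays fall out of cache the sustainable flop rate is capped by the measured bandwidth. A back-of-the-envelope sketch, plugging in the 105 GB/s you measured:

    // Bandwidth-bound ceiling for the complex-add kernel (back-of-the-envelope).
    #include <cstdio>

    int main()
    {
        const double bytes_per_element = 3 * 16.0; // read tens1, read tens2, write tens1
        const double flops_per_element = 2.0;      // one real add + one imaginary add
        const double measured_bw_gbs   = 105.0;    // measured memory bandwidth

        const double gflops_cap = measured_bw_gbs / bytes_per_element * flops_per_element;
        std::printf("bandwidth-bound ceiling: %.2f GFLOP/s\n", gflops_cap);
        return 0;
    }

That works out to roughly 4.4 GFLOP/s for out-of-cache sizes; sustained figures well above that only make sense while the data still fits in some level of cache.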

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1