Datadog displays different values for http.server.requests.count depending on the time window

In my project we use Spring Boot 2 with Actuator and Datadog metrics via the io.micrometer:micrometer-registry-datadog package.

Metrics are sent to Datadog via its HTTP API.

I've created a time series chart in Datadog for http.server.requests.count to see how many requests per second we have.

Depending on the chosen time window in Datadog, I see completely different values on the chart.

E.g. for a 15-minute window I see around 300 req/s, while for a 1-day window it is around 50 req/s.

I've experimented with the http.server.requests.count metric settings in Datadog, changing it to be treated as a rate (occurrences/sec), but that didn't help.

I've added the @Timed annotation to the controllers and customized the Datadog step to 30s. The rest is default.
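For reference, the setup looks roughly like this (a minimal sketch; the controller class and endpoint names here are made up for illustration):

import io.micrometer.core.annotation.Timed;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller; assumes io.micrometer:micrometer-registry-datadog
// is on the classpath and application.properties contains:
//   management.metrics.export.datadog.api-key=<api-key>
//   management.metrics.export.datadog.step=30s
@RestController
public class OrderController {

    @Timed // requests to this handler are recorded under http.server.requests
    @GetMapping("/orders")
    public String listOrders() {
        return "ok";
    }
}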

Do you have any idea why this is happening? Where did I make a mistake?



Solution 1:[1]

Perhaps what you see on the graph is an average value. There may be a spike in requests during those 15 minutes that is "flattened" on the bigger picture by the much wider time interval. Checking a 1-hour window might shed some light on the situation.
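One way to test this theory is to override the time aggregation with Datadog's .rollup() function so that spikes survive wide windows. A sketch of such a query, in the same JSON format as below (the max method and the 60-second interval are just example choices, not anything from the original question):

{
    "query": "avg:http.server.requests.count{*}.as_rate().rollup(max, 60)",
    "data_source": "metrics",
    "name": "query"
}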

Solution 2:[2]

The reason is that you are aggregating by rate. As you change the time frame, you see different values; change rate to sum and you should see consistent data.

In JSON format the query would look like this:

{
    "query": "sum:http.server.requests.count{!uri:/actuator/health,$Environment}.as_count()",
    "data_source": "metrics",
    "name": "query"
}
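
For comparison, if you do want a per-second rate that stays consistent across time windows, Datadog's .as_rate() modifier normalizes the count by the rollup interval. A variant of the same query (keeping the same tag filter as above) would be:

{
    "query": "sum:http.server.requests.count{!uri:/actuator/health,$Environment}.as_rate()",
    "data_source": "metrics",
    "name": "query"
}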

I would also recommend reading the Datadog documentation on queries and metric data types to understand how they work.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: (no author listed)
Solution 2: MissingSemiColon