Precision gets lost for big numbers in Telegraf

Precision is lost for big numbers. I am using the tail input plugin to read a file whose contents are in JSON format. Below is the configuration:

[[inputs.tail]]
    files = ["E:/Telegraph/MSTCIVRRequestLog_*.json"]
    from_beginning = true
    name_override = "tcivrrequest"
    data_format = "json"
    json_strict = true

[[outputs.file]]
    files = ["E:/Telegraph/output.json"]
    data_format = "json"

Input file contains

{"RequestId":959011990586458245}

Expected Output

{"fields":{"RequestId":959011990586458245},"name":"tcivrrequest","tags":{},"timestamp":1632994599}

Actual Output

{"fields":{"RequestId":959011990586458200},"name":"tcivrrequest","tags":{},"timestamp":1632994599}

The number 959011990586458245 is converted into 959011990586458200 (note the last few digits).
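This is classic IEEE-754 double rounding: a 64-bit float has a 53-bit mantissa, so integers above 2**53 cannot all be represented exactly, and 959011990586458245 is well past that limit. A minimal sketch in Python (not Telegraf code) reproducing the rounding:

```python
# 64-bit floats have a 53-bit mantissa, so integers above 2**53
# (9007199254740992) are not all exactly representable.
big = 959011990586458245
as_float = float(big)      # rounds to the nearest representable double
print(int(as_float))       # 959011990586458240, not ...245
print(big > 2**53)         # True: outside the exact-integer range
```

The serializer then prints the shortest decimal that round-trips to that double, which is where the trailing `...200` in the output comes from.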

I have already tried the following, but none of it worked:

json_string_fields = ["RequestId"]

[[processors.converter]]
    [processors.converter.fields]
        string = ["RequestId"]

precision = "1s"

json_int64_fields = ["RequestId"]

character_encoding = "utf-8"

json_strict = true
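None of these options help because the original json parser decodes every JSON number into a float64 before any field option is applied, so the low digits are already gone by the time conversion or string handling runs. A hedged Python analogy (Python's own json module keeps big ints exact by default; forcing numbers through float mimics the old parser's behaviour):

```python
import json

doc = '{"RequestId":959011990586458245}'

# Old-parser-style behaviour: every number is decoded as a float,
# so the low digits are rounded away before anything else can run.
lossy = json.loads(doc, parse_int=float)
print(int(lossy["RequestId"]))   # 959011990586458240

# Integer-aware parsing (what json_v2 with type = "int" achieves):
exact = json.loads(doc)
print(exact["RequestId"])        # 959011990586458245
```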



Solution 1:[1]

I was able to reproduce this with the json parser as well. My suggestion would be to move to the json_v2 parser with a config like the following:

[[inputs.file]]
    files = ["metrics.json"]
    data_format = "json_v2"
    [[inputs.file.json_v2]]
        [[inputs.file.json_v2.field]]
        path = "RequestId"
        type = "int"

I was able to get a result as follows:

file RequestId=959011990586458245i 1651181595000000000

The newer parser is generally more accurate and flexible for simple cases like the one you provided.

Thanks!

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source

Solution 1: powersj