Kafka Connect S3 sink: multiple partitions

I have multiple questions about the Kafka Connect S3 sink connector.

1. Is it possible, using the S3 sink of Kafka Connect, to save records under a path built from multiple partition fields?

For example, I have this JSON record:

{
 "DateA":"UNIXTIMEA",
 "DateB":"UNIXTIMEB",
 "Data":"Some Data"
}

(all fields are top-level)

Would it be possible to save the data in S3 under the following path:

s3://sometopic/UNIXTIMEA/UNIXTIMEB

2. Can I transform UNIXTIMEA/UNIXTIMEB into a readable date format without changing the record value itself (for readability reasons)?

3. Can I add a prefix to UNIXTIMEA in the S3 path? For example:

s3://DateA=UNIXTIMEA/DateB=UNIXTIMEB/...

I have just started reading the docs and am slowly getting the hang of things, but I haven't found straightforward answers to these questions.

I would basically like to do all of these actions in my configuration, but I doubt I can without a custom partitioner, and I would like to confirm this as soon as possible; the sketch below shows roughly what I have in mind.
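For reference, here is an untested sketch of the kind of custom partitioner I mean. It assumes the Confluent storage partitioner API, records that arrive as Structs, and epoch seconds in DateA/DateB; the class name is made up:

import io.confluent.connect.storage.partitioner.DefaultPartitioner;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.sink.SinkRecord;

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Hypothetical partitioner that renders both date fields as readable
// UTC dates in the object path without touching the record value.
public class ReadableDatePartitioner<T> extends DefaultPartitioner<T> {

    private static final DateTimeFormatter FORMAT =
        DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    @Override
    public String encodePartition(SinkRecord sinkRecord) {
        Struct value = (Struct) sinkRecord.value(); // assumes structured records
        long dateA = value.getInt64("DateA");       // assumed epoch seconds
        long dateB = value.getInt64("DateB");
        return "DateA=" + FORMAT.format(Instant.ofEpochSecond(dateA))
             + "/DateB=" + FORMAT.format(Instant.ofEpochSecond(dateB));
    }
}

If the connector can do all of this with stock configuration instead, that would obviously be preferable.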

Thanks in advance,

C.potato



Solution 1:[1]

The FieldPartitioner does take a comma-separated list of field names via the partition.field.name property, and it encodes each path segment as fieldName=value:

https://github.com/confluentinc/kafka-connect-storage-common/blob/v11.0.5/partitioner/src/main/java/io/confluent/connect/storage/partitioner/FieldPartitioner.java#L34-L40
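As a minimal sketch (the bucket name, flush size, and format class are placeholders for your own setup), a sink configuration along these lines partitions by both fields:

name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=sometopic
s3.bucket.name=my-bucket
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
flush.size=1000
partitioner.class=io.confluent.connect.storage.partitioner.FieldPartitioner
partition.field.name=DateA,DateB

Because each segment is encoded as fieldName=value, this yields object paths like s3://my-bucket/topics/sometopic/DateA=UNIXTIMEA/DateB=UNIXTIMEB/..., which covers questions 1 and 3. It writes the raw field values, though, so rendering the timestamps as readable dates (question 2) still requires a custom partitioner along the lines sketched in the question. Note also that FieldPartitioner expects the record value to be a Struct, so schemaless JSON may need a converter or transform that produces structured records first.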

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

[1] Source: Stack Overflow
