I am trying to compute the 95th percentile value of a metric in Druid. I came across this documentation https://druid.apache.org/docs/latest/development/ext
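To make the question concrete, the kind of query I am hoping to run looks roughly like the sketch below. This is unverified: it assumes the druid-datasketches extension is loaded, and "metrics" and "latency_ms" are placeholder names for my datasource and the numeric column I want the percentile of.

```sql
-- Sketch only: assumes the druid-datasketches extension is loaded;
-- "metrics" and "latency_ms" are placeholder names.
SELECT
  APPROX_QUANTILE_DS(latency_ms, 0.95) AS p95_latency
FROM metrics
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
```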
I'm trying to parse JSON data in a column with Druid SQL in Superset SQL Lab. My table looks like this:

id   json_scores
0    {"foo": 20, "bar": 10}
1    {"foo": 30, "
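What I would like is to pull the individual keys out as separate columns. A rough sketch of the query I have in mind is below; it assumes a Druid version that provides the JSON functions (PARSE_JSON / JSON_VALUE), and "my_table" is a placeholder for the real table name.

```sql
-- Sketch only: assumes PARSE_JSON / JSON_VALUE are available in this Druid
-- version; "my_table" stands in for the real table name.
SELECT
  id,
  JSON_VALUE(PARSE_JSON(json_scores), '$.foo' RETURNING BIGINT) AS foo,
  JSON_VALUE(PARSE_JSON(json_scores), '$.bar' RETURNING BIGINT) AS bar
FROM my_table
```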
I want to write Spark batch results data to Apache Druid. I know Druid has native batch ingestion methods such as index_parallel. Druid runs Map-Reduce jobs in the
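The approach I am leaning toward, sketched below, is to have Spark write its results as files to deep storage and then have Druid ingest those files. The sketch uses Druid's SQL-based ingestion (the multi-stage query engine) rather than an index_parallel JSON spec, and it assumes the S3 input source is available; the bucket path, column names, and datasource name are all placeholders.

```sql
-- Sketch only. Step 1 (in Spark, not shown): write the result DataFrame as
-- JSON files under s3://my-bucket/spark-output/ .
-- Step 2 (below): ingest those files into a Druid datasource.
-- Assumes the multi-stage query engine and the S3 input source extension.
INSERT INTO spark_results
SELECT
  TIME_PARSE(event_time) AS __time,
  user_id,
  score
FROM TABLE(
  EXTERN(
    '{"type": "s3", "prefixes": ["s3://my-bucket/spark-output/"]}',
    '{"type": "json"}',
    '[{"name": "event_time", "type": "string"},
      {"name": "user_id", "type": "string"},
      {"name": "score", "type": "double"}]'
  )
)
PARTITIONED BY DAY
```

An index_parallel task spec pointing at the same files would be the more traditional equivalent of this.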
I have data present in Hive tables. I want to apply a bunch of transformations before loading that data into Druid. So there are ways, but I'm not sure about those
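One of the routes I have come across is Hive's Druid storage handler, where the transformations are simply part of the SELECT in a CTAS statement. The sketch below is only illustrative: "sales_raw", its columns, and "druid_sales" are placeholder names, and the exact type required for the __time column depends on the Hive version.

```sql
-- Sketch only: assumes the Hive-Druid integration (DruidStorageHandler) is
-- set up; table and column names are placeholders.
CREATE TABLE druid_sales
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY")
AS
SELECT
  CAST(order_ts AS TIMESTAMP) AS `__time`,  -- Druid's time column; exact Hive type requirement varies by version
  UPPER(region) AS region,                  -- example transformation
  amount * 1.1 AS gross_amount              -- example transformation
FROM sales_raw;
```

The other route I am aware of is to do the transformations on the Druid side via a transformSpec in the ingestion spec.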