Apache Iceberg table format on ADLS / Azure Data Lake

I am trying to find an integration that lets me use the Iceberg table format on ADLS / Azure Data Lake to perform CRUD operations. Is it possible to use it on Azure without a separate computation engine like Spark? I think AWS S3 supports this use case. Any thoughts on it?



Solution 1:[1]

Spark can use Iceberg with the ABFS connector, HDFS, or even local files; you just need the classpath and authentication set up correctly.
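As a sketch of what that setup might look like (the catalog name, storage account, container, and Iceberg runtime version below are placeholders, not something from the answer), an Iceberg Hadoop catalog can be pointed at an `abfss://` path when launching Spark:

```shell
# Hypothetical spark-sql launch wiring Iceberg to ADLS Gen2 via the ABFS connector.
# <container>, <account>, and <storage-account-key> are placeholders to fill in;
# the iceberg-spark-runtime version must match your Spark/Scala build.
spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.adls=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.adls.type=hadoop \
  --conf spark.sql.catalog.adls.warehouse=abfss://<container>@<account>.dfs.core.windows.net/warehouse \
  --conf spark.hadoop.fs.azure.account.key.<account>.dfs.core.windows.net=<storage-account-key>
```

The last `--conf` line uses storage-account-key authentication for simplicity; the ABFS connector also supports OAuth / service-principal credentials via the other `fs.azure.*` Hadoop properties.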

Solution 2:[2]

A bit late to the party, but Starburst Galaxy deploys Trino in any Azure region and has a Great Lakes connector that supports Hive (Parquet, ORC, CSV, etc.), Delta Lake, and Iceberg. https://blog.starburst.io/introducing-great-lakes-connectivity-for-starburst-galaxy
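Once a Trino catalog backed by Iceberg is configured, tables are managed with plain SQL, which covers the CRUD requirement from the question. A hypothetical CLI session might look like this (the `lake` catalog and `demo` schema names are placeholders; in Starburst Galaxy the catalog is set up through the UI rather than by hand):

```shell
# Hypothetical Trino CLI session against an Iceberg-backed catalog named "lake".
# Catalog and schema names are placeholders and assume a running Trino cluster.
trino --execute "
  CREATE TABLE lake.demo.events (id BIGINT, payload VARCHAR)
    WITH (format = 'PARQUET');
  INSERT INTO lake.demo.events VALUES (1, 'hello');
  UPDATE lake.demo.events SET payload = 'updated' WHERE id = 1;
  DELETE FROM lake.demo.events WHERE id = 1;
"
```

Because Iceberg supports row-level changes, `UPDATE` and `DELETE` work here in a way they would not on a plain Hive/Parquet table.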

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: stevel
Solution 2: Tom N