Regarding computing table statistics in Spark

I am working on a pipeline that ingests a huge amount of data, around 80M records daily. The main table these records are inserted into holds around 173M records in total. Do you think it is a good idea to run ANALYZE TABLE to compute table statistics before and after inserting millions of records using spark-sql?
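For context, here is a minimal sketch of what such a statistics refresh might look like after the daily ingest. The table name (`warehouse.main_table`) and column names (`id`, `event_date`) are placeholders for illustration, and the exact setup (Hive metastore support, which columns to analyze) would depend on the actual environment:

```scala
import org.apache.spark.sql.SparkSession

object RefreshStats {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("refresh-table-stats")
      .enableHiveSupport() // so the computed statistics are persisted in the metastore
      .getOrCreate()

    // Hypothetical table name; substitute the real target table.
    val table = "warehouse.main_table"

    // ... daily ingest of ~80M records into `table` happens here ...

    // Table-level statistics (row count, size in bytes), used by the
    // cost-based optimizer for join strategy and broadcast decisions.
    spark.sql(s"ANALYZE TABLE $table COMPUTE STATISTICS")

    // Optionally, column-level statistics for columns that appear in
    // join keys or filters (more expensive to compute).
    spark.sql(s"ANALYZE TABLE $table COMPUTE STATISTICS FOR COLUMNS id, event_date")

    // Verify that the statistics were recorded.
    spark.sql(s"DESCRIBE EXTENDED $table").show(truncate = false)

    spark.stop()
  }
}
```

The main trade-off is that the ANALYZE step adds a full or partial scan of the table on each run, so whether it pays off depends on how much downstream queries rely on the cost-based optimizer.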


