Execute Multiple SQL Insert Statements in Parallel in Snowflake

I have a question about how Snowflake behaves when several SQL statements are executed in parallel.

For example, if I execute 10 INSERT statements into 10 different tables, all selecting from the same base table, will the tables be loaded in parallel?



Solution 1:[1]

Concurrency in Snowflake is managed either with multiple warehouses (separate compute resources) or by enabling multi-clustering on a warehouse (one virtual warehouse with more than one cluster of servers).

https://docs.snowflake.com/en/user-guide/warehouses-multicluster.html

I'm working with a customer today that runs millions of SQL commands a day; they have many different warehouses, and most of those warehouses are set to multi-cluster "auto-scale" mode.

Specifically, for your question, it sounds like you have ten sessions connected, each running an INSERT into one of ten tables by querying a single base table. I'd probably begin testing this with one virtual warehouse, configured with a minimum of one cluster and a maximum of three or four, then run the workload and review the results; a configuration along the lines of the sketch below would be a reasonable starting point.
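As a minimal sketch only (the warehouse name, table names, and exact settings are placeholders, not from the original answer), such a multi-cluster, auto-scaling warehouse could be created like this:

-- Hypothetical names and settings: a medium, auto-scaling multi-cluster warehouse
CREATE WAREHOUSE IF NOT EXISTS load_test_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;

-- Each of the ten sessions would then run its own statement, for example:
USE WAREHOUSE load_test_wh;
INSERT INTO target_table_01 SELECT * FROM base_table;

With auto-scale mode, Snowflake starts additional clusters (up to the configured maximum) only when queries begin to queue, so the ten concurrent inserts determine how many clusters are actually used.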

The size of the warehouse would mostly be determined by how large the query is (the SELECT portion). You could start with something like a Medium, then review the performance and query plans of the inserts to see whether that size is appropriate.

When reviewing the plans, check the queuing time to see whether three or four clusters is enough; it probably will be.

Your query history will also indicate which "cluster_number" each query ran on within the virtual warehouse. Checking the maximum cluster_number is one way to see how many clusters were running; another is to view the Warehouses tab in the web UI or to execute the "SHOW WAREHOUSES;" command. A query along the lines of the sketch below can pull this from the query history.
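As a hedged example (assuming access to the SNOWFLAKE.ACCOUNT_USAGE share; the warehouse name is a placeholder carried over from the sketch above), cluster usage and queuing could be checked roughly like this:

-- Maximum cluster_number used and total queuing caused by warehouse overload
SELECT warehouse_name,
       MAX(cluster_number)       AS max_clusters_used,
       SUM(queued_overload_time) AS total_queued_overload_ms
FROM snowflake.account_usage.query_history
WHERE warehouse_name = 'LOAD_TEST_WH'
  AND start_time >= DATEADD('hour', -1, CURRENT_TIMESTAMP())
GROUP BY warehouse_name;

-- Current state of all warehouses (started clusters, size, etc.)
SHOW WAREHOUSES;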

Some additional links that might help you:

https://www.snowflake.com/blog/auto-scale-snowflake-major-leap-forward-massively-concurrent-enterprise-applications/

https://community.snowflake.com/s/article/Putting-Snowflake-s-Automatic-Concurrency-Scaling-to-the-Test

https://support.snowflake.net/s/question/0D50Z00009T2QTXSA3/what-is-the-difference-in-scale-out-vs-scale-up-why-is-scale-out-for-concurrency-and-scale-up-is-for-large-queries-

I hope this helps...Rich


Solution 2:[2]

Since COPY and INSERT statements mostly write only new partitions, they can generally run in parallel with other COPY or INSERT statements, as illustrated below.

https://docs.snowflake.com/en/sql-reference/transactions.html
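As an illustration only (table names are hypothetical, not from the answer), two sessions issuing statements like the following at the same time would each write their own new micro-partitions, so neither INSERT normally blocks the other:

-- Session 1 (hypothetical names)
INSERT INTO sales_stage SELECT * FROM base_table WHERE region = 'EU';

-- Session 2, running concurrently against the same target table
INSERT INTO sales_stage SELECT * FROM base_table WHERE region = 'US';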

Solution 3:[3]

"https://docs.snowflake.com/en/sql-reference/transactions.html#transaction-commands-and-functions" states that "Most INSERT and COPY statements write only new partitions. Those statements often can run in parallel with other INSERT and COPY operations,..."

I assume that statements cannot run in parallel when they want to insert into the same micro-partition. Is that correct, or is there another explanation for why locks on INSERTs can happen?

Solution 4:[4]

"I execute 10 insert statements on 10 different tables with the same base table - will the tables be loaded in parallel?"

YES!

Also look at multi-table INSERT in Snowflake: https://docs.snowflake.com/en/sql-reference/sql/insert-multi-table.html (a short sketch follows below).
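As a minimal sketch (table and column names are hypothetical), an unconditional multi-table INSERT loads several tables from a single pass over the base table:

-- One scan of base_table feeds both targets (all names hypothetical)
INSERT ALL
  INTO target_table_01
  INTO target_table_02 (id, amount)
SELECT id, amount
FROM base_table;

Note that this is a single statement writing to multiple tables, whereas the original question is about ten separate INSERT statements running concurrently; both approaches are available.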

Solution 5:[5]

You can submit queries to Snowflake in parallel from SnowSQL by adding a ">" symbol after each statement.

for example:

The statement below submits all of the listed queries to Snowflake in parallel. Note that it will not exit if an error is encountered in any of the queries.

snowsql -o log_level=DEBUG -o exit_on_error=true -q "select 1;>select * from SNOWSQLTABLE;>select 2;>select 3;>insert into TABLE values (1)>;select * from SNOWLTABLE;>select 5;"

The statement below runs the queries one at a time and exits if any error is encountered.

snowsql -o log_level=DEBUG -o exit_on_error=true -q "select 1;select * from SNOWSQLTABLE;select 2;select 3;insert into SNOWSQLTABLE values (1);select * from SNOWSQLTABLE;select 5;"

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Rich Murnane
Solution 2: Iqra Ijaz
Solution 3: Jens Gorman
Solution 4: Nadzeya
Solution 5: Tushar Kathpal