Data Factory/Synapse: How to merge many files?
After generating ~90 gzip'd CSV files of roughly 100 MB each, I want to merge them all into a single file. Using the built-in merge option on a copy activity, it looks like the operation would take well over a dozen hours.
https://i.stack.imgur.com/yymnW.png
How can I merge many files in blob/ADLS storage quickly with Data Factory/Synapse?
Solution 1:[1]
You could try a 2 step process.
- Merge all the CSV files into a single file in Parquet format.
- Copy that Parquet file out to a single CSV file.
Writes to Parquet are generally fast (provided the data is clean, e.g. no spaces in column names) and the resulting files are much smaller.
Edit - ADF Data Flow is another option. If that is still not fast enough, you may have to create a Spark notebook in Synapse and write the merge in Spark code, using a Spark pool size that balances time against cost.
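If you do go the Spark notebook route, a minimal sketch is below. The storage account, container, and folder names are placeholders, and it assumes all the CSVs share the same schema; coalesce(1) collapses the data into one partition so a single output file gets written.

# Minimal Synapse Spark notebook sketch; the paths below are placeholders.
# `spark` is the SparkSession a Synapse notebook provides automatically.
src = "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/input/*.csv.gz"
dst = "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/output/merged"

# Spark decompresses .gz files automatically and reads all matching files at once.
df = spark.read.option("header", "true").csv(src)

# coalesce(1) -> one partition -> one part-*.csv file under dst/
df.coalesce(1).write.mode("overwrite").option("header", "true").csv(dst)

Note that coalesce(1) funnels the final write through a single executor, so the write itself is serial; the read and any transformations still run in parallel.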
Solution 2:[2]
Easy: convert the Spark DataFrames to pandas DataFrames and then do the merge.
Step #1: convert each Spark DataFrame to a pandas DataFrame.
df1 = df1.select("*").toPandas()
df2 = df2.select("*").toPandas()
Step #2: concatenate them. Since the goal is to append the files' rows into one dataset, concatenate along axis=0 (rows).
import pandas as pd
result = pd.concat([df1, df2], axis=0, ignore_index=True)
See this link for more info.
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html
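Extending that idea to the ~90 files from the question, a rough sketch could look like the following. The input and output paths are placeholders, and it assumes the gzip'd CSVs are available on a local or mounted path and that they all share the same columns.

import glob
import pandas as pd

# Placeholder paths; point these at wherever the files actually live.
files = sorted(glob.glob("/data/input/*.csv.gz"))

# pandas infers gzip compression from the .gz extension.
frames = [pd.read_csv(f) for f in files]

# Stack the rows of every file and write out one combined CSV.
merged = pd.concat(frames, axis=0, ignore_index=True)
merged.to_csv("/data/output/merged.csv", index=False)

Keep in mind that ~90 files of ~100 MB gzipped can expand to many gigabytes once decompressed, so this only works if the machine running pandas has enough memory; the Spark approach in Solution 1 scales better for that volume.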
Also, here are a couple of other write-ups to consider.
https://www.sqlservercentral.com/articles/merge-multiple-files-in-azure-data-factory
https://markcarrington.dev/2020/11/27/combining-data-with-azure-data-factory/
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow