How to create a CSV file with PySpark?

I have a short question about PySpark's write API.

read_jdbc = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql:dbserver") \
    .option("dbtable", "schema.tablename") \
    .option("user", "username") \
    .option("password", "password") \
    .load()

read_jdbc.show()

When I call show(), it works perfectly.

However, when I try to write,

read_jdbc.write.csv("some/aaa")

it only creates the aaa folder, but no CSV file is created.

I also tried

read_jdbc.write.jdbc(mysql_url, table="csv_test", mode="append")

This does not work either. Any help?



Solution 1:[1]

You can write the DataFrame to CSV with

df.write.csv("file-path")

or

df.write.format("csv").save("file-path")

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Sudhin