Executing a Stored Procedure in Databricks when using the Azure Apache Spark connector

The following example, from the Azure team, uses the Apache Spark connector for SQL Server to write data to a table.

Question: How can we execute a stored procedure in Azure Databricks when using the Apache Spark connector?

    server_name = "jdbc:sqlserver://{SERVER_ADDR}"
    database_name = "database_name"
    url = server_name + ";" + "databaseName=" + database_name + ";"
    
    table_name = "table_name"
    username = "username"
    password = "password123!#" # Please specify password here
    
    try:
      # df is an existing Spark DataFrame holding the rows to write
      df.write \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .mode("overwrite") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("user", username) \
        .option("password", password) \
        .save()
    except Exception as error:
        # JDBC write failures surface as general exceptions, not ValueError
        print("Connector write failed:", error)
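One workaround I have seen suggested (unverified, so treat it as a sketch): the connector's DataFrame API only reads and writes tables, so a stored procedure has to be invoked over a plain JDBC connection instead. From PySpark you can reach the JVM's `java.sql.DriverManager` through the Py4J gateway. The procedure name `dbo.my_stored_proc` and its parameter below are hypothetical placeholders.

    # Minimal sketch, reusing url/username/password from the example above.
    # dbo.my_stored_proc and its argument are hypothetical placeholders.
    driver_manager = spark._sc._gateway.jvm.java.sql.DriverManager
    connection = driver_manager.getConnection(url, username, password)
    try:
        # JDBC escape syntax; plain T-SQL "EXEC dbo.my_stored_proc ?" works too
        statement = connection.prepareCall("{call dbo.my_stored_proc(?)}")
        statement.setString(1, "some_value")
        statement.execute()
        statement.close()
    finally:
        connection.close()

Note that this runs on the driver node only, outside Spark's distributed execution, which is usually acceptable for a one-off procedure call.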

