Convert columns to rows in Spark SQL

I have some data like this:

ID   Value1  Value2  Value40
101       3     520     2001
102      29     530     2020

I want to take this data and convert it into key-value style pairs instead:

ID   ValueVv  ValueDesc
101        3  Value1
101      520  Value2
101     2001  Value40

I think it's some kind of pivot, but I can't work out what this needs to look like in code.

I am trying to solve this in Spark SQL, but also with a PySpark DataFrame, since I am using Spark.

I could easily just union each column into an output using SQL, but I was hoping there is a more efficient way.

I've looked at melt and stack as options, but I'm unsure how to apply them effectively.



Solution 1:[1]

It's the opposite of pivot; it's called unpivot.
In Spark, unpivoting is implemented with the stack function.

Using PySpark, this is what you could do if you didn't have many columns:

from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(101, 3, 520, 2001),
     (102, 29, 530, 2020)],
    ['ID', 'Value1', 'Value2', 'Value40'])

# stack(n, expr1, expr2, ...) turns the expressions into n rows;
# here each (value, 'name') pair becomes one (ValueVv, ValueDesc) row
df = df.select(
    "ID",
    F.expr("stack(3, Value1, 'Value1', Value2, 'Value2', Value40, 'Value40') as (ValueVv, ValueDesc)")
)
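
Since the question also asks for SQL: the same stack expression works in plain Spark SQL. A minimal sketch, assuming the DataFrame is registered under the hypothetical view name my_table:

df.createOrReplaceTempView("my_table")  # hypothetical view name
df = spark.sql("""
    SELECT ID,
           stack(3, Value1, 'Value1', Value2, 'Value2', Value40, 'Value40')
               AS (ValueVv, ValueDesc)
    FROM my_table
""")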

From your example I see that you may have lots of columns. In that case you can generate the stack expression dynamically:

# Build a "`colName`, 'colName'" pair for every column except the ID
cols_to_unpivot = [f"`{c}`, '{c}'" for c in df.columns if c != 'ID']
stack_string = ", ".join(cols_to_unpivot)
df = df.select(
    "ID",
    F.expr(f"stack({len(cols_to_unpivot)}, {stack_string}) as (ValueVv, ValueDesc)")
)
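
One caveat: stack puts all values into a single column, so the unpivoted columns must share a compatible type. If yours are mixed (say, int and string), casting inside the generated expression is a simple fix; a sketch, with string chosen arbitrarily:

# Cast each value column so stack sees one common type
cols_to_unpivot = [f"cast(`{c}` as string), '{c}'" for c in df.columns if c != 'ID']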

For the example data, both versions return:

+---+-------+---------+
| ID|ValueVv|ValueDesc|
+---+-------+---------+
|101|      3|   Value1|
|101|    520|   Value2|
|101|   2001|  Value40|
|102|     29|   Value1|
|102|    530|   Value2|
|102|   2020|  Value40|
+---+-------+---------+
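
Since the question mentions melt: on Spark 3.4+ there is also a built-in DataFrame.unpivot (exposed under the alias melt as well), so you don't have to hand-build the stack expression. A sketch, applied to the original wide df:

# Spark 3.4+ only; 'melt' is an alias of 'unpivot'
df_long = df.unpivot(
    ids=["ID"],                              # columns to keep as identifiers
    values=["Value1", "Value2", "Value40"],  # columns to turn into rows
    variableColumnName="ValueDesc",          # receives the original column name
    valueColumnName="ValueVv",               # receives the value
)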

Solution 2:[2]

In Scala, you can use flatMap as follows:

import spark.implicits._  // needed for the tuple encoder and toDF

val schema = df.schema
val df2 = df.flatMap { row =>
  val id = row.get(0).toString
  (1 until row.size).map { i =>
    (id, row.get(i).toString, schema(i).name)  // (ID, value, column name)
  }
}.toDF("ID", "ValueVv", "ValueDesc")

df2.show()
+---+-------+---------+
| ID|ValueVv|ValueDesc|
+---+-------+---------+
|101|      3|   Value1|
|101|    520|   Value2|
|101|   2001|  Value40|
|102|     29|   Value1|
|102|    530|   Value2|
|102|   2020|  Value40|
+---+-------+---------+

or use the stack function as shown in Solution 1.
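
If you want the same row-by-row idea from Python, a rough PySpark sketch (assuming the df built in Solution 1) could look like this:

# Flatten each Row into (ID, value, column-name) triples via the RDD API
value_cols = [c for c in df.columns if c != 'ID']
df2 = df.rdd.flatMap(
    lambda row: [(row['ID'], row[c], c) for c in value_cols]
).toDF(["ID", "ValueVv", "ValueDesc"])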

Solution 3:[3]

I've done this so far to unpivot with pandas, but I want to make it happen without pandas, using a Spark DataFrame only.

import pandas as pd

def main():
    data = {'AnID': [2001, 2002, 2003, 2004],
            'Name': ['adam', 'jane', 'Sarah', 'Ryan'],
            'Age': [23, 22, 21, 24],
            'Age1': [24, 52, 51, 264],
            'Age2': [263, 262, 261, 264]}

    df = pd.DataFrame(data)

    # Iterate over the columns so we can pivot the "columns" into rows
    schema = df.columns  # the column names

    # For every column, emit one (column name, value) row per cell;
    # collecting dicts and building the frame once avoids the deprecated
    # DataFrame.append-in-a-loop pattern
    rows = []
    for col in schema:
        for val in df[col].values:
            rows.append({'DemoDesc': col, 'DemoID': val})

    df2 = pd.DataFrame(rows)
    print(df2)

main()
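
Since the goal is to do this with a Spark DataFrame only, the stack approach from Solution 1 carries over directly. A minimal sketch, assuming the same data and that AnID and Name are kept as identifier columns:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame(
    [(2001, 'adam', 23, 24, 263),
     (2002, 'jane', 22, 52, 262),
     (2003, 'Sarah', 21, 51, 261),
     (2004, 'Ryan', 24, 264, 264)],
    ['AnID', 'Name', 'Age', 'Age1', 'Age2'])

# Each (value, 'name') pair becomes one (DemoID, DemoDesc) row
sdf_long = sdf.select(
    "AnID", "Name",
    F.expr("stack(3, Age, 'Age', Age1, 'Age1', Age2, 'Age2') as (DemoID, DemoDesc)")
)
sdf_long.show()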

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: (no author listed)
Solution 2: Lamanus
Solution 3: ZygD