Simultaneously melt multiple columns in Python Pandas

I am wondering whether pd.melt supports melting multiple columns. In the example below I try to pass value_vars as a list of lists, but I get an error:

ValueError: Location based indexing can only have [labels (MUST BE IN THE INDEX), slices of labels (BOTH endpoints included! Can be slices of integers if the index is integers), listlike of labels, boolean] types

Using pandas 0.23.1.

df = pd.DataFrame({'City': ['Houston', 'Austin', 'Hoover'],
                   'State': ['Texas', 'Texas', 'Alabama'],
                   'Name':['Aria', 'Penelope', 'Niko'],
                   'Mango':[4, 10, 90],
                   'Orange': [10, 8, 14], 
                   'Watermelon':[40, 99, 43],
                   'Gin':[16, 200, 34],
                   'Vodka':[20, 33, 18]},
                 columns=['City', 'State', 'Name', 'Mango', 'Orange', 'Watermelon', 'Gin', 'Vodka'])

Desired output:

      City    State       Fruit  Pounds  Drink  Ounces
0  Houston    Texas       Mango       4    Gin    16.0
1   Austin    Texas       Mango      10    Gin   200.0
2   Hoover  Alabama       Mango      90    Gin    34.0
3  Houston    Texas      Orange      10  Vodka    20.0
4   Austin    Texas      Orange       8  Vodka    33.0
5   Hoover  Alabama      Orange      14  Vodka    18.0
6  Houston    Texas  Watermelon      40    NaN     NaN
7   Austin    Texas  Watermelon      99    NaN     NaN
8   Hoover  Alabama  Watermelon      43    NaN     NaN

This is what I tried, and it raises the aforementioned error:

df.melt(id_vars=['City', 'State'],
        value_vars=[['Mango', 'Orange', 'Watermelon'], ['Gin', 'Vodka']],
        var_name=['Fruit', 'Drink'],
        value_name=['Pounds', 'Ounces'])


Solution 1:[1]

Use a separate melt for each category and then concat; because the (City, State) pairs contain duplicated values, add a cumcount level so each MultiIndex triple is unique:

df1 = df.melt(id_vars=['City', 'State'], 
              value_vars=['Mango', 'Orange', 'Watermelon'],
              var_name='Fruit', value_name='Pounds')
df2 = df.melt(id_vars=['City', 'State'], 
              value_vars=['Gin', 'Vodka'], 
              var_name='Drink', value_name='Ounces')

# a per-(City, State) counter makes each MultiIndex triple unique, so concat can align the rows
df1 = df1.set_index(['City', 'State', df1.groupby(['City', 'State']).cumcount()])
df2 = df2.set_index(['City', 'State', df2.groupby(['City', 'State']).cumcount()])


df3 = (pd.concat([df1, df2],axis=1)
         .sort_index(level=2)
         .reset_index(level=2, drop=True)
         .reset_index())
print (df3)
      City    State       Fruit  Pounds  Drink  Ounces
0   Austin    Texas       Mango      10    Gin   200.0
1   Hoover  Alabama       Mango      90    Gin    34.0
2  Houston    Texas       Mango       4    Gin    16.0
3   Austin    Texas      Orange       8  Vodka    33.0
4   Hoover  Alabama      Orange      14  Vodka    18.0
5  Houston    Texas      Orange      10  Vodka    20.0
6   Austin    Texas  Watermelon      99    NaN     NaN
7   Hoover  Alabama  Watermelon      43    NaN     NaN
8  Houston    Texas  Watermelon      40    NaN     NaN
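
To see why the cumcount level matters, here is a minimal sketch (assuming the df from the question): within each (City, State) group the counter runs 0, 1, 2, so row k of the fruit melt lines up with row k of the drink melt, and the third fruit row, which has no drink partner, ends up with NaN:

fruit = df.melt(id_vars=['City', 'State'],
                value_vars=['Mango', 'Orange', 'Watermelon'],
                var_name='Fruit', value_name='Pounds')
# the counter runs 0 (Mango), 1 (Orange), 2 (Watermelon) within each (City, State) group
print(fruit.assign(order=fruit.groupby(['City', 'State']).cumcount()))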

Solution 2:[2]

I ran into the same problem again today, found this question, saw that I already upvoted it, and thought it could be nice to make it a thing since it is a recurring problem for me.

As a result, I wrote a multi_melt function that uses the approach jezrael proposed, but accepts iterable inputs (the syntax Martin Petrov tried). Note that this version "broadcasts" scalar inputs:

from itertools import cycle
import pandas as pd


def is_scalar(obj):
    if isinstance(obj, str):
        return True
    elif hasattr(obj, "__iter__"):
        return False
    else:
        return True


def multi_melt(
    df: pd.DataFrame,
    id_vars=None,
    value_vars=None,
    var_name=None,
    value_name="value",
    col_level=None,
    ignore_index=True,
) -> pd.DataFrame:

    # Note: we don't broadcast value_vars ... that would seem unintuitive
    value_vars = value_vars if not is_scalar(value_vars[0]) else [value_vars]
    var_name = var_name if not is_scalar(var_name) else cycle([var_name])
    value_name = value_name if not is_scalar(value_name) else cycle([value_name])

    melted_dfs = [
        (
            df.melt(
                id_vars,
                *melt_args,  # (value_vars_i, var_name_i, value_name_i), in positional order
                col_level,
                ignore_index,
            ).pipe(lambda df: df.set_index([*id_vars, df.groupby(id_vars).cumcount()]))
        )
        for melt_args in zip(value_vars, var_name, value_name)
    ]

    return (
        pd.concat(melted_dfs, axis=1)
        .sort_index(level=2)
        .reset_index(level=2, drop=True)
        .reset_index()
    )

Since it's not part of the pandas API, you'll have to pipe it, but otherwise it should work like a normal melt that accepts iterables:

Example:

df = pd.DataFrame(
    {
        "City": ["Houston", "Austin", "Hoover"],
        "State": ["Texas", "Texas", "Alabama"],
        "Name": ["Aria", "Penelope", "Niko"],
        "Mango": [4, 10, 90],
        "Orange": [10, 8, 14],
        "Watermelon": [40, 99, 43],
        "Gin": [16, 200, 34],
        "Vodka": [20, 33, 18],
    },
    columns=["City", "State", "Name", "Mango", "Orange", "Watermelon", "Gin", "Vodka"],
)


df.pipe(
    multi_melt,
    id_vars=["City", "State"],
    value_vars=[["Mango", "Orange", "Watermelon"], ["Gin", "Vodka"]],
    var_name=["Fruit", "Drink"],
    value_name=["Pounds", "Ounces"],
)

Result:

      City    State       Fruit  Pounds  Drink  Ounces
0   Austin    Texas       Mango      10    Gin   200.0
1   Hoover  Alabama       Mango      90    Gin    34.0
2  Houston    Texas       Mango       4    Gin    16.0
3   Austin    Texas      Orange       8  Vodka    33.0
4   Hoover  Alabama      Orange      14  Vodka    18.0
5  Houston    Texas      Orange      10  Vodka    20.0
6   Austin    Texas  Watermelon      99    NaN     NaN
7   Hoover  Alabama  Watermelon      43    NaN     NaN
8  Houston    Texas  Watermelon      40    NaN     NaN

Single Melt:

df.pipe(
    multi_melt,
    id_vars=["City", "State"],
    value_vars=["Mango", "Orange", "Watermelon"],
    var_name="Fruit",
    value_name="Pounds",
)
      City    State       Fruit  Pounds
0   Austin    Texas       Mango      10
1   Hoover  Alabama       Mango      90
2  Houston    Texas       Mango       4
3   Austin    Texas      Orange       8
4   Hoover  Alabama      Orange      14
5  Houston    Texas      Orange      10
6   Austin    Texas  Watermelon      99
7   Hoover  Alabama  Watermelon      43
8  Houston    Texas  Watermelon      40

Solution 3:[3]

One option is with pivot_longer from pyjanitor, using a list of regular expressions; it relies on the existing order of the columns, pairing the matched groups positionally (Mango with Gin, Orange with Vodka, and Watermelon left unpaired):

# pip install pyjanitor
import pandas as pd
import janitor

df.pivot_longer(
    index=["City", "State"],
    column_names=slice("Mango", "Vodka"),
    names_to=("Fruit", "Drink"),
    values_to=("Pounds", "Ounces"),
    names_pattern=[r"M|O|W", r"G|V"],
)

      City    State       Fruit  Pounds  Drink  Ounces
0  Houston    Texas       Mango       4    Gin    16.0
1   Austin    Texas       Mango      10    Gin   200.0
2   Hoover  Alabama       Mango      90    Gin    34.0
3  Houston    Texas      Orange      10  Vodka    20.0
4   Austin    Texas      Orange       8  Vodka    33.0
5   Hoover  Alabama      Orange      14  Vodka    18.0
6  Houston    Texas  Watermelon      40    NaN     NaN
7   Austin    Texas  Watermelon      99    NaN     NaN
8   Hoover  Alabama  Watermelon      43    NaN     NaN
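
The single-letter patterns are terse but fragile; a variant of the same call (not from the original answer) spells out the column names, which makes the two groups explicit:

# same pivot_longer call as above, only the regexes differ (illustrative variant):
# the first pattern collects the fruit columns, the second the drink columns
df.pivot_longer(
    index=["City", "State"],
    column_names=slice("Mango", "Vodka"),
    names_to=("Fruit", "Drink"),
    values_to=("Pounds", "Ounces"),
    names_pattern=[r"Mango|Orange|Watermelon", r"Gin|Vodka"],
)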

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1:
Solution 2: FirefoxMetzger
Solution 3: