Convert large CSV to HDF5

I have a 100-million-line CSV file (actually many separate CSV files) totaling 84 GB. I need to convert it to an HDF5 file with a single float dataset. I used h5py in testing without any problems, but now I can't build the final dataset without running out of memory.

How can I write to HDF5 without having to store the whole dataset in memory? I'm expecting actual code here, because it should be quite simple.

I was just looking into PyTables, but it doesn't look like the array class (which corresponds to an HDF5 dataset) can be written to iteratively. Similarly, pandas has read_csv and to_hdf methods in its IO tools, but I can't load the whole dataset at once, so that won't work. Perhaps you can help me solve the problem correctly with other tools in PyTables or pandas.



Solution 1:[1]

Use append=True in the call to to_hdf:

import numpy as np
import pandas as pd

filename = '/tmp/test.h5'

df = pd.DataFrame(np.arange(10).reshape((5,2)), columns=['A', 'B'])
print(df)
#    A  B
# 0  0  1
# 1  2  3
# 2  4  5
# 3  6  7
# 4  8  9

# Save to HDF5
df.to_hdf(filename, 'data', mode='w', format='table')
del df    # allow df to be garbage collected

# Append more data
df2 = pd.DataFrame(np.arange(10).reshape((5,2))*10, columns=['A', 'B'])
df2.to_hdf(filename, 'data', append=True)

print(pd.read_hdf(filename, 'data'))

yields

    A   B
0   0   1
1   2   3
2   4   5
3   6   7
4   8   9
0   0  10
1  20  30
2  40  50
3  60  70
4  80  90

Note that you need to use format='table' in the first call to df.to_hdf to make the table appendable. Otherwise, the format is 'fixed' by default, which is faster for reading and writing but creates a table which cannot be appended to.
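
To make the difference concrete, here is a minimal sketch (not from the original answer) that writes a fixed-format file and then tries to append to it; in the pandas versions I'm aware of this raises a ValueError, though the exact exception text may differ:

import numpy as np
import pandas as pd

filename = '/tmp/fixed_demo.h5'
df = pd.DataFrame(np.arange(10).reshape((5, 2)), columns=['A', 'B'])

# Default format is 'fixed'
df.to_hdf(filename, key='data', mode='w')

try:
    # Appending to a fixed-format node is expected to fail
    df.to_hdf(filename, key='data', append=True)
except ValueError as e:
    print("append failed:", e)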

Thus, you can process the CSV files one at a time, using append=True to build up the HDF5 file incrementally (see the sketch below). After each file, overwrite the DataFrame or use del df so the old DataFrame can be garbage collected.
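
For example, a per-file loop along these lines should work; the glob pattern, the output path, and the assumption that every column is a float are placeholders to adapt to your data:

import glob
import pandas as pd

out_path = '/tmp/combined.h5'

for i, csv_path in enumerate(sorted(glob.glob('/data/csvs/*.csv'))):
    df = pd.read_csv(csv_path, dtype=float)    # one CSV at a time, all-float columns assumed
    df.to_hdf(out_path,
              key='data',
              mode='w' if i == 0 else 'a',     # create the file once, then append
              format='table',
              append=(i > 0))
    del df                                     # free memory before the next file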


Alternatively, instead of calling df.to_hdf, you could append to an HDFStore:

import numpy as np
import pandas as pd

filename = '/tmp/test.h5'
store = pd.HDFStore(filename)

for i in range(2):
    df = pd.DataFrame(np.arange(10).reshape((5,2)) * 10**i, columns=['A', 'B'])
    store.append('data', df)

store.close()

store = pd.HDFStore(filename)
data = store['data']
print(data)
store.close()

yields

    A   B
0   0   1
1   2   3
2   4   5
3   6   7
4   8   9
0   0  10
1  20  30
2  40  50
3  60  70
4  80  90
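
The HDFStore variant combines naturally with chunked CSV reading, which avoids ever holding a full file in memory; a sketch, with the glob pattern, the chunk size, and float-only columns as assumptions:

import glob
import pandas as pd

with pd.HDFStore('/tmp/combined.h5', mode='w') as store:
    for csv_path in sorted(glob.glob('/data/csvs/*.csv')):
        # read each CSV in pieces and append piece by piece (table format)
        for chunk in pd.read_csv(csv_path, chunksize=1_000_000, dtype=float):
            store.append('data', chunk)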

Solution 2:[2]

This should be possible with PyTables. You'll need to use the EArray class though.

As an example, the following is a script I wrote to import chunked training data stored as .npy files into a single .h5 file.

import os

import numpy
import tables

training_data = tables.open_file('nn_training.h5', mode='w')
atom = tables.Float64Atom()
bl_filter = tables.Filters(5, 'blosc')   # fast compressor at a moderate setting

# The 0 in each shape marks the dimension that grows with every append
training_input = training_data.create_earray(training_data.root, 'X', atom,
                                             (0, 1323), 'Training Input',
                                             bl_filter, 4000000)
training_output = training_data.create_earray(training_data.root, 'Y', atom,
                                              (0, 27), 'Training Output',
                                              bl_filter, 4000000)

for filename in os.listdir('input'):
    print("loading {}...".format(filename))
    arr = numpy.load(os.path.join('input', filename))
    print("writing to h5")
    training_input.append(arr)

for filename in os.listdir('output'):
    print("loading {}...".format(filename))
    training_output.append(numpy.load(os.path.join('output', filename)))

training_data.close()

Take a look at the docs for detailed instructions, but very briefly, the create_earray function takes 1) a data root or parent node; 2) an array name; 3) a datatype atom; 4) a shape with a 0 in the dimension you want to expand; 5) a verbose descriptor; 6) a compression filter; and 7) an expected number of rows along the expandable dimension. Only the first two are required, but you'll probably use all seven in practice. The function accepts a few other optional arguments as well; again, see the docs for details.

Once the array is created, you can use its append method in the expected way.
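
Applied to the original CSV-to-float-dataset problem, a hedged sketch might look like the following; the number of columns, the file pattern, the absence of a header row, and the chunk size are all assumptions to adjust:

import glob

import pandas as pd
import tables

n_cols = 10                               # hypothetical: number of columns per CSV row
h5 = tables.open_file('combined.h5', mode='w')
atom = tables.Float64Atom()
filters = tables.Filters(complevel=5, complib='blosc')

# 0 marks the dimension that grows with each append
earray = h5.create_earray(h5.root, 'data', atom, (0, n_cols),
                          'CSV data', filters=filters)

for csv_path in sorted(glob.glob('*.csv')):
    # read each CSV in chunks so only one chunk is in memory at a time
    for chunk in pd.read_csv(csv_path, header=None, dtype='float64',
                             chunksize=1_000_000):
        earray.append(chunk.to_numpy())

h5.close()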

Solution 3:[3]

If you have a very large single CSV file, you may want to stream the conversion to hdf, e.g.:

import pandas as pd
from IPython.display import clear_output   # only needed when running in a notebook

CHUNK_SIZE = 5000000

filename = 'data.csv'
dtypes = {'latitude': float, 'longitude': float}

# chunksize makes read_csv return an iterator of DataFrames with CHUNK_SIZE rows each
iter_csv = pd.read_csv(
    filename, iterator=True,
    dtype=dtypes, encoding='utf-8', chunksize=CHUNK_SIZE)

cnt = 0
for chunk in iter_csv:
    # append each chunk to the same table in the HDF5 file
    chunk.to_hdf(
        "data.hdf", 'data', format='table', append=True)
    cnt += len(chunk)
    clear_output(wait=True)
    print(f"Processed {cnt:,.0f} coordinates..")

Tested with a 64 GB CSV file and 450 million coordinates (the conversion took about 10 minutes).
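
If you want to sanity-check the result without loading it all back into memory, a table-format file can also be read in chunks; a small sketch reusing the file and key names from the example above:

import pandas as pd

total_rows = 0
for chunk in pd.read_hdf('data.hdf', 'data', chunksize=1_000_000):
    total_rows += len(chunk)
print(f"{total_rows:,} rows in data.hdf")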

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1:
Solution 2:
Solution 3: Alex