Frequently Asked Questions

I have a massive CSV file that I cannot fit into memory all at once. How do I convert it to HDF5?

We are working to make this process an easy one-liner. In the meantime, consider this strategy: read the CSV file in chunks with pandas, and use vaex to export each chunk to disk as HDF5. Since all of the resulting HDF5 files have the same structure, vaex.open('part*') can open them all as a single DataFrame. For a small performance improvement, that DataFrame can then be exported to disk as a single large HDF5 file.

Consider the following code example:

import pandas as pd
import vaex

# pandas read_csv with chunksize yields pandas DataFrames, one per chunk
for i, chunk in enumerate(pd.read_csv('/path/to/data/BigData.csv', chunksize=100_000)):
    df_chunk = vaex.from_pandas(chunk, copy_index=False)
    export_path = f'/path/to/data/part_{i}.hdf5'
    df_chunk.export_hdf5(export_path)

# open all chunks as one DataFrame, then optionally merge into a single file
df = vaex.open('/path/to/data/part*')
df.export_hdf5('/path/to/data/Final.hdf5')
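The chunked-read pattern itself can be exercised without vaex. The sketch below uses only pandas on a small temporary file; the tiny CSV and its column names are made up for illustration:

```python
import os
import tempfile
import pandas as pd

# Build a small CSV to stand in for BigData.csv (purely illustrative data).
tmpdir = tempfile.mkdtemp()
csv_path = os.path.join(tmpdir, 'BigData.csv')
pd.DataFrame({'x': range(10), 'y': range(10, 20)}).to_csv(csv_path, index=False)

# Iterate the file in chunks, as in the conversion loop above; each `chunk`
# is a plain pandas DataFrame that vaex.from_pandas could consume.
chunk_sizes = [len(chunk) for chunk in pd.read_csv(csv_path, chunksize=4)]

print(chunk_sizes)       # chunks of at most 4 rows, covering all 10 rows
print(sum(chunk_sizes))  # total row count matches the original file
```

Only one chunk is held in memory at a time, which is what makes the conversion workable for files larger than RAM.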

Why can’t I open an HDF5 file that was exported from a pandas DataFrame via the .to_hdf method?

When one uses the pandas .to_hdf method, the output HDF5 file has a row-based format. Vaex, on the other hand, expects column-based HDF5 files. This allows for efficient reading of individual columns, which is much more commonly required for data science applications.

One can easily export a pandas DataFrame to a vaex-friendly HDF5 file:

import vaex

vaex_df = vaex.from_pandas(pandas_df, copy_index=False)
vaex_df.export_hdf5('my_data.hdf5')