No mechanism to enforce schema in to_parquet #30189

Closed
@ieaves

Description

Code Sample, a copy-pastable example if possible

import pandas as pd
import dask.dataframe as dd

df1 = pd.DataFrame({'col': pd.Series([True, False], dtype='object')})
df2 = pd.DataFrame({'col': pd.Series([None, None], dtype='object')})

df1.to_parquet('file1', engine='pyarrow')
df2.to_parquet('file2', engine='pyarrow')

out_df = dd.read_parquet(['file1', 'file2'], engine='pyarrow').compute()

Problem description

When writing parquet files with pandas there is no mechanism to enforce a schema on the output file for the pyarrow engine; without this ability, users are forced to fall back on pyarrow's inferred schema, which isn't always correct. This poses particular challenges when writing multiple files in sequence whose partitions contain some, or exclusively, missing values.

In the example above, file1 is written with schema bool while file2 is written with schema null. When attempting to load both partitions with something like dask, this generates errors resulting from the schema mismatch.
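
The mismatch is visible in the file footers themselves. A minimal sketch of inspecting them with pyarrow, assuming the two files from the example above exist in the working directory:

import pyarrow.parquet as pq

# Read only the schema stored in each file's footer,
# without loading any row data.
print(pq.read_schema('file1'))  # col: bool
print(pq.read_schema('file2'))  # col: null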

pyarrow exposes the functionality needed to solve this problem through the schema argument to pa.Table.from_pandas, but all additional keyword arguments to to_parquet in pandas are passed on to pa.parquet.write_table instead, as the relevant section of pandas' pyarrow engine shows:

        if index is None:
            from_pandas_kwargs = {}
        else:
            from_pandas_kwargs = {"preserve_index": index}
        table = self.api.Table.from_pandas(df, **from_pandas_kwargs)
        if partition_cols is not None:
            self.api.parquet.write_to_dataset(
                table,
                path,
                compression=compression,
                coerce_timestamps=coerce_timestamps,
                partition_cols=partition_cols,
                **kwargs,
            )
        else:
            self.api.parquet.write_table(
                table,
                path,
                compression=compression,
                coerce_timestamps=coerce_timestamps,
                **kwargs,
            )

This could be solved by adding a schema argument to to_parquet which is then added to from_pandas_kwargs before the invocation of self.api.Table.from_pandas(df, **from_pandas_kwargs).
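
A hypothetical sketch of that change inside the engine code quoted above (the schema keyword is the proposed addition, not an existing parameter):

        if index is None:
            from_pandas_kwargs = {}
        else:
            from_pandas_kwargs = {"preserve_index": index}
        if schema is not None:
            # Proposed: thread the user-supplied schema through to pyarrow
            # instead of relying on type inference.
            from_pandas_kwargs["schema"] = schema
        table = self.api.Table.from_pandas(df, **from_pandas_kwargs)

Until something like this lands, the same result can be achieved by calling pyarrow directly. A minimal workaround sketch, assuming the df1/df2 from the example above; the explicit pa.bool_() type is precisely what pyarrow cannot infer from an all-null partition:

import pyarrow as pa
import pyarrow.parquet as pq

# Declare the intended schema up front instead of relying on inference.
schema = pa.schema([('col', pa.bool_())])

for df, path in [(df1, 'file1'), (df2, 'file2')]:
    # from_pandas accepts an explicit schema, which is the hook the
    # proposal above would expose through to_parquet.
    table = pa.Table.from_pandas(df, schema=schema, preserve_index=False)
    pq.write_table(table, path)

With both files carrying the bool schema, the dd.read_parquet call above should concatenate the partitions without error.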

Expected Output

     col
0   True
1  False
0   None
1   None

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.7.4.final.0
python-bits : 64
OS : Darwin
OS-release : 18.7.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 0.25.3
numpy : 1.17.2
pytz : 2019.3
dateutil : 2.8.0
pip : 19.2.3
setuptools : 41.4.0
Cython : 0.29.13
pytest : 5.2.1
hypothesis : None
sphinx : 2.2.0
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : 2.8.3 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : 7.8.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : 0.3.2
gcsfs : None
lxml.etree : None
matplotlib : 3.1.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
s3fs : None
scipy : 1.3.1
sqlalchemy : 1.3.9
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
