
BUG: read_parquet converts pyarrow list type to numpy dtype #53011

Open

Description

@danielhanchen

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
import pyarrow as pa
import pyarrow.parquet  # needed so pa.parquet.read_table is available below
pyarrow_list_of_strings = pd.ArrowDtype(pa.list_(pa.string()))
data = pd.DataFrame({
    "Pyarrow" : pd.Series([["a"], ["a", "b"]], dtype = pyarrow_list_of_strings),
})
data.to_parquet("data.parquet") # SUCCESS
pd.read_parquet("data.parquet") # *** FAIL

data_object = pd.DataFrame({
    "Pyarrow" : pd.Series([["a"], ["a", "b"]], dtype = object),
})
data_object.to_parquet("data.parquet")
pyarrow_internal = pa.parquet.read_table("data.parquet") # SUCCESS with type list[string]
pyarrow_internal.to_pandas() # SUCCESS, but the column is converted to object dtype

pd.Series(pd.arrays.ArrowExtensionArray(pyarrow_internal["Pyarrow"])) # SUCCESS - data-type also correct!

Issue Description

Great work on extending Arrow support in pandas!
Saving a column with dtype pd.ArrowDtype(pa.list_(pa.string())) (or any similar nested Arrow type) to Parquet works, but reading the Parquet file back fails.

Likewise, if a pandas Series holds plain Python lists of strings, e.g. ["a"], ["a", "b"], Parquet stores the column internally as a list[string] type; when pandas reads the file back, the column is converted to an object dtype.

Is there a way, during the reading step, to either:

  1. Fall back to converting the column to an object dtype, as happens in the plain-list case, OR
  2. Use pd.Series(pd.arrays.ArrowExtensionArray(x)), which appears to work, when converting the internal PyArrow representation into pandas for columns that would otherwise fail (a sketch follows this list), OR
  3. Support these nested types directly?
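
For reference, here is a minimal sketch of option (2) as a user-side workaround, simply applying the pattern from the reproducible example to every column of the file; the column names are taken from the file itself, and this is not a proposed pandas implementation:

import pandas as pd
import pyarrow.parquet as pq

table = pq.read_table("data.parquet")
# Wrap each pyarrow column in an ArrowExtensionArray so the Arrow list type
# is preserved instead of being converted to a numpy object column.
frame = pd.DataFrame({
    name: pd.Series(pd.arrays.ArrowExtensionArray(table[name]))
    for name in table.column_names
})
print(frame.dtypes)  # expected: Pyarrow    list<item: string>[pyarrow]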

Expected Behavior

import pandas as pd
import pyarrow as pa
pyarrow_list_of_strings = pd.ArrowDtype(pa.list_(pa.string()))
data = pd.DataFrame({
    "Pyarrow" : pd.Series([["a"], ["a", "b"]], dtype = pyarrow_list_of_strings),
})
data.to_parquet("data.parquet") # SUCCESS
pd.read_parquet("data.parquet") # SUCCESS

Installed Versions

INSTALLED VERSIONS

commit : 37ea63d
python : 3.11.3.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 30 Stepping 5, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_Australia.1252

pandas : 2.0.1
numpy : 1.24.3
pytz : 2023.3
dateutil : 2.8.2
setuptools : 67.7.2
pip : 23.1.2
Cython : 0.29.34
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.12.0
pandas_datareader: None
bs4 : 4.12.2
bottleneck : None
brotli :
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.7.1
numba : 0.57.0rc1
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 11.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
zstandard : 0.21.0
tzdata : 2023.3
qtpy : None
pyqt5 : None

    Labels

    Arrow (pyarrow functionality), Bug, IO Parquet (parquet, feather)
