
BUG: json_normalize does not parse nested lists consistently #53126

Open
@marickmanrho

Description


Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd

nested_array_data = {
    "B": {"A": [[1,2],[3,4]]}
}

# No record path
df1 = pd.json_normalize(nested_array_data)
print('df1:\n', df1.head())

# Partial record path
try:
    df2 = pd.json_normalize(nested_array_data, record_path=["B"])
except TypeError as e:
    print('df2:\n', f"TypeError: {e}")

# Full record path
df3 = pd.json_normalize(nested_array_data, record_path=['B', 'A'])
print('df3:\n', df3.head())

Issue Description

Depending on the record_path supplied to json_normalize, you get different results. This is unexpected because, in this example, the record_path should only change the column names, not the rows themselves. Furthermore, one of the cases raises an error that, based on the documentation, should not occur.

From the example above, this is the output:

# No record path
df1 = pd.json_normalize(nested_array_data)
                B.A
0  [[1, 2], [3, 4]]

# Partial record path
df2 = pd.json_normalize(nested_array_data, record_path=["B"])
TypeError: {'B': {'A': [[1, 2], [3, 4]]}} has non list value {'A': [[1, 2], [3, 4]]} for path B. Must be list or null.

# Full record path
df3 = pd.json_normalize(nested_array_data, record_path=['B', 'A'])
   0  1
0  1  2
1  3  4
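
As a workaround on current pandas, wrapping the non-list value in a list avoids the TypeError. A minimal sketch, assuming the caller can pre-process the input before calling json_normalize:

import pandas as pd

nested_array_data = {"B": {"A": [[1, 2], [3, 4]]}}

# Wrap the dict under "B" in a list so that record_path=["B"] sees a list of
# records, which is what json_normalize currently requires.
wrapped = {"B": [nested_array_data["B"]]}
df2_workaround = pd.json_normalize(wrapped, record_path=["B"])
print(df2_workaround)
#                   A
# 0  [[1, 2], [3, 4]]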

Expected Behavior

As far as I can tell, there are two options for expected behavior.

  1. Always expand lists when they are encountered:

     pd.json_normalize(nested_array_data, record_path=['B'])
        A.0.0  A.0.1  A.1.0  A.1.1
     0      1      2      3      4

  2. Lists are not expanded when encountered:

     pd.json_normalize(nested_array_data, record_path=['B'])
                  A
     0  [[1,2],[3,4]]

A toggle could be implemented to switch between the two behaviors, as proposed in #42311. One could also implement more fine-grained control over list expansion, as proposed in #27241. In my opinion, it is best not to expand list data by default, as this leads to the fewest complications for most input data.
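
For illustration, option 1 can be approximated today with a small pre-processing helper. This is only a sketch, and the expand_lists function below is hypothetical, not part of the pandas API:

import pandas as pd

def expand_lists(obj):
    # Recursively convert lists into {index: value} dicts so that
    # json_normalize flattens them into dotted column names.
    if isinstance(obj, list):
        obj = {str(i): v for i, v in enumerate(obj)}
    if isinstance(obj, dict):
        return {k: expand_lists(v) for k, v in obj.items()}
    return obj

nested_array_data = {"B": {"A": [[1, 2], [3, 4]]}}

df = pd.json_normalize(expand_lists(nested_array_data["B"]))
print(df)
#    A.0.0  A.0.1  A.1.0  A.1.1
# 0      1      2      3      4

Because the helper only turns lists into index-keyed dicts, the dotted A.0.0-style column names come from the existing dict flattening in json_normalize itself.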

Installed Versions

INSTALLED VERSIONS

commit : 8dab54d
python : 3.10.8.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Dutch_Netherlands.1252

pandas : 1.5.2
numpy : 1.23.4
pytz : 2022.1
dateutil : 2.8.2
setuptools : 65.5.0
pip : 22.3.1
Cython : None
pytest : 7.1.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.13.2
pandas_datareader: None
bs4 : None
bottleneck : 1.3.5
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.6.2
numba : None
numexpr : 2.8.4
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.9.3
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None


    Labels

    Bug, IO JSON (read_json, to_json, json_normalize)
