Description
Code Sample, a copy-pastable example if possible
import pandas as pd
import numpy as np

test = pd.DataFrame(
    {
        'col1': pd.Series(pd.Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"])),
        'col2': np.random.randint(0, 100, size=4),
        'col3': [12, 32, 23, 22],
    }
)
test['col2'] = pd.cut(test['col2'], [0, 50, 100])

# Group by two variables
print(test.groupby(['col1', 'col2']).sum())

# Group by a single variable
print(test.groupby(['col1']).sum())
This gives the result:
                col3
col1 col2
a    (0, 50]    22.0
     (50, 100]  12.0
b    (0, 50]     NaN
     (50, 100]  32.0
c    (0, 50]    23.0
     (50, 100]   NaN
d    (0, 50]     NaN
     (50, 100]   NaN

      col3
col1
a       34
b       32
c       23
d        0
Problem description
Clearly there is no record for category 'd'. When grouping by a single column, category 'd' results in a sum of 0, but when grouping by two columns, the non-existent combinations result in NaN. These two behaviours are inconsistent.
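For comparison (not proposed as the fix): passing observed=True to groupby drops unobserved category combinations from the result, which at least makes the single- and two-column outputs consistent with each other. A minimal sketch, reusing the test frame from the example above:

# With observed=True, unobserved categories such as 'd' (and empty
# combinations like ('d', (0, 50])) are dropped from both results.
print(test.groupby(['col1', 'col2'], observed=True).sum())
print(test.groupby(['col1'], observed=True).sum())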
Expected Output
Both results for the 'd' rows should be the same; I think NaN makes more sense.
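As a sketch of one way to get the NaN behaviour today (a workaround, not the requested change): passing min_count=1 to the groupby sum makes groups with no valid values return NaN instead of 0, so the empty 'd' group in the single-column case matches the two-column case.

# min_count=1: sum() returns NaN for groups with fewer than one valid
# value, so the empty 'd' group becomes NaN rather than 0.
print(test.groupby(['col1']).sum(min_count=1))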
Output of pd.show_versions()
INSTALLED VERSIONS
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Darwin
OS-release : 19.2.0
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.3
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 44.0.0.post20200102
Cython : None
pytest : 5.3.2
hypothesis : None
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : 3.1.2
numexpr : 2.7.1
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 0.15.1
pytables : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : 3.6.1
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
Cross-reference: dask/dask#5838