BUG: pd.concat with identical key leads to multi-indexing error #46546
Conversation
cc @rhshadrach
@@ -705,7 +705,7 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiIndex
         names = [None]

     if levels is None:
-        levels = [ensure_index(keys)]
+        levels = [ensure_index(keys).unique()]
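For context, a minimal sketch reproducing the fixed behavior (assuming a pandas build that includes this change; the frames are illustrative, not taken from the PR's test suite):

```python
import pandas as pd

# Duplicated keys used to be passed straight through as a MultiIndex level,
# so the level itself contained duplicates and .loc on the result broke.
df1 = pd.DataFrame({"name": [1]})
df2 = pd.DataFrame({"name": [2]})
result = pd.concat([df1, df2, df1], keys=["x", "y", "x"])

# With ensure_index(keys).unique(), the outer level holds only unique keys.
print(result.index.levels[0].tolist())  # ['x', 'y']
result.loc["x"]  # selects both blocks keyed "x" instead of raising
```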
hmm shouldn't this be the case for a specified levels as well?
Can we check whether the level is unique before? If not, raise ValueError. The doc says it should be unique.
I find we actually do not have a check for duplicated levels in the concat function. Something like the following will not raise. Since this problem is an isolated one, I will make another PR to avoid confusion.
import pandas as pd

df1 = pd.DataFrame({"A": [1]}, index=["x"])
df2 = pd.DataFrame({"A": [1]}, index=["y"])
pd.concat([df1, df2], levels=[["x", "y", "y"]])  # should raise
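A minimal sketch of such a check (check_levels_unique is a hypothetical helper name for illustration, not pandas API):

```python
import pandas as pd

def check_levels_unique(levels):
    # Raise if any supplied level contains duplicates; the concat docs
    # say level values must be unique.
    for level in levels:
        idx = pd.Index(level)
        if not idx.is_unique:
            raise ValueError(f"Level values must be unique: {list(idx)}")

check_levels_unique([["x", "y"]])        # passes silently
# check_levels_unique([["x", "y", "y"]])  # would raise ValueError
```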
@GYHHAHA - Agreed what you are pointing out is a separate issue, but here is an example that @jreback was referring to.
import pandas as pd

df1 = pd.DataFrame({"x": [1, 2], "y": [3, 4], "z": [5, 6]}).set_index(["x", "y"])
df2 = pd.DataFrame({"x": [7, 8], "y": [9, 10], "z": [11, 12]}).set_index(["x", "y"])
result = pd.concat([df1, df2, df1], keys=["x", "y", "x"], levels=[["x", "y", "x"]])
print(result.loc['x', 1, 3])
This also raises the same error that is being addressed here.
Why do you loc with the strings '1' and '3' instead of numeric values? @rhshadrach
Yeah, now I get the error. I will look into this.
Sounds good! I believe you just need to apply your change to the else clause highlighted here (but I could be wrong).
But I believe this is caused by the duplicated levels input; if levels is [["x", "y"]], then it works fine. It may be more suitable to add this to another PR related to the unique levels keyword. @rhshadrach
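Indeed, a quick sketch with the same frames but a unique levels argument (behavior assumed from current pandas) no longer errors:

```python
import pandas as pd

df1 = pd.DataFrame({"x": [1, 2], "y": [3, 4], "z": [5, 6]}).set_index(["x", "y"])
df2 = pd.DataFrame({"x": [7, 8], "y": [9, 10], "z": [11, 12]}).set_index(["x", "y"])

# With unique level values, both the concat and the subsequent .loc work.
result = pd.concat([df1, df2, df1], keys=["x", "y", "x"], levels=[["x", "y"]])
print(result.loc[("x", 1, 3)])  # both rows labeled ("x", 1, 3)
```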
Ah - I see your point; the user should not ever specify a level with duplicate values and so we can raise here instead. That makes sense to separate this off into a different PR; can you see if there is an issue for this already and open one if there isn't?
It seems that such an issue doesn't exist yet. I'll open one and link a PR to it after refining the performance warning check for the current PR. Also, since we will raise for a duplicated level, the unique() call for the else clause is unnecessary.
    df2 = DataFrame({"name": [2]})
    df3 = DataFrame({"name": [3]})
    df_a = concat([df1, df2, df3], keys=["x", "y", "x"])
    with tm.assert_produces_warning(PerformanceWarning):
what is showing the performance warning?
It is "PerformanceWarning: indexing past lexsort depth may impact performance.", since the MultiIndex is unsorted.
Can you specify match="indexing past lexsort depth"? This not only makes the check more specific, but also lets anyone reading the test see your answer to @jreback's question.
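A sketch of the suggested change, with the frame setup repeated from the diff context above (the exact test body is assumed, not quoted from the PR):

```python
import pandas as pd
import pandas._testing as tm
from pandas.errors import PerformanceWarning

df1 = pd.DataFrame({"name": [1]})
df2 = pd.DataFrame({"name": [2]})
df3 = pd.DataFrame({"name": [3]})
df_a = pd.concat([df1, df2, df3], keys=["x", "y", "x"])

# match= pins down *which* PerformanceWarning the test expects.
with tm.assert_produces_warning(
    PerformanceWarning, match="indexing past lexsort depth"
):
    out = df_a.loc[("x", 0)]  # non-unique, unsorted MultiIndex -> warning
```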
sure, no problem
Small request in the test, and I agree the path @jreback pointed out should be fixed and tested here too.
ok i think this is good cc @rhshadrach
lgtm
Thanks @GYHHAHA
Linked issue: pd.concat([], keys=) with identical key has trouble with MultiIndex.loc[] (#46519)