## Description
### Is your feature request related to a problem?
While working on #38745, I was expecting this to be allowed:
```python
import pandas as pd

result = pd.concat(
    [
        pd.Series(0, index=[0, 0, 1, 2]),
        pd.Series(1, index=[1, 2]),
    ],
    axis=1,  # the "other" axis (the row index) is intersected
    join="inner",
)
expected = pd.DataFrame({0: [0, 0], 1: [1, 1]}, index=[1, 2])
pd.testing.assert_frame_equal(result, expected)
```
The intersection of those indices is well defined, so the result is unambiguous. However, it turns out this does not work (`pd.concat` errors), and has not worked in recent releases (checked against v1.1.5). This is related to allowing `.get_indexer` when the result is uniquely specified (#38773), but I'll go into that more below.
### Describe the solution you'd like
I would like this to work.
I think this would also require these to work (they currently error):
```python
import numpy as np

pd.Series(np.arange(4), index=[0, 0, 1, 2]).reindex([1, 2])
pd.Index([0, 0, 1, 2]).get_indexer([1, 2])
```
This could coincide with better error messages when the results are ambiguous, e.g. a `DuplicateLabelError` that explicitly names the offending duplicates.
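
For reference, today the ambiguous and unambiguous cases fail identically, with a generic message that does not name the offending labels (the exact exception type and wording vary by version):

```python
import pandas as pd

idx = pd.Index([0, 0, 1, 2])
try:
    idx.get_indexer([1, 2])  # unambiguous, but still raises today
except Exception as err:
    # e.g. "Reindexing only valid with uniquely valued Index objects"
    print(type(err).__name__, err)
```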
### API breaking implications
I think this is a subtle change to the API, which should not break much. It will mostly turn some errors into working cases, and those cases have unambiguous results.
I'd divide the behaviour changes into indexing and API functions. These indexing functions would change:
- `Index.get_indexer`
- `obj.reindex`
- `obj.reindex_like`
These API functions would change, though any function that relies on taking the intersection of indices could be affected:
- `pd.concat(..., join="inner")`
- `pd.crosstab`
### Describe alternatives you've considered
- There could be no change, and this could continue to error (it errored in v1.1.5 and still does). Maybe it's better to just encourage unique indices in all cases, but sometimes duplicates do happen.
- There could be a more limited change, e.g. only changing `pd.concat` so that it handles the intersection of the indices itself.
### Additional context
The exact behaviour of combining indices with set operations seems to have a long history here, so I can't provide full context. Below I go into some of the history I've come across while looking into this issue.
join="inner"
(Specific to concat
)
It is a bit strange that `concat(..., join="inner")` takes the intersection of the indices, when a true inner join would behave differently. It's documented that `concat` uses set operations on the "other axis", so I don't think this is a huge issue, but the value `"inner"` does suggest different behaviour, where duplicates would always be allowed.

I think this has been a source of confusion about `concat`'s handling of duplicates (#36263). With set operations, there's no way to handle duplicates (unless we take some other definition of set (#6963 (comment))). With joins, you'll be taking the cross product of the matching rows (which is often undesirable, #36290), as the example below shows.
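
To make the distinction concrete, here is a small comparison (my own illustration, not from the discussion above) of how a join and a set intersection each treat duplicated keys:

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 1, 2], "a": [0, 1, 2]})
right = pd.DataFrame({"key": [1, 1], "b": [3, 4]})

# An inner *join* on a duplicated key takes the cross product of the
# matching rows: two key=1 rows on each side produce four result rows.
print(pd.merge(left, right, on="key", how="inner"))

# A set *intersection* of the indices cannot represent duplicates at all.
print(pd.Index([1, 1, 2]).intersection(pd.Index([1, 1])))
```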
#### Performance
There should only be a performance penalty when:

- There are duplicated indices, so we need to check whether they would be in the result.
  - This check may happen multiple times. In `concat` it would happen once when intersecting the indices, and again in `get_indexer`. Is this so bad? Alternatively, there could be an "unsafe" `get_indexer`/`reindex` where the user promises the passed index has a unique solution, and no check occurs.
- Uniqueness of the indexes is checked, though those checks are cached (see the short illustration after this list). No check would be necessary when `allows_duplicate_labels=False`.
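
As a short illustration of the cached check (again my own, for illustration only):

```python
import pandas as pd

idx = pd.Index([0, 0, 1, 2])

# Index.is_unique is a cached property: it is computed on first access
# and reused by later indexing operations on the same (immutable) Index.
print(idx.is_unique)     # False
print(idx.duplicated())  # [False  True False False]
```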
#### `.reindex`, `.get_indexer`
Ultimately, I think this is an issue of which operations are unambiguous when there are duplicate values in the indices. Because of this, there may need to be other behaviour changes to make this request consistent with the rest of the library and reasonable to implement. For instance:
```python
s1 = pd.Series(0, index=[0, 0, 1, 2])
s2 = pd.Series(1, index=[1, 2])
shared_indices = s1.index.intersection(s2.index)
s1.reindex(shared_indices)  # This currently errors
```
This currently errors, since the behaviour follows this logic (stated in #28257):
> ... `.reindex` should allow targets to be duplicated but not the sources. However, currently it allows the source to be duplicated but not the targets, which is exactly the wrong behaviour.
This is a bit similar to this statement (#8849):
> I propose that when a series has a duplicate index, the method `reindex` should always raise an exception, because when a series with a duplicate index is to be conformed to a new index, the intended behaviour is always ambiguous.
I think neither is quite right. `reindex`, `get_indexer`, etc. should allow any operation where the result is uniquely defined. If the source has duplicates and the target includes those duplicates, the result is ambiguous. If the source has duplicates but the target does not contain them, the result is uniquely specified.
```python
# ambiguous, should not be allowed: label 0 is duplicated in the source
pd.Index([0, 0, 1, 2]).get_indexer([0, 1])

# unambiguous, should be allowed: the requested labels are unique in the source
pd.Index([0, 0, 1, 2]).get_indexer([1, 2])
```
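
A minimal sketch of that rule, written against today's public API (`safe_get_indexer` is a hypothetical helper for illustration, not an existing or proposed pandas function):

```python
import numpy as np
import pandas as pd

def safe_get_indexer(source: pd.Index, target) -> np.ndarray:
    """Raise only when a *requested* label is duplicated in the source."""
    target = pd.Index(target)
    duplicated = source[source.duplicated()].unique()
    ambiguous = target.intersection(duplicated)
    if len(ambiguous):
        raise ValueError(f"cannot reindex on duplicated labels: {list(ambiguous)}")
    # Every requested label occurs at most once in the source, so the
    # non-unique code path returns exactly one position per target label.
    return source.get_indexer_for(target)

idx = pd.Index([0, 0, 1, 2])
print(safe_get_indexer(idx, [1, 2]))  # [2 3]
# safe_get_indexer(idx, [0, 1])       # raises: label 0 is duplicated
```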
#### Additional example with `pd.crosstab`

Current behaviour of `pd.crosstab`:
```python
import numpy as np

pd.crosstab(
    pd.Series(np.arange(4), index=[0, 1, 1, 2]),
    pd.Series(np.arange(2), index=[0, 2]),
)
# ValueError: cannot reindex from a duplicate axis

pd.crosstab(
    pd.Series(np.arange(4), index=[0, 1, 1, 2]),
    pd.Series(np.arange(2), index=[3, 4]),
)
# Empty DataFrame
# Columns: []
# Index: []
```
This proposal would also allow the first example to work, resulting in:
```
col_0  0  1
row_0
0      1  0
3      0  1
```
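
For context, that table comes from aligning the two series on the intersection of their indices, `[0, 2]`, which is unambiguous because neither shared label is duplicated. The same result can be produced today by selecting the shared labels explicitly (a workaround of my own, not part of the proposal):

```python
import numpy as np
import pandas as pd

rows = pd.Series(np.arange(4), index=[0, 1, 1, 2])
cols = pd.Series(np.arange(2), index=[0, 2])

# The shared labels [0, 2] are unique in both indices, so selecting
# them explicitly sidesteps the duplicate-axis error.
shared = cols.index.intersection(rows.index)
print(pd.crosstab(rows.loc[shared], cols.loc[shared]))
```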