
REF: combine masked_cummin_max, cummin_max #46142


Merged: 2 commits into pandas-dev:main, Feb 27, 2022

Conversation

jbrockmendel (Member):
More closely match the pattern we use for other mask-supporting functions in this file.
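For context, the refactor folds the masked and unmasked kernels into a single function that branches on a uses_mask flag: NA detection goes through the mask when one is supplied, and through sentinel values (NaN, NPY_NAT) otherwise. Below is a minimal Python sketch of that pattern, assuming NumPy float inputs; the function name and signature are illustrative, not the actual Cython kernel.

import numpy as np

def group_cummin_max_sketch(values, labels, mask=None, compute_max=True, skipna=True):
    # One kernel for both paths: NA detection branches on uses_mask
    # instead of living in a separate masked_* function.
    uses_mask = mask is not None
    op = max if compute_max else min
    init = -np.inf if compute_max else np.inf
    out = np.empty_like(values)
    accum = {}       # running min/max per group label
    seen_na = set()  # labels that have already produced an NA
    for i, (val, lab) in enumerate(zip(values, labels)):
        is_na = mask[i] if uses_mask else np.isnan(val)
        if is_na or (not skipna and lab in seen_na):
            seen_na.add(lab)
            out[i] = np.nan  # stand-in for na_val in the real kernel
            continue
        accum[lab] = op(accum.get(lab, init), val)
        out[i] = accum[lab]
    return out

With skipna=True an NA yields NaN at its own position but accumulation continues; with skipna=False every later value in the group becomes NA, which is the role of the seen_na check in the excerpts below.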

Diff excerpt:

if iu_64_floating_t is float64_t or iu_64_floating_t is float32_t:
    ...
else:
    # Will never be used, just to avoid uninitialized warning
    na_val = 0

if uses_mask:
    na_possible = True
A reviewer (Member) commented on the diff above:
Could just define na_val = 0 in the cdef above?
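The suggestion amounts to giving na_val a harmless default when it is declared (in Cython, roughly cdef iu_64_floating_t na_val = 0), so no branch exists purely to silence the uninitialized-variable warning. A hypothetical Python rendering of the idea; the helper name and parameters are made up for illustration:

def pick_na_val(is_float: bool, is_datetimelike: bool) -> float:
    # Default at declaration silences "may be used uninitialized";
    # it is overwritten only when an NA fill value is actually possible.
    na_val = 0
    if is_float:
        na_val = float("nan")
    elif is_datetimelike:
        na_val = -(2**63)  # stand-in for NPY_NAT
    return na_val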

mroeschke added the Refactor (Internal refactoring of code) label on Feb 25, 2022.
jreback added this to the 1.5 milestone on Feb 26, 2022.
jreback (Contributor) commented on Feb 26, 2022:

assume this is neutral or positive on perf?

jbrockmendel (Member, Author) replied: neutral

Diff excerpt:

if not skipna and na_possible and seen_na[lab, j]:
    out[i, j] = na_val
    if uses_mask:
        mask[i, j] = 1  # FIXME: shouldn't alter inplace
jbrockmendel (Member, Author) commented on the diff above:

A follow-up to fix this is in the works.
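One plausible shape for that follow-up: rather than flipping bits in the caller's mask, the kernel would record NA positions in a separate result mask and return it. A hypothetical sketch, assuming NumPy inputs; result_mask and the signature are illustrative, not the actual patch:

import numpy as np

def masked_cummax_sketch(values, labels, mask, skipna=True):
    out = np.empty_like(values)
    result_mask = mask.copy()  # never mutate the input mask in place
    accum = {}
    seen_na = set()
    for i, (val, lab) in enumerate(zip(values, labels)):
        if mask[i] or (not skipna and lab in seen_na):
            seen_na.add(lab)
            result_mask[i] = True  # replaces the in-place mask[i, j] = 1
            continue  # out[i] is left unset; result_mask covers it
        accum[lab] = max(accum.get(lab, -np.inf), val)
        out[i] = accum[lab]
    return out, result_mask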

jreback merged commit ffeb205 into pandas-dev:main on Feb 27, 2022.
jbrockmendel deleted the ref-masked_cummin_max branch on Feb 27, 2022.
yehoshuadimarsky pushed a commit to yehoshuadimarsky/pandas referencing this pull request on Jul 13, 2022.