
Improve documentation of slice::sort_by_key and slice::sort_unstable_by_key #71132

Closed · opened by @MiSawa

Description

The current documentation for the slice::sort_by_key and slice::sort_unstable_by_key methods claims that, for a slice of n elements, these methods take O(m n log(m n)) time in the worst case, where the key function is O(m).

rust/src/liballoc/slice.rs

Lines 257 to 258 in d28a464

/// This sort is stable (i.e., does not reorder equal elements) and `O(m n log(m n))`
/// worst-case, where the key function is `O(m)`.
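
For context, a typical case where the key function costs O(m) is computing a derived value from each element, for example a lowercase copy of a string of length m. The snippet below is only an illustration of that setting; the names in it are mine, not from the documentation under discussion.

```rust
fn main() {
    // Sorting strings case-insensitively: the key function builds a lowercase
    // copy of each string, so a single key evaluation is O(m) in the string length.
    let mut words = vec!["Banana", "apple", "Cherry"];
    words.sort_by_key(|w| w.to_lowercase());
    assert_eq!(words, ["apple", "Banana", "Cherry"]);
}
```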

Although this is not wrong, the bound is not tight: as far as I understand, it can be replaced with O(m n log n).

Brief explanation, relying on the documentation of merge_sort (which slice::sort_by_key is built on) stating that its worst-case running time is O(n log n):
Because the number of comparisons merge_sort performs is bounded by that O(n log n) running time (counted with unit-cost comparisons), it calls is_less only O(n log n) times. Each call of is_less makes two calls to the key function, so the total time spent in the key function is O((2 n log n) * m) = O(m n log n).
Since the rest of merge_sort takes O(n log n), as its documentation states, the total running time is O(n log n + m n log n) = O(m n log n).
Exactly the same argument applies to sort_unstable_by_key and sort::quicksort.
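
As a quick sanity check, here is a minimal sketch (my own, not from the issue) that counts how many times sort_by_key invokes the key closure. It assumes, as the quoted documentation implies, that sort_by_key evaluates the key on every comparison rather than caching it (sort_by_cached_key is the variant that caches). The printed count should stay within a small constant factor of n * log2(n) as n grows, independent of how expensive the key itself is, which is exactly the O(n log n) factor that multiplies m in the tightened bound.

```rust
fn main() {
    for &n in &[1_000u64, 10_000, 100_000] {
        // Deterministic input: reverse-sorted values.
        let mut v: Vec<u64> = (0..n).rev().collect();

        // Count how many times the key closure runs.
        let mut calls: u64 = 0;
        v.sort_by_key(|&x| {
            calls += 1;
            x // O(1) key here; an O(m) key would only scale this part by m
        });

        // The number of key calls is about twice the number of comparisons,
        // i.e. O(n log n), regardless of the key's own cost.
        println!(
            "n = {:>6}: key calls = {:>8}, n*log2(n) ≈ {:>9.0}",
            n,
            calls,
            (n as f64) * (n as f64).log2()
        );
    }
}
```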


Labels: C-enhancement (Category: An issue proposing an enhancement or a PR with one), T-libs-api (Relevant to the library API team, which will review and decide on the PR/issue)
