Documented BinaryHeap performance. #59698

Closed

5 changes: 5 additions & 0 deletions src/liballoc/collections/binary_heap.rs
@@ -165,6 +165,11 @@ use super::SpecExtend;
/// trait, changes while it is in the heap. This is normally only possible
/// through `Cell`, `RefCell`, global state, I/O, or unsafe code.
///
/// The costs of `push` and `pop` operations are `O(log(n))` whereas `peek`
/// can be performed in `O(1)` time. Note that the cost of a `push`
/// operation is an amortized cost which does not take into account potential
/// re-allocations when the current buffer cannot hold more elements.

dtolnay (Member):

I think I understand what this is saying but I find it somewhat misleading as written. The amortized cost absolutely does take into account the reallocations. Including all the time spent reallocating and copying, the amortized cost is O(log(n)).

There are various ways to analyze this. See https://en.wikipedia.org/wiki/Potential_method#Dynamic_array for one approach. I believe our BinaryHeap push does the O(1) amortized amount of work described in the link plus an O(log(n)) worst-case amount of work to maintain the binary heap property.
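
To make the reallocation part of the cost concrete, here is a small illustrative sketch (an editor's aside, not part of this PR). Reserving capacity up front removes buffer growth entirely, so each `push` only pays the sift-up work; without a reservation, the occasional growth copies the whole buffer, and the amortized analysis spreads that cost across all pushes.

```rust
use std::collections::BinaryHeap;

fn main() {
    let n = 1_000_000;

    // With capacity reserved up front, no reallocation can happen during
    // these pushes; each push only does the O(log(n)) sift-up.
    let mut reserved = BinaryHeap::with_capacity(n);
    for i in 0..n {
        reserved.push(i);
    }

    // Without a reservation the buffer grows occasionally. Each growth
    // copies the whole buffer, but spread over all pushes that extra work
    // is O(1) amortized per push, so a push is still O(log(n)) amortized.
    let mut unreserved = BinaryHeap::new();
    for i in 0..n {
        unreserved.push(i);
    }

    assert_eq!(reserved.len(), unreserved.len());
}
```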

DevQps (Contributor Author), Apr 10, 2019:

@dtolnay Just to be sure I get it right this time:

Amortized costs are like "the average costs of a function" if uncle Google didn't lie to me :)
So that makes me deduce these things:

  • pop and peek are always constant since they do not perform any reallocation.
  • push does perform reallocation, but only once in a while. So could you say that the average cost is estimated as O(1) since it's so little?
  • I would suspect that the worst case scenario is O(N) because the entire buffer with N elements needs to be copied to a new memory region. Could you explain why you believe it is O(log(n))? The link that you shared says this:

Combining this with the inequality relating amortized time and actual time over sequences of operations, this shows that any sequence of n dynamic array operations takes O(n) actual time in the worst case

Allocating a new internal array A and copying all of the values from the old internal array to the new one takes O(n) actual time

Big O notation and complexity are not really my turf, so I am glad you're here :)

Btw, if you feel like it might just be easier to write a few sentences yourself, feel free to do so and then I will add them to this pull request (Y).

dtolnay (Member):

Amortized costs are like "the average costs of a function"

In some sense, although saying it this way is ambiguous between an average over all possible inputs to the same call (which std::collections calls "expected cost" when averaging over hash functions and hashed values) and an average over a sequence of calls (which is "amortized cost").

I would recommend thinking of amortized cost as a worst case cost per call of a large number of calls.

  • pop and peek are always constant since they do not perform any reallocation.

Reallocation is not the only cost. Pop does O(log(n)) work to preserve the binary heap "shape property" and "heap property": https://en.wikipedia.org/wiki/Binary_heap

  • push does perform reallocation, but only once in a while. So could you say that the average cost is estimated as O(1) since it's so little?

It isn't an estimate, and "since it's so little" isn't really the reason. The previous link explains how to show formally that the worst-case cost of many calls is O(1) per call.

  • I would suspect that the worst case scenario is O(N) because the entire buffer with N elements needs to be copied to a new memory region. Could you explain why you believe it is O(log(n))?

The part you quoted from the link says that n array insertions take O(n) time, so the amortized time is O(1) each. The binary heap does some more work beyond that to maintain the "shape property" and the "heap property", which takes O(log(n)) time in the worst case. Adding these up, the amortized cost is O(log(n)).
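
As a rough illustration of "adding these up" (an editor's sketch, not the std implementation: the hand-rolled `sift_up` and the `realloc_copies`/`sift_swaps` counters are made up for the example), the following toy Vec-based max-heap counts the growth work and the sift-up work separately:

```rust
// Toy max-heap push that counts two kinds of work separately:
// element copies caused by buffer growth, and swaps done by sift-up.

fn sift_up(heap: &mut Vec<u64>, mut child: usize, sift_swaps: &mut u64) {
    while child > 0 {
        let parent = (child - 1) / 2;
        if heap[parent] >= heap[child] {
            break; // heap property already holds
        }
        heap.swap(parent, child);
        *sift_swaps += 1;
        child = parent;
    }
}

fn main() {
    let n: u64 = 1 << 20;
    let mut heap: Vec<u64> = Vec::new();
    let mut realloc_copies: u64 = 0;
    let mut sift_swaps: u64 = 0;

    for i in 0..n {
        // Growing a full buffer copies every element it currently holds.
        if heap.len() == heap.capacity() {
            realloc_copies += heap.len() as u64;
        }
        heap.push(i);
        let last = heap.len() - 1;
        sift_up(&mut heap, last, &mut sift_swaps);
    }

    // Growth work totals O(n), i.e. O(1) amortized per push, while the
    // sift-up work per push is bounded by log2(n).
    println!("realloc copies per push: {:.2}", realloc_copies as f64 / n as f64);
    println!("sift swaps per push:     {:.2}", sift_swaps as f64 / n as f64);
    println!("log2(n):                 {:.2}", (n as f64).log2());
}
```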

DevQps (Contributor Author):

@dtolnay Thanks for your response and sorry for my late reply. I read through the article and I think I am slowly starting to understand what it means. Because it's O(n) for n insertions, it comes out to O(n)/n, i.e. O(1) per insertion.

So technically speaking I can say it like this:

  • peek: O(1) always
  • push: O(1), amortized O(log(n)): because it calls sift_up to maintain the 'sorted' Binary Heap property.
  • pop: O(1), amortized O(log(n)): because it calls sift_down_to_bottom to maintain the 'sorted' Binary Heap property.

I rephrased the description! Hopefully it's good this time. If you don't agree, could you maybe suggest a description instead?
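
For reference, a minimal usage sketch of the three operations discussed here, with the cost figures from this conversation restated as comments:

```rust
use std::collections::BinaryHeap;

fn main() {
    let mut heap = BinaryHeap::new();

    // push: O(log(n)) amortized (sift_up plus the occasional buffer growth,
    // which is O(1) amortized per push).
    heap.push(3);
    heap.push(1);
    heap.push(5);

    // peek: O(1), it only reads the root of the heap.
    assert_eq!(heap.peek(), Some(&5));

    // pop: O(log(n)) worst case, it removes the root and sifts down to
    // restore the shape and heap properties.
    assert_eq!(heap.pop(), Some(5));
    assert_eq!(heap.pop(), Some(3));
    assert_eq!(heap.pop(), Some(1));
    assert_eq!(heap.pop(), None);
}
```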

DevQps (Contributor Author):

@dtolnay I hope you still have time to respond to my previous comment!

///
/// # Examples
///
/// ```