Description
As discussed in #319 (review), it would be nice to compute the `memory_usage` of big pandas DataFrames. One proposed solution:
```python
# Per-row bytes from each column's dtype item size, times the row count.
sum(
    dtype.itemsize * count
    for dtype, count in x.dtypes.value_counts().items()
) * len(x)
```
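A minimal sketch of that estimate, assuming columns with NumPy dtypes where `dtype.itemsize` is well defined; `estimate_memory_usage` is just an illustrative name, not an existing API. Note that object (e.g. string) columns only contribute a pointer-sized slot per row here, so the estimate ignores their actual payload, and it also ignores the index. For reference, the result is compared against pandas' built-in `DataFrame.memory_usage`.

```python
import numpy as np
import pandas as pd


def estimate_memory_usage(x: pd.DataFrame) -> int:
    """Rough estimate: dtype item size summed over columns, times row count."""
    bytes_per_row = sum(
        dtype.itemsize * count
        for dtype, count in x.dtypes.value_counts().items()
    )
    return bytes_per_row * len(x)


# Quick sanity check against pandas' own per-column accounting (index excluded).
df = pd.DataFrame({
    "a": np.arange(1000, dtype=np.int64),
    "b": np.random.rand(1000),
})
print(estimate_memory_usage(df))           # 16000 (8 + 8 bytes per row)
print(df.memory_usage(index=False).sum())  # 16000
```

The upside of the estimate is that it only looks at `x.dtypes` and `len(x)`, so it stays cheap even for very large frames, whereas `memory_usage(deep=True)` has to inspect every object value.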