
Commit c4e2e96

Merge pull request #223 from alimanfoo/store-cache-20171230
LRU store cache
2 parents e25d843 + c463fa7 commit c4e2e96

11 files changed (+603, -53 lines)

docs/api/storage.rst

Lines changed: 12 additions & 0 deletions

@@ -21,6 +21,18 @@ Storage (``zarr.storage``)
     .. automethod:: close
     .. automethod:: flush
 
+.. autoclass:: LRUStoreCache
+
+    .. automethod:: invalidate
+    .. automethod:: invalidate_values
+    .. automethod:: invalidate_keys
+
 .. autofunction:: init_array
 .. autofunction:: init_group
+.. autofunction:: contains_array
+.. autofunction:: contains_group
+.. autofunction:: listdir
+.. autofunction:: rmdir
+.. autofunction:: getsize
+.. autofunction:: rename
 .. autofunction:: migrate_1to2
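
The three ``invalidate*`` methods documented above give callers manual control over what
the cache holds. A minimal sketch of how they might be used (the directory store path is
hypothetical; the method names are as documented above, and the exact split between
dropping cached values and dropping cached key listings should be checked against the
class docstrings):

    import zarr

    # wrap any store (here a local directory store) in an LRU cache layer
    store = zarr.DirectoryStore('data/demo.zarr')
    cache = zarr.LRUStoreCache(store, max_size=2**26)  # hold up to ~64M of values
    root = zarr.group(store=cache)

    # drop cached chunk values, e.g. if the underlying data may have changed
    cache.invalidate_values()

    # drop cached key listings, e.g. after another process has added new keys
    cache.invalidate_keys()

    # clear everything held by the cache
    cache.invalidate()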

docs/release.rst

Lines changed: 5 additions & 0 deletions

@@ -127,6 +127,11 @@ Enhancements
 * **Added support for ``datetime64`` and ``timedelta64`` data types**;
   :issue:`85`, :issue:`215`.
 
+* **New LRUStoreCache class**. The class :class:`zarr.storage.LRUStoreCache` has been
+  added and provides a means to locally cache data in memory from a store that may be
+  slow, e.g., a store that retrieves data from a remote server via the network;
+  :issue:`223`.
+
 * **New copy functions**. The new functions :func:`zarr.convenience.copy` and
   :func:`zarr.convenience.copy_all` provide a way to copy groups and/or arrays
   between HDF5 and Zarr, or between two Zarr groups. The
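
The release note above describes the intended usage pattern: wrap a slow store so that
repeated reads are served from memory. A minimal sketch, using a local directory store as
a stand-in for a slow remote store (the path, shape and chunking are hypothetical):

    import zarr

    slow_store = zarr.DirectoryStore('data/remote-mirror.zarr')  # stand-in for a remote store
    cache = zarr.LRUStoreCache(slow_store, max_size=2**28)       # keep up to ~256M of values in memory

    # the cache is itself a MutableMapping, so it can be used wherever a store can
    z = zarr.create(shape=(1000, 1000), chunks=(100, 100), dtype='i4', store=cache)
    z[:] = 42

    a = z[:]  # chunks not yet cached are fetched from the underlying store
    b = z[:]  # repeated reads of the same chunks are served from memory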

docs/tutorial.rst

Lines changed: 34 additions & 0 deletions

@@ -729,6 +729,9 @@ group (requires `lmdb <http://lmdb.readthedocs.io/>`_ to be installed)::
     >>> z[:] = 42
     >>> store.close()
 
+Distributed/cloud storage
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
 It is also possible to use distributed storage systems. The Dask project has
 implementations of the ``MutableMapping`` interface for Amazon S3 (`S3Map
 <http://s3fs.readthedocs.io/en/latest/api.html#s3fs.mapping.S3Map>`_), Hadoop

@@ -767,6 +770,37 @@ Here is an example using S3Map to read an array created previously::
     >>> z[:].tostring()
     b'Hello from the cloud!'
 
+Note that retrieving data from a remote service via the network can be significantly
+slower than retrieving data from a local file system, and will depend on network latency
+and bandwidth between the client and server systems. If you are experiencing poor
+performance, there are several things you can try. One option is to increase the array
+chunk size, which will reduce the number of chunks and thus reduce the number of network
+round-trips required to retrieve data for an array (and thus reduce the impact of network
+latency). Another option is to try to increase the compression ratio by changing
+compression options or trying a different compressor (which will reduce the impact of
+limited network bandwidth). As of version 2.2, Zarr also provides the
+:class:`zarr.storage.LRUStoreCache` which can be used to implement a local in-memory cache
+layer over a remote store. E.g.::
+
+    >>> s3 = s3fs.S3FileSystem(anon=True, client_kwargs=dict(region_name='eu-west-2'))
+    >>> store = s3fs.S3Map(root='zarr-demo/store', s3=s3, check=False)
+    >>> cache = zarr.LRUStoreCache(store, max_size=2**28)
+    >>> root = zarr.group(store=cache)
+    >>> z = root['foo/bar/baz']
+    >>> from timeit import timeit
+    >>> # first data access is relatively slow, retrieved from store
+    ... timeit('print(z[:].tostring())', number=1, globals=globals()) # doctest: +SKIP
+    b'Hello from the cloud!'
+    0.1081731989979744
+    >>> # second data access is faster, uses cache
+    ... timeit('print(z[:].tostring())', number=1, globals=globals()) # doctest: +SKIP
+    b'Hello from the cloud!'
+    0.0009490990014455747
+
+If you are still experiencing poor performance with distributed/cloud storage, please
+raise an issue on the GitHub issue tracker with any profiling data you can provide, as
+there may be opportunities to optimise further either within Zarr or within the mapping
+interface to the storage.
 
 .. _tutorial_copy:
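
The advice above about chunk size and compression applies when an array is created. A
minimal sketch, assuming the Blosc codec from numcodecs is available; the shapes, chunk
sizes and store path are hypothetical and should be tuned to the actual workload:

    import zarr
    from numcodecs import Blosc

    store = zarr.DirectoryStore('data/big.zarr')

    # fewer, larger chunks mean fewer objects in the store and therefore fewer
    # network round-trips when the store is remote; a stronger compressor trades
    # CPU time for fewer bytes over the wire
    compressor = Blosc(cname='zstd', clevel=5, shuffle=Blosc.SHUFFLE)
    z = zarr.create(shape=(20000, 20000), chunks=(2000, 2000), dtype='f8',
                    compressor=compressor, store=store)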

zarr/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@
 from zarr.creation import (empty, zeros, ones, full, array, empty_like, zeros_like,
                            ones_like, full_like, open_array, open_like, create)
 from zarr.storage import (DictStore, DirectoryStore, ZipStore, TempStore,
-                          NestedDirectoryStore, DBMStore, LMDBStore)
+                          NestedDirectoryStore, DBMStore, LMDBStore, LRUStoreCache)
 from zarr.hierarchy import group, open_group, Group
 from zarr.sync import ThreadSynchronizer, ProcessSynchronizer
 from zarr.codecs import *

zarr/compat.py

Lines changed: 7 additions & 0 deletions

@@ -16,9 +16,16 @@
     class PermissionError(Exception):
         pass
 
+    def OrderedDict_move_to_end(od, key):
+        od[key] = od.pop(key)
+
+
 else:  # pragma: py2 no cover
 
     text_type = str
     binary_type = bytes
     from functools import reduce
     PermissionError = PermissionError
+
+    def OrderedDict_move_to_end(od, key):
+        od.move_to_end(key)
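
The shim above exists because ``OrderedDict.move_to_end`` is only available on Python 3;
on Python 2 the same effect is obtained by popping and re-inserting the key. The pattern
it enables is the usual OrderedDict-based LRU bookkeeping, sketched below as an
illustration rather than as the exact implementation used by ``LRUStoreCache``:

    from collections import OrderedDict

    cache = OrderedDict()

    def touch(key):
        # mark a key as most recently used by moving it to the end of the ordering
        cache[key] = cache.pop(key)   # Python 2 compatible form
        # cache.move_to_end(key)      # equivalent call on Python 3

    def evict_one():
        # discard the least recently used entry, i.e. the first item in the ordering
        cache.popitem(last=False)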
