[h5py] 222/455: Tests and docs
Ghislain Vaillant
ghisvail-guest at moszumanska.debian.org
Thu Jul 2 18:19:35 UTC 2015
This is an automated email from the git hooks/post-receive script.
ghisvail-guest pushed a commit to annotated tag 1.3.0
in repository h5py.
commit ecdccee612e767a681371dbc602fb6f27cbd915c
Author: andrewcollette <andrew.collette at gmail.com>
Date: Sat Feb 7 22:10:46 2009 +0000
Tests and docs
---
ANN.txt | 103 ++++++++++++++++++++++++---------------------
docs/source/guide/hl.rst | 28 +-----------
h5py/__init__.py | 4 +-
h5py/highlevel.py | 2 +-
h5py/selections.py | 6 +++
h5py/tests/__init__.py | 2 +-
h5py/tests/test_slicing.py | 27 ++++++++----
7 files changed, 85 insertions(+), 87 deletions(-)
diff --git a/ANN.txt b/ANN.txt
index 3edd751..316a69b 100644
--- a/ANN.txt
+++ b/ANN.txt
@@ -1,5 +1,5 @@
=====================================
-Announcing HDF5 for Python (h5py) 1.0
+Announcing HDF5 for Python (h5py) 1.1
=====================================
What is h5py?
@@ -17,81 +17,86 @@ random-access I/O on desired sections. Datasets are organized in a
filesystem-like hierarchy using containers called "groups", and
accessed using the traditional POSIX /path/to/resource syntax.
-This is the fourth major release of h5py, and represents the end
-of the "unstable" (0.X.X) design phase.
+In addition to providing interoperability with existing HDF5 datasets
+and platforms, h5py is a convenient way to store and retrieve
+arbitrary NumPy data from disk.
-Why should I use it?
---------------------
-H5py provides a simple, robust read/write interface to HDF5 data
-from Python. Existing Python and NumPy concepts are used for the
-interface; for example, datasets on disk are represented by a proxy
-class that supports slicing, and has dtype and shape attributes.
-HDF5 groups are are presented using a dictionary metaphor, indexed
-by name.
+New features in 1.1
+-------------------
-A major design goal of h5py is interoperability; you can read your
-existing data in HDF5 format, and create new files that any HDF5-
-aware program can understand. No Python-specific extensions are
-used; you're free to implement whatever file structure your application
-desires.
+ - A new compression filter based on the LZF library, which provides
+ transparent compression many times faster than the standard HDF5
+ GZIP filter.
-Almost all HDF5 features are available from Python, including things
-like compound datatypes (as used with NumPy recarray types), HDF5
-attributes, hyperslab and point-based I/O, and more recent features
-in HDF 1.8 like resizable datasets and recursive iteration over entire
-files.
+ - Efficient broadcasting using HDF5 hyperslab selections; for example,
+ you can write to a (100 x 100 x 50) selection from a (100 x 50) array.
-The foundation of h5py is a near-complete wrapping of the HDF5 C API.
-HDF5 identifiers are first-class objects which participate in Python
-reference counting, and expose the C API via methods. This low-level
-interface is also made available to Python programmers, and is
-exhaustively documented.
+ - High-level access to HDF5 dataspace selections, including hyperslabs
+ and point-based I/O.
-See the Quick-Start Guide for a longer introduction with code examples:
+ - Now installable via easy_install
+
+ - Now supports the NumPy boolean type
+
+
+Standard features
+-----------------
+
+ - Supports storage of NumPy data of the following types:
+
+ * Integer/Unsigned Integer
+ * Float/Double
+ * Complex/Double Complex
+ * Compound ("recarray")
+ * Strings
+ * Boolean
+ * Array (as members of a compound type only)
+ * Void
+
+ - Random access to datasets using the standard NumPy slicing syntax
+
+ - Transparent compression of datasets using GZIP, LZF or SZIP,
+ and error-detection using Fletcher32
+
+ - "Pythonic" interface supporting dictionary and NumPy-array metaphors
+ for the high-level HDF5 abstractions like groups and datasets
+
+ - A comprehensive, object-oriented wrapping of the HDF5 low-level C API
+ via Cython, in addition to the NumPy-like high-level interface.
+
+ - Supports many new features of HDF5 1.8, including recursive iteration
+ over entire files and in-library copy operations on the file tree
- http://h5py.alfven.org/docs/guide/quick.html
Where to get it
---------------
* Main website, documentation: http://h5py.alfven.org
+
* Downloads, bug tracker: http://h5py.googlecode.com
-* The HDF group website also contains a good introduction:
- http://www.hdfgroup.org/HDF5/doc/H5.intro.html
Requires
--------
-* UNIX-like platform (Linux or Mac OS-X); Windows version is in progress.
-* Python 2.5 or 2.6
-* NumPy 1.0.3 or later (1.1.0 or later recommended)
-* HDF5 1.6.5 or later, including 1.8. Some features only available
- when compiled against HDF5 1.8.
-* Optionally, Cython (see cython.org) if you want to use custom install
- options. You'll need version 0.9.8.1.1 or later.
+* Linux, Mac OS-X or Windows
-About this version
-------------------
+* Python 2.5 (Windows), Python 2.5 or 2.6 (Linux/Mac OS-X)
-Version 1.0 follows version 0.3.1 as the latest public release. The
-major design phase (which began in May of 2008) is now over; the design
-of the high-level API will be supported as-is for the rest of the 1.X
-series, with minor enhancements.
+* NumPy 1.0.3 or later
-This is the first version to support Python 2.6, and the first to use
-Cython for the low-level interface. The license remains 3-clause BSD.
+* HDF5 1.6.5 or later (including 1.8); HDF5 is included with
+ the Windows version.
-** This project is NOT affiliated with The HDF Group. **
Thanks
------
Thanks to D. Dale, E. Lawrence and others for their continued support
-and comments. Also thanks to the PyTables project, for inspiration
-and generously providing their code to the community, and to everyone
-at the HDF Group for creating such a useful piece of software.
+and comments. Also thanks to Francesc Alted and the PyTables project,
+for inspiration and generously providing their code to the community. Thanks
+to everyone at the HDF Group for creating such a useful piece of software.
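The efficient broadcasting feature announced above (writing a (100 x 100 x 50) selection from a (100 x 50) array) follows the same shape rule as ordinary NumPy broadcasting. As a rough sketch of the semantics, without an HDF5 file involved, plain NumPy shows how the smaller source repeats along the extra axis:

```python
import numpy as np

# Destination plays the role of the dataset selection; source is the
# smaller array being written into it. NumPy aligns trailing axes, so a
# (100, 50) source broadcasts against a (100, 100, 50) destination.
dest = np.zeros((100, 100, 50))
src = np.random.rand(100, 50)

dest[...] = src

# Every slice along the first axis received a copy of the source.
assert (dest[0] == src).all()
assert (dest[99] == src).all()
```

This is only an illustration of the shape rule; in h5py the repetition is done with HDF5 hyperslab selections rather than by materializing the broadcast array in memory.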
diff --git a/docs/source/guide/hl.rst b/docs/source/guide/hl.rst
index bc861c0..a349fed 100644
--- a/docs/source/guide/hl.rst
+++ b/docs/source/guide/hl.rst
@@ -608,24 +608,6 @@ the full range of HDF5 dataspace selection techniques, including point-based
selection and selection via overlapping hyperslabs. These are especially
useful for read_direct and write_direct.
-Value attribute and scalar datasets
------------------------------------
-
-HDF5 allows you to store "scalar" datasets. These have the shape "()". You
-can use the syntax ``dset[...]`` to recover the value as an 0-dimensional
-array. Also, the special attribute ``value`` will return a scalar for an 0-dim
-array, and a full n-dimensional array for all other cases:
-
- >>> f["ArrayDS"] = numpy.ones((2,2))
- >>> f["ScalarDS"] = 1.0
- >>> f["ArrayDS"].value
- array([[ 1., 1.],
- [ 1., 1.]])
- >>> f["ScalarDS"].value
- 1.0
- >>> f["ScalarDS"][...]
- array(1.0)
-
Length and iteration
--------------------
@@ -704,14 +686,14 @@ Reference
Read directly from HDF5 into an existing NumPy array. The "source_sel"
and "dest_sel" arguments may be Selection instances (from the
- selections module) or the output of "numpy.s_". Standard broadcasting
+ selections module) or the output of ``numpy.s_``. Standard broadcasting
is supported.
.. method:: write_direct(source, source_sel=None, dest_sel=None)
Write directly to HDF5 from a NumPy array. The "source_sel"
and "dest_sel" arguments may be Selection instances (from the
- selections module) or the output of "numpy.s_". Standard broadcasting
+ selections module) or the output of ``numpy.s_``. Standard broadcasting
is supported.
.. method:: resize(shape, axis=None)
@@ -856,10 +838,4 @@ Reference
NumPy dtype representation of this type
-
-
-
-
-
-
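The docs above say that read_direct and write_direct accept "the output of ``numpy.s_``" for source_sel and dest_sel. ``numpy.s_`` is a real NumPy indexing helper that turns slice syntax into a tuple of slice objects; the shapes below are illustrative, not taken from the patch:

```python
import numpy as np

# numpy.s_ converts bracketed slice syntax into the tuple of slice
# objects that selection arguments expect.
sel = np.s_[0:1, :, 4:5]
assert sel == (slice(0, 1, None), slice(None, None, None), slice(4, 5, None))

# The same tuple indexes a NumPy array directly, which is what makes it
# convenient for describing matching source and destination regions.
arr = np.arange(500).reshape(10, 10, 5)
assert arr[sel].shape == (1, 10, 1)
```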
diff --git a/h5py/__init__.py b/h5py/__init__.py
index 3990e4e..434180f 100644
--- a/h5py/__init__.py
+++ b/h5py/__init__.py
@@ -31,9 +31,11 @@ except ImportError, e:
import utils, h5, h5a, h5d, h5f, h5fd, h5g, h5i, h5p, h5r, h5s, h5t, h5z, highlevel, version
-from highlevel import File, Group, Dataset, Datatype, AttributeManager, is_hdf5
+from highlevel import File, Group, Dataset, Datatype, AttributeManager, is_hdf5, CoordsList
from h5 import H5Error, get_config
+import filters, selections
+
__doc__ = __doc__ % (version.version, version.hdf5_version, version.api_version)
__all__ = ['h5', 'h5f', 'h5g', 'h5s', 'h5t', 'h5d', 'h5a', 'h5p', 'h5r',
diff --git a/h5py/highlevel.py b/h5py/highlevel.py
index 731c1d5..f8ab1d7 100644
--- a/h5py/highlevel.py
+++ b/h5py/highlevel.py
@@ -48,7 +48,6 @@ import os
import numpy
import threading
import sys
-import warnings
import os.path as op
import posixpath as pp
@@ -56,6 +55,7 @@ import posixpath as pp
from h5py import h5, h5f, h5g, h5s, h5t, h5d, h5a, h5p, h5z, h5i
from h5py.h5 import H5Error
import h5py.selections as sel
+from h5py.selections import CoordsList
import filters
diff --git a/h5py/selections.py b/h5py/selections.py
index 9cd5677..bc6f2a2 100644
--- a/h5py/selections.py
+++ b/h5py/selections.py
@@ -524,5 +524,11 @@ def _translate_slice(exp, length):
return start, count, step
+def CoordsList(*args, **kwds):
+
+ raise NotImplementedError("CoordsList indexing is unavailable as of 1.1.\n"
+ "Please use the selections module instead")
+
+
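The stub added above keeps the CoordsList name importable while making any use of it fail loudly. A minimal standalone sketch of that behavior (reproducing the stub outside the package):

```python
# Same shape as the stub in h5py/selections.py: the name survives for
# backwards compatibility, but calling it always raises.
def CoordsList(*args, **kwds):
    raise NotImplementedError("CoordsList indexing is unavailable as of 1.1.\n"
                              "Please use the selections module instead")

# Any call, with any arguments, produces the explicit error.
try:
    CoordsList([(0, 0), (1, 1)])
except NotImplementedError as exc:
    message = str(exc)

assert "selections module" in message
```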
diff --git a/h5py/tests/__init__.py b/h5py/tests/__init__.py
index 648cfaf..088fabf 100644
--- a/h5py/tests/__init__.py
+++ b/h5py/tests/__init__.py
@@ -15,7 +15,7 @@ def runtests():
import nose
except ImportError:
raise ImportError("python-nose is required to run unit tests")
- nose.run()
+ nose.run('h5py.tests')
def autotest():
try:
diff --git a/h5py/tests/test_slicing.py b/h5py/tests/test_slicing.py
index 528f742..8b8df6f 100644
--- a/h5py/tests/test_slicing.py
+++ b/h5py/tests/test_slicing.py
@@ -41,7 +41,8 @@ class TestSlicing(object):
def test_slices(self):
# Test integer, slice, array and list indices
- dset, arr = self.generate((10,10,50),'f')
+ shape = (10,10,50)
+ dset, arr = self.generate(shape,'f')
slices = [s[0,0,0], s[0,0,:], s[0,:,0], s[0,:,:]]
slices += [s[0:1,:,4:5], s[2:3,0,4:5], s[:,0,0:1], s[0,:,0:1]]
@@ -58,17 +59,27 @@ class TestSlicing(object):
for slc in slices:
+ print "slice %s" % (slc,)
+
+ print " write"
arr[slc] += np.random.rand()
dset[slc] = arr[slc]
-
- print "check write %s" % (slc,)
assert_arr_equal(dset, arr)
+ print " read"
out = dset[slc]
-
- print "check read %s" % (slc,)
assert_arr_equal(out, arr[slc])
+ print " write direct"
+ arr[slc] += np.random.rand()
+ dset.write_direct(arr, slc, slc)
+ assert_arr_equal(dset, arr)
+
+ print " read direct"
+ out = np.ndarray(shape, 'f')
+ dset.read_direct(out, slc, slc)
+ assert_arr_equal(out[slc], arr[slc])
+
def test_slices_big(self):
# Test slicing behavior for indices larger than 2**32
@@ -110,7 +121,8 @@ class TestSlicing(object):
dset, arr = self.generate((20,10,30),'f')
dset[...] = arr[...]
- slices = [(s[...], (30,)),
+ slices = [(s[...], ()),
+ (s[...], (30,)),
(s[...], (10,30)),
(s[:,5,:], (20,30)),
(s[:,4,:], (30,)),
@@ -151,7 +163,4 @@ class TestSlicing(object):
-
-
-
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-science/packages/h5py.git