[h5py] 289/455: Cleanup & docs updates

Ghislain Vaillant ghisvail-guest at moszumanska.debian.org
Thu Jul 2 18:19:43 UTC 2015


This is an automated email from the git hooks/post-receive script.

ghisvail-guest pushed a commit to annotated tag 1.3.0
in repository h5py.

commit ee43c737bfe30afafd0b7d5c9634ea11865f1ec5
Author: andrewcollette <andrew.collette at gmail.com>
Date:   Fri Jun 19 22:24:52 2009 +0000

    Cleanup & docs updates
---
 docs/source/api/index.rst   |  4 ++--
 docs/source/guide/build.rst |  3 ++-
 docs/source/guide/file.rst  |  9 ++++-----
 docs/source/guide/group.rst | 19 +++++++++----------
 docs/source/guide/index.rst |  4 ++--
 docs/source/guide/quick.rst | 15 ++++++++-------
 docs/source/index.rst       |  4 ----
 h5py/__init__.py            | 22 ++++++++++------------
 h5py/h5.pyx                 |  2 +-
 h5py/highlevel.py           | 45 ++++++++++++++++++++++-----------------------
 10 files changed, 60 insertions(+), 67 deletions(-)

diff --git a/docs/source/api/index.rst b/docs/source/api/index.rst
index cd30092..64e4bf6 100644
--- a/docs/source/api/index.rst
+++ b/docs/source/api/index.rst
@@ -1,9 +1,9 @@
 
 .. _api_ref:
 
-***********************
+#######################
 Low-level API reference
-***********************
+#######################
 
 While the :ref:`high-level component <user_guide>` provides a friendly,
 NumPy-like interface
diff --git a/docs/source/guide/build.rst b/docs/source/guide/build.rst
index 06057b2..ade3b5e 100644
--- a/docs/source/guide/build.rst
+++ b/docs/source/guide/build.rst
@@ -71,6 +71,7 @@ Requires
 .. _Numpy: http://numpy.scipy.org/
 .. _HDF5: http://www.hdfgroup.com/HDF5
 
+Please note that Cython (or Pyrex) is *not* required to build h5py.
 
 Quick installation
 ------------------
@@ -127,7 +128,7 @@ Testing
 =======
 
 Running unit tests can help diagnose problems unique to your platform or
-software configuration.  For the Unix version of h5py, running the command:
+software configuration.  For the Unix version of h5py, running the command::
 
     $ python setup.py test
 
diff --git a/docs/source/guide/file.rst b/docs/source/guide/file.rst
index 2606c55..6faf9cc 100644
--- a/docs/source/guide/file.rst
+++ b/docs/source/guide/file.rst
@@ -8,12 +8,12 @@ Opening & creating files
 ------------------------
 
 HDF5 files work generally like standard Python file objects.  They support
-standand modes like r/w/a, and should be closed when they are no longer in
+standard modes like r/w/a, and should be closed when they are no longer in
 use.  However, there is obviously no concept of "text" vs "binary" mode.
 
     >>> f = h5py.File('myfile.hdf5','r')
 
-Valid modes are:
+The file name may be either ``str`` or ``unicode``. Valid modes are:
 
     ===  ================================================
      r   Readonly, file must exist
@@ -23,7 +23,6 @@ Valid modes are:
      a   Read/write if exists, create otherwise (default)
     ===  ================================================
 
-The file name may also be a Unicode string.
 
 File drivers
 ------------
@@ -96,8 +95,8 @@ the full API of Group objects; in this case, the group in question is the
 
     .. attribute:: filename
 
-        HDF5 filename on disk.  This is a plain string (str) for ASCII names,
-        Unicode otherwise.
+        HDF5 filename on disk.  This is a plain string (``str``) for ASCII
+        names, ``unicode`` otherwise.
 
     .. attribute:: mode
 
diff --git a/docs/source/guide/group.rst b/docs/source/guide/group.rst
index aa08204..22a408c 100644
--- a/docs/source/guide/group.rst
+++ b/docs/source/guide/group.rst
@@ -7,11 +7,11 @@ Creating and using groups
 
 Groups are the container mechanism by which HDF5 files are organized.  From
 a Python perspective, they operate somewhat like dictionaries.  In this case
-the "keys" are the names of group entries, and the "values" are the entries
+the "keys" are the names of group members, and the "values" are the members
 themselves (:class:`Group` and :class:`Dataset`) objects.
 
 Group objects also contain most of the machinery which makes HDF5 useful.
-The :ref:`File object <hlfile>` does double duty as the HDF5 `root group`, and
+The :ref:`File object <hlfile>` does double duty as the HDF5 *root group*, and
 serves as your entry point into the file:
 
     >>> f = h5py.File('foo.hdf5','w')
@@ -159,18 +159,17 @@ Reference
         **data** (None or ndarray)
             Either a NumPy ndarray or anything that can be converted to one.
 
-        Keywords (see :ref:`dsetfeatures`):
+        Keywords (see also Dataset :ref:`dsetfeatures`):
 
         **chunks** (None, True or shape tuple)
             Store the dataset in chunked format.  Automatically
             selected if any of the other keyword options are given.  If you
             don't provide a shape tuple, the library will guess one for you.
-            Chunk sizes of 100kB-300kB work best with HDF5. 
+            Chunk sizes of 300kB and smaller work best with HDF5. 
 
         **compression** (None, string ["gzip" | "lzf" | "szip"] or int 0-9)
             Enable dataset compression.  DEFLATE, LZF and (where available)
-            SZIP are supported.  An integer is interpreted as a GZIP level
-            for backwards compatibility
+            SZIP are supported.  An integer is interpreted as a GZIP level.
 
         **compression_opts** (None, or special value)
             Setting for compression filter; legal values for each filter
@@ -182,16 +181,16 @@ Reference
             "szip"      2-tuple ('ec'|'nn', even integer 0-32)
             ======      ======================================
 
-            See the ``filters`` module for a detailed description of each
-            of these filters.
+            See the ``filters`` module docstring for a more detailed
+            description of these filters.
 
         **shuffle** (True/False)
             Enable/disable data shuffling, which can improve compression
-            performance.
+            performance.  Default is False.
 
         **fletcher32** (True/False)
             Enable Fletcher32 error detection; may be used with or without
-            compression.
+            compression.  Default is False.
 
         **maxshape** (None or shape tuple)
             Make the dataset extendable, up to this maximum shape.  Should be a
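The `compression` keyword rule clarified in this hunk ("an integer is interpreted as a GZIP level") amounts to a small normalization step. The following is a hedged sketch of that documented behavior, not h5py's actual code; `normalize_compression` is a hypothetical name:

```python
def normalize_compression(compression):
    """Illustrative stand-in for the documented rule: None, one of
    "gzip"/"lzf"/"szip", or an int 0-9 meaning a GZIP level."""
    if compression is None:
        return None, None
    # bool is a subclass of int in Python, so exclude it explicitly.
    if isinstance(compression, int) and not isinstance(compression, bool):
        if not 0 <= compression <= 9:
            raise ValueError("GZIP level must be in 0-9")
        return "gzip", compression
    if compression in ("gzip", "lzf", "szip"):
        return compression, None
    raise ValueError("Unknown compression filter %r" % (compression,))
```

So `normalize_compression(4)` yields `("gzip", 4)`, matching the backwards-compatible integer form the docstring describes.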
diff --git a/docs/source/guide/index.rst b/docs/source/guide/index.rst
index 6ed0f3a..2ac751c 100644
--- a/docs/source/guide/index.rst
+++ b/docs/source/guide/index.rst
@@ -1,9 +1,9 @@
 
 .. _user_guide:
 
-**********
+##########
 User Guide
-**********
+##########
 
 .. toctree::
     :maxdepth: 2
diff --git a/docs/source/guide/quick.rst b/docs/source/guide/quick.rst
index e7df947..655dcec 100644
--- a/docs/source/guide/quick.rst
+++ b/docs/source/guide/quick.rst
@@ -8,6 +8,7 @@ This document is a very quick overview of both HDF5 and h5py.  More
 comprehensive documentation is available at:
 
 * :ref:`h5pyreference`
+* `The h5py FAQ (at Google Code) <http://code.google.com/p/h5py/wiki/FAQ>`_
 
 The `HDF Group <http://www.hdfgroup.org>`_ is the final authority on HDF5.
 They also have an `introductory tutorial <http://www.hdfgroup.org/HDF5/Tutor/>`_
@@ -42,8 +43,8 @@ specified using standard NumPy dtype objects.
 
 You don't need to know anything about the HDF5 library to use h5py, apart from
 the basic metaphors of files, groups and datasets.  The library handles all
-data conversion transparently, and datasets support advanced features like
-efficient multidimensional indexing and nested compound datatypes.
+data conversion transparently, and translates operations like slicing into
+the appropriate efficient HDF5 routines.
 
 One additional benefit of h5py is that the files it reads and writes are
 "plain-vanilla" HDF5 files.  No Python-specific metadata or features are used.
@@ -57,7 +58,7 @@ First, install h5py by following the :ref:`installation instructions <build>`.
 
 Since an example is worth a thousand words, here's how to make a new file,
 and create an integer dataset inside it.  The new dataset has shape (100, 100),
-is located in the file at "/MyDataset", and initialized to the value 42.
+is located in the file at ``"/MyDataset"``, and initialized to the value 42.
 
     >>> import h5py
     >>> f = h5py.File('myfile.hdf5')
@@ -74,7 +75,7 @@ The dataset object ``dset`` here represents a new 2-d HDF5 dataset.  Some
 features will be familiar to NumPy users::
 
     >>> dset.shape
-    (10, 2)
+    (100, 100)
     >>> dset.dtype
     dtype('int32')
 
@@ -85,13 +86,13 @@ You can even automatically create a dataset from an existing array:
     >>> dset = f.create_dataset('AnotherDataset', data=arr)
 
 HDF5 datasets support many other features, like chunking and transparent 
-compression.
+compression.  See the section ":ref:`datasets`" for more info.
 
 Getting your data back
 ----------------------
 
 You can store and retrieve data using Numpy-like slicing syntax.  The following
-slice mechanisms are supported (see :ref:`datasets` for more info):
+slice mechanisms are supported:
 
     * Integers/slices (``array[2:11:3]``, etc)
     * Ellipsis indexing (``array[2,...,4:7]``)
@@ -144,7 +145,7 @@ POSIX-style paths::
     >>> dset2 = subgroup['MyOtherDataset']
     >>> dset2 = f['/SubGroup/MyOtherDataset']   # equivalent
 
-Groups (including File objects; "f" in this example) support other
+Groups (including File objects; ``"f"`` in this example) support other
 dictionary-like operations::
 
     >>> list(f)
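The POSIX-style paths shown above behave exactly like ordinary `posixpath` strings, which is also how the `highlevel` module manipulates names (it imports `posixpath as pp`, as seen later in this patch). A quick stdlib illustration:

```python
import posixpath

# Group members are addressed with POSIX-style paths, so the stdlib
# posixpath module splits and joins them the way you would expect:
full = "/SubGroup/MyOtherDataset"
print(posixpath.dirname(full))    # -> /SubGroup        (the containing group)
print(posixpath.basename(full))   # -> MyOtherDataset   (the member name)
print(posixpath.join("/SubGroup", "MyOtherDataset"))  # -> the full path again
```

This is why `f['/SubGroup/MyOtherDataset']` and `subgroup['MyOtherDataset']` are equivalent.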
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 5a01809..513bbbd 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -1,10 +1,6 @@
 
 .. _home:
 
-***************
-HDF5 for Python
-***************
-
 The `HDF5 library <http://www.hdfgroup.com/HDF5>`_ is a versatile,
 mature library designed for the storage
 of numerical data.  The h5py package provides a simple, Pythonic interface to
diff --git a/h5py/__init__.py b/h5py/__init__.py
index 8ed6e8d..8573f51 100644
--- a/h5py/__init__.py
+++ b/h5py/__init__.py
@@ -10,8 +10,6 @@
 # 
 #-
 
-# h5py module __init__
-
 __doc__ = \
 """
     This is the h5py package, a Python interface to the HDF5 
@@ -21,6 +19,7 @@ __doc__ = \
 
     HDF5 %s (using %s API)
 """
+
 try:
     import h5
 except ImportError, e:
@@ -29,29 +28,28 @@ except ImportError, e:
         raise ImportError('Import error:\n"%s"\n\nBe sure to exit source directory before importing h5py' % e)
     raise
 
-import utils, h5, h5a, h5d, h5f, h5fd, h5g, h5i, h5p, h5r, h5s, h5t, h5z, highlevel, version
+# This is messy but much less frustrating when using h5py from IPython
+import h5, h5a, h5d, h5f, h5fd, h5g, h5l, h5o, h5i, h5p, h5r, h5s, h5t, h5z
+import highlevel, filters, selections, version
 
+# Re-export high-level interface to package level
 from highlevel import File, Group, Dataset, Datatype, AttributeManager, \
-                      is_hdf5, CoordsList, new_vlen, new_enum, get_vlen, get_enum
+                      is_hdf5, CoordsList, \
+                      new_vlen, new_enum, get_vlen, get_enum
 
 from h5 import get_config
 from h5e import H5Error
 
-import filters, selections
 
 __doc__ = __doc__ % (version.version, version.hdf5_version, version.api_version)
 
 __all__ = ['h5', 'h5f', 'h5g', 'h5s', 'h5t', 'h5d', 'h5a', 'h5p', 'h5r',
-           'h5z', 'h5i', 'version', 'File', 'Group', 'Dataset',
+           'h5o', 'h5l', 'h5z', 'h5i', 'version', 'File', 'Group', 'Dataset',
            'Datatype', 'AttributeManager', 'H5Error', 'get_config', 'is_hdf5']
 
-if version.api_version_tuple >= (1,8):
-    import h5o, h5l
-    __all__ += ['h5l', 'h5o']
-
 try:
-   import IPython as _IP
-   if _IP.ipapi.get() is not None:
+    import IPython as _IP
+    if _IP.ipapi.get() is not None:
        import _ipy_completer
        _ipy_completer.activate()
 except Exception:
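The IPython hook at the end of this hunk uses a guarded optional-import pattern: enable an integration only if its dependency is importable, and never let a failure in it break package import. Sketched generically (the module name below is hypothetical, chosen to be absent):

```python
# Same shape as the IPython completer hook above: an optional extra that
# silently disables itself when unavailable.
try:
    import nonexistent_optional_extra  # hypothetical optional dependency
    HAVE_EXTRA = True
except Exception:
    # Deliberately broad, as in __init__.py: any failure (ImportError or an
    # error while activating the hook) just leaves the feature off.
    HAVE_EXTRA = False
```

Catching `Exception` rather than only `ImportError` is what lets `_ipy_completer.activate()` fail without taking down `import h5py`.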
diff --git a/h5py/h5.pyx b/h5py/h5.pyx
index 35f5626..10c5065 100644
--- a/h5py/h5.pyx
+++ b/h5py/h5.pyx
@@ -340,7 +340,7 @@ cdef class ObjectID:
         try:
             ref = str(H5Iget_ref(self.id)) if self._valid else "X"
             lck = "L" if self._locked else "U"
-            return "%s [%s] (%s) %d" % (self.__class__.__name__, ref, lck, self.id)
+            return "<%s [%s] (%s) %d>" % (self.__class__.__name__, ref, lck, self.id)
         finally:
             phil.release()
 
diff --git a/h5py/highlevel.py b/h5py/highlevel.py
index 04face9..c4b2ead 100644
--- a/h5py/highlevel.py
+++ b/h5py/highlevel.py
@@ -33,21 +33,19 @@ import sys
 import os.path as op
 import posixpath as pp
 
-from h5py import h5, h5f, h5g, h5s, h5t, h5d, h5a, h5p, h5z, h5i, h5fd
+from h5py import h5, h5f, h5g, h5s, h5t, h5d, h5a, \
+                 h5p, h5r, h5z, h5i, h5fd, h5o, h5l
 from h5py.h5 import H5Error
 import h5py.selections as sel
 from h5py.selections import CoordsList
 
 import version
-
 import filters
 
 config = h5.get_config()
-if config.API_18:
-    from h5py import h5o, h5l
 
-__all__ = ["File", "Group", "Dataset",
-           "Datatype", "AttributeManager"]
+__all__ = ["File", "Group", "Dataset", "Datatype",
+           "AttributeManager", "is_hdf5"]
 
 def _hbasename(name):
     """ Basename function with more readable handling of trailing slashes"""
@@ -117,8 +115,7 @@ class HLObject(_LockableObject):
     def parent(self):
         """Return the parent group of this object.
 
-        Beware; if multiple hard links to this object exist, there's no way
-        to predict which parent group will be returned!
+        This is always equivalent to file[posixpath.basename(obj.name)].
         """
         return self.file[pp.dirname(self.name)]
 
@@ -241,18 +238,18 @@ class Group(HLObject, _DictCompat):
 
         The action taken depends on the type of object assigned:
 
-        1. Named HDF5 object (Dataset, Group, Datatype):
+        Named HDF5 object (Dataset, Group, Datatype)
             A hard link is created in this group which points to the
             given object.
 
-        2. Numpy ndarray:
+        Numpy ndarray
             The array is converted to a dataset object, with default
             settings (contiguous storage, etc.).
 
-        3. Numpy dtype:
+        Numpy dtype
             Commit a copy of the datatype as a named datatype in the file.
 
-        4. Anything else:
+        Anything else
             Attempt to convert it to an ndarray and store it.  Scalar
             values are stored as scalar datasets. Raise ValueError if we
             can't understand the resulting array dtype.
@@ -311,13 +308,14 @@ class Group(HLObject, _DictCompat):
         """ Check if a group exists, and create it if not.  TypeError if an
         incompatible object exists.
         """
-        if not name in self:
-            return self.create_group(name)
-        else:
-            grp = self[name]
-            if not isinstance(grp, Group):
-                raise TypeError("Incompatible object (%s) already exists" % grp.__class__.__name__)
-            return grp
+        with self._lock:
+            if not name in self:
+                return self.create_group(name)
+            else:
+                grp = self[name]
+                if not isinstance(grp, Group):
+                    raise TypeError("Incompatible object (%s) already exists" % grp.__class__.__name__)
+                return grp
 
     def create_dataset(self, name, *args, **kwds):
         """ Create and return a new dataset.  Fails if "name" already exists.
@@ -669,7 +667,7 @@ class File(Group):
         with self._lock:
             try:
                 return '<HDF5 file "%s" (mode %s, %d root members)>' % \
-                    (os.path.basename(self.name), self.mode, len(self))
+                    (os.path.basename(self.filename), self.mode, len(self))
             except Exception:
                 return "<Closed HDF5 file>"
 
@@ -754,9 +752,10 @@ class Dataset(HLObject):
         
     @property
     def maxshape(self):
-        space = self.id.get_space()
-        dims = space.get_simple_extent_dims(True)
-        return tuple(x if x != h5s.UNLIMITED else None for x in dims)
+        with self._lock:
+            space = self.id.get_space()
+            dims = space.get_simple_extent_dims(True)
+            return tuple(x if x != h5s.UNLIMITED else None for x in dims)
 
     def __init__(self, group, name,
                     shape=None, dtype=None, data=None,

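The reworked `maxshape` property maps HDF5's "unlimited" dimension sentinel to `None`. The core expression can be exercised standalone; the constant below is a stand-in since `h5s.UNLIMITED` is not available here, and `maxshape_from_dims` is a hypothetical helper name:

```python
UNLIMITED = -1  # stand-in for h5py's h5s.UNLIMITED sentinel

def maxshape_from_dims(dims):
    # Same expression as the property body: unlimited axes become None.
    return tuple(x if x != UNLIMITED else None for x in dims)

print(maxshape_from_dims((100, UNLIMITED)))  # -> (100, None)
```

A `None` entry in `maxshape` is how the high-level API signals an extendable axis.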
-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-science/packages/h5py.git


