[h5py] 178/455: Major docs update

Ghislain Vaillant ghisvail-guest at moszumanska.debian.org
Thu Jul 2 18:19:30 UTC 2015


This is an automated email from the git hooks/post-receive script.

ghisvail-guest pushed a commit to annotated tag 1.3.0
in repository h5py.

commit 3e2161ae9ac2c7bbdb2b84107cc77ebc17f20be0
Author: andrewcollette <andrew.collette at gmail.com>
Date:   Sat Dec 6 23:33:33 2008 +0000

    Major docs update
---
 docs/source/_static/h5py.css   |   2 +
 docs/source/guide/hdf5.rst     |  10 ++
 docs/source/guide/hl.rst       | 234 +++++++++++++++++++++--------------------
 docs/source/guide/licenses.rst |  33 ++++++
 docs/source/guide/quick.rst    | 175 ++++++++++++++++--------------
 setup.py                       |   6 +-
 6 files changed, 266 insertions(+), 194 deletions(-)

diff --git a/docs/source/_static/h5py.css b/docs/source/_static/h5py.css
index cf5da6d..4a9fe0b 100644
--- a/docs/source/_static/h5py.css
+++ b/docs/source/_static/h5py.css
@@ -316,6 +316,8 @@ h2 {
     margin: 1.3em 0 0.2em 0;
     font-size: 1.35em;
     padding: 0;
+    padding-bottom: 4px;
+    border-bottom: 2px solid #444;
 }
 
 h3 {
diff --git a/docs/source/guide/hdf5.rst b/docs/source/guide/hdf5.rst
new file mode 100644
index 0000000..153c052
--- /dev/null
+++ b/docs/source/guide/hdf5.rst
@@ -0,0 +1,10 @@
+.. _hdf5:
+
+****************
+Overview of HDF5
+****************
+
+The final authority on all things HDF-related is
+`the HDF group <http://www.hdfgroup.org>`_.
+
+
diff --git a/docs/source/guide/hl.rst b/docs/source/guide/hl.rst
index 71f1c58..b89488b 100644
--- a/docs/source/guide/hl.rst
+++ b/docs/source/guide/hl.rst
@@ -1,9 +1,9 @@
 
 .. _h5pyreference:
 
-*************
-Documentation
-*************
+***********************
+Reference Documentation
+***********************
 
 .. module:: h5py.highlevel
 
@@ -67,9 +67,10 @@ function ``h5py.get_config()``.  This object supports the following attributes:
 Threading
 ---------
 
-H5py is now always thread-safe.  As HDF5 does not support thread-level
-concurrency (and as it is not necessarily thread-safe), only one thread
-at a time can acquire the lock which manages access to the library.
+H5py is now always thread-safe.  However, as HDF5 does not support thread-level
+concurrency (and as it is not necessarily thread-safe), access to the library
+is automatically serialized.  The GIL is released around read/write operations
+so that non-HDF5 threads (GUIs, computation) can continue to execute.
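+
+For example, a minimal sketch (the names here are illustrative, not part of
+the API) of reading a dataset in a background thread while the main thread
+stays responsive::
+
+    import threading
+
+    def read_worker(dset, results):
+        # The GIL is released for the duration of the HDF5 read, so
+        # other Python threads continue to run while this blocks.
+        results.append(dset[...])
+
+    results = []
+    worker = threading.Thread(target=read_worker, args=(dset, results))
+    worker.start()
+    # ... main thread remains free for GUI updates or computation ...
+    worker.join()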
 
 File compatibility
 ------------------
@@ -89,7 +90,6 @@ small, named bits of data.  :class:`Group`, :class:`Dataset` and even
 behavior, named ``<obj>.attrs``.  This is the correct way to store metadata
 in HDF5 files.
 
---------------------------------------------------------
 
 File Objects
 ============
@@ -122,18 +122,16 @@ the end of the block, even if an exception has been raised::
     ...
     >>> # file_obj is guaranteed closed at end of block
 
-.. note::
-
-    In addition to the methods and properties listed below, File objects also
-    have all the methods and properties of :class:`Group` objects.  In this
-    case the group in question is the HDF5 *root group* (``/``).
 
 Reference
 ---------
 
 .. class:: File
 
-    Represents an HDF5 file on disk.
+    Represents an HDF5 file on disk, and provides access to the root
+    group (``/``).
+
+    See also :class:`Group`, of which this is a subclass.
 
     .. attribute:: name
 
@@ -141,7 +139,7 @@ Reference
 
     .. attribute:: mode
 
-        Mode used to open file
+        Mode (``r``, ``w``, etc.) used to open file
 
     .. method:: __init__(name, mode='a')
         
@@ -149,8 +147,8 @@ Reference
 
     .. method:: close()
 
-        Close the file.  Like Python files, you should call this when
-        finished to be sure your data is saved.
+        Close the file.  As with Python files, it's good practice to call
+        this when you're done.
 
     .. method:: flush()
 
@@ -184,15 +182,15 @@ different behavior; see :meth:`Group.__setitem__` for details.
 
 In addition, the following behavior approximates the Python dictionary API:
 
-    - Container syntax (``if name in group``)
-    - Iteration yields member names (``for name in group``)
-    - Length (``len(group)``)
-    - :meth:`listnames <Group.listnames>`
-    - :meth:`iternames <Group.iternames>`
-    - :meth:`listobjects <Group.listobjects>`
-    - :meth:`iterobjects <Group.iterobjects>`
-    - :meth:`listitems <Group.listitems>`
-    - :meth:`iteritems <Group.iteritems>`
+- Container syntax (``if name in group``)
+- Iteration yields member names (``for name in group``)
+- Length (``len(group)``)
+- :meth:`listnames <Group.listnames>`
+- :meth:`iternames <Group.iternames>`
+- :meth:`listobjects <Group.listobjects>`
+- :meth:`iterobjects <Group.iterobjects>`
+- :meth:`listitems <Group.listitems>`
+- :meth:`iteritems <Group.iteritems>`
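+
+For example, a brief sketch (``f`` is an open File object)::
+
+    >>> names = f.listnames()            # list of member names
+    >>> for name, obj in f.iteritems():  # (name, object) pairs
+    ...     print name, obj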
 
 Reference
 ---------
@@ -280,23 +278,26 @@ Reference
         **data** (None or ndarray)
             Either a NumPy ndarray or anything that can be converted to one.
 
-        Keywords:
+        Keywords (see :ref:`dsetfeatures`):
 
-        **chunks** (None or tuple)
-            Manually specify a chunked layout for the dataset.  It's
-            recommended you let the library determine this value for you.
+        **chunks** (None, True or shape tuple)
+            Store the dataset in chunked format.  Automatically
+            selected if any of the other keyword options are given.  If you
+            don't provide a shape tuple, the library will guess one for you.
 
-        **compression** (None or int)
-            Enable DEFLATE (gzip) compression, at this integer value.
+        **compression** (None or int[0-9])
+            Enable dataset compression.  Currently only gzip (DEFLATE)
+            compression is supported, at the given level.
 
         **shuffle** (True/False)
-            Enable the shuffle filter, which can provide higher compression ratios
-            when used with the compression filter.
-        
+            Enable the shuffle filter.  When used in conjunction with the
+            *compression* keyword, can increase the compression ratio.
+
         **fletcher32** (True/False)
-            Enable error detection.
+            Enable Fletcher32 error detection; may be used in addition to
+            compression.
 
-        **maxshape** (None or tuple)
+        **maxshape** (None or shape tuple)
             Make the dataset extendable, up to this maximum shape.  Should be a
             NumPy-style shape tuple.  Dimensions with value None have no upper
             limit.
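+
+        For example, a sketch combining several of these options (the name,
+        shape and settings are illustrative)::
+
+            >>> dset = grp.create_dataset("BigData", (1000, 1000), '=f4',
+            ...                           compression=6, shuffle=True,
+            ...                           fletcher32=True,
+            ...                           maxshape=(None, 1000))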
@@ -310,8 +311,13 @@ Reference
         instead.  The additional keyword arguments are only honored when actually
         creating a dataset; they are ignored for the comparison.
 
+        If an incompatible object (Group or Datatype) already exists with
+        the given name, this method fails with H5Error.
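+
+        A sketch of the resulting open-or-create pattern (names are
+        illustrative)::
+
+            >>> dset = grp.require_dataset("MyData", (10, 10), '=f8')
+            >>> dset = grp.require_dataset("MyData", (10, 10), '=f8')  # opens existing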
+
     .. method:: copy(source, dest)
 
+        **Only available with HDF5 1.8**
+
         Recursively copy an object from one location to another, or between files.
 
         Copies the given object, and (if it is a group) all objects below it in
@@ -324,10 +330,10 @@ Reference
             Destination.  Must be either Group or path.  If a Group object, it may
             be in a different file.
 
-        **Only available with HDF5 1.8.X**
-
     .. method:: visit(func) -> None or return value from func
 
+        **Only available with HDF5 1.8**
+
         Recursively iterate a callable over objects in this group.
 
         You supply a callable (function, method or callable object); it
@@ -347,10 +353,10 @@ Reference
             >>> list_of_names = []
             >>> f.visit(list_of_names.append)
 
-        **Only available with HDF5 1.8.X.**
-
     .. method:: visititems(func) -> None or return value from func
 
+        **Only available with HDF5 1.8**
+
         Recursively visit names and objects in this group and subgroups.
 
         You supply a callable (function, method or callable object); it
@@ -374,8 +380,6 @@ Reference
             >>> f = File('foo.hdf5')
             >>> f.visititems(func)
 
-        **Only available with HDF5 1.8.X.**
-
     .. method:: __len__
 
         Number of group members
@@ -412,6 +416,7 @@ Reference
 
         Get an iterator over (name, object) pairs for the members of this group.
 
+.. _datasets:
 
 Datasets
 ========
@@ -430,8 +435,7 @@ as HDF5 attributes.
 
 Datasets are created using either :meth:`Group.create_dataset` or
 :meth:`Group.require_dataset`.  Existing datasets should be retrieved using
-the group indexing syntax (``dset = group["name"]``). Calling the constructor
-directly is not recommended.
+the group indexing syntax (``dset = group["name"]``).
 
 A subset of the NumPy indexing techniques is supported, including the
 traditional extended-slice syntax, named-field access, and boolean arrays.
@@ -448,11 +452,65 @@ Like Numpy arrays, Dataset objects have attributes named "shape" and "dtype":
     (4L, 5L)
 
 
-.. _slicing_access:
+.. _dsetfeatures:
 
-Special Features
+Special features
 ----------------
 
+Unlike memory-resident NumPy arrays, HDF5 datasets support a number of optional
+features.  These are enabled by the keywords provided to
+:meth:`Group.create_dataset`.  Some of the more useful are:
+
+Compression
+    Transparent compression (keyword *compression*) can substantially
+    reduce the storage space
+    needed for the dataset.  The default compression method is GZIP (DEFLATE),
+    which is universally supported by other installations of HDF5.
+    Supply an integer between 0 and 9 to enable GZIP compression at that level.
+    Using the *shuffle* filter along with this option can improve the
+    compression ratio further.
+
+Error-Detection
+    All versions of HDF5 include the *fletcher32* checksum filter, which enables
+    read-time error detection for datasets.  If part of a dataset becomes
+    corrupted, a read operation on that section will immediately fail with
+    H5Error.
+
+Resizing
+    When using HDF5 1.8,
+    datasets can be resized, up to a maximum value provided at creation time.
+    You can specify this maximum size via the *maxshape* argument to
+    :meth:`create_dataset <Group.create_dataset>` or
+    :meth:`require_dataset <Group.require_dataset>`. Shape elements with the
+    value ``None`` indicate unlimited dimensions.
+
+    Later calls to :meth:`Dataset.resize` will modify the shape in-place::
+
+        >>> dset = grp.create_dataset((10,10), '=f8', maxshape=(None, None))
+        >>> dset.shape
+        (10, 10)
+        >>> dset.resize((20,20))
+        >>> dset.shape
+        (20, 20)
+
+    You can also resize a single axis at a time::
+
+        >>> dset.resize(35, axis=1)
+        >>> dset.shape
+        (20, 35)
+
+    Resizing an array with existing data works differently than in NumPy; if
+    any axis shrinks, the data in the missing region is discarded.  Data does
+    not "rearrange" itself as it does when resizing a NumPy array.
+
+    .. note::
+        Only datasets stored in "chunked" format can be resized.  This format
+        is automatically selected when any of the advanced storage options is
+        used, or a *maxshape* tuple is provided.  You can also force it to be
+        used by specifying ``chunks=True`` at creation time.
+
+.. _slicing_access:
 
 Slicing access
 --------------
@@ -507,6 +565,8 @@ The following restrictions exist:
 * Selection coordinates must be given in increasing order
 * Duplicate selections are ignored
 
+.. _sparse_selection:
+
 Sparse selection
 ----------------
 
@@ -536,7 +596,7 @@ custom "CoordsList" instance:
     (5,)
 
 Like boolean-array indexing, the result is a 1-D array.  The order in which
-points are selected is preserved.
+points are selected is preserved.  Duplicate points are ignored.
 
 .. note::
     Boolean-mask and CoordsList indexing rely on an HDF5 construct which
@@ -547,62 +607,6 @@ points are selected is preserved.
     example, it takes 40MB to express a 1-million point selection on a rank-3
     array.  Be careful, especially with boolean masks.
 
-Special features
-----------------
-
-Unlike memory-resident NumPy arrays, HDF5 datasets support a number of optional
-features.  These are enabled by the keywords provided to
-:meth:`Group.create_dataset`.  Some of the more useful are:
-
-Compression
-    Transparent compression 
-    (keyword *compression*)
-    can substantially reduce the storage space
-    needed for the dataset.  The default compression method is GZIP (DEFPLATE),
-    which is universally supported by other installations of HDF5.
-    Supply an integer between 0 and 9 to enable GZIP compression at that level.
-    Using the *shuffle* filter along with this option can improve the
-    compression ratio further.
-
-Error-Detection
-    All versions of HDF5 include the *fletcher32* checksum filter, which enables
-    read-time error detection for datasets.  If part of a dataset becomes
-    corrupted, a read operation on that section will immediately fail with
-    H5Error.
-
-Resizing
-    When using HDF5 1.8,
-    datasets can be resized, up to a maximum value provided at creation time.
-    You can specify this maximum size via the *maxshape* argument to
-    :meth:`create_dataset <Group.create_dataset>` or
-    :meth:`require_dataset <Group.require_dataset>`. Shape elements with the
-    value ``None`` indicate unlimited dimensions.
-
-    Later calls to :meth:`Dataset.resize` will modify the shape in-place::
-
-        >>> dset = grp.create_dataset((10,10), '=f8', maxshape=(None, None))
-        >>> dset.shape
-        (10, 10)
-        >>> dset.resize((20,20))
-        >>> dset.shape
-        (20, 20)
-
-    You can also resize a single axis at a time::
-
-        >>> dset.resize(35, axis=1)
-        >>> dset.shape
-        (20, 35)
-
-    Resizing an array with existing data works differently than in NumPy; if
-    any axis shrinks, the data in the missing region is discarded.  Data does
-    not "rearrange" itself as it does when resizing a NumPy array.
-
-    .. note::
-        Only datasets stored in "chunked" format can be resized.  This format
-        is automatically selected when any of the advanced storage options is
-        used, or a *maxshape* tuple is provided.  You can also force it to be
-        used by specifying ``chunks=True`` at creation time.
-
 
 Value attribute and scalar datasets
 -----------------------------------
@@ -727,21 +731,21 @@ has a small proxy object (:class:`AttributeManager`) attached to it as
 ``<obj>.attrs``.  This dictionary-like object works like a :class:`Group`
 object, with the following differences:
 
-    - Entries may only be scalars and NumPy arrays
-    - Each attribute must be small (recommended < 64k for HDF5 1.6)
-    - No partial I/O (i.e. slicing) is allowed for arrays
+- Entries may only be scalars and NumPy arrays
+- Each attribute must be small (recommended < 64k for HDF5 1.6)
+- No partial I/O (i.e. slicing) is allowed for arrays
 
 They support the same dictionary API as groups, including the following:
 
-    - Container syntax (``if name in obj.attrs``)
-    - Iteration yields member names (``for name in obj.attrs``)
-    - Number of attributes (``len(obj.attrs)``)
-    - :meth:`listnames <AttributeManager.listnames>`
-    - :meth:`iternames <AttributeManager.iternames>`
-    - :meth:`listobjects <AttributeManager.listobjects>`
-    - :meth:`iterobjects <AttributeManager.iterobjects>`
-    - :meth:`listitems <AttributeManager.listitems>`
-    - :meth:`iteritems <AttributeManager.iteritems>`
+- Container syntax (``if name in obj.attrs``)
+- Iteration yields member names (``for name in obj.attrs``)
+- Number of attributes (``len(obj.attrs)``)
+- :meth:`listnames <AttributeManager.listnames>`
+- :meth:`iternames <AttributeManager.iternames>`
+- :meth:`listobjects <AttributeManager.listobjects>`
+- :meth:`iterobjects <AttributeManager.iterobjects>`
+- :meth:`listitems <AttributeManager.listitems>`
+- :meth:`iteritems <AttributeManager.iteritems>`
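+
+For example, a short sketch (``obj`` is any Group or Dataset; the attribute
+name is illustrative)::
+
+    >>> obj.attrs['title'] = "Sample data"
+    >>> 'title' in obj.attrs
+    True
+    >>> for name, value in obj.attrs.iteritems():
+    ...     print name, value
+    title Sample data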
 
 Reference
 ---------
diff --git a/docs/source/guide/licenses.rst b/docs/source/guide/licenses.rst
index 97e946e..68523f3 100644
--- a/docs/source/guide/licenses.rst
+++ b/docs/source/guide/licenses.rst
@@ -154,4 +154,37 @@ PyTables Copyright Statement
     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
+stdint.h (Windows version) License
+==================================
+
+::
+
+    Copyright (c) 2006-2008 Alexander Chemeris
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions are met:
+
+      1. Redistributions of source code must retain the above copyright notice,
+         this list of conditions and the following disclaimer.
+
+      2. Redistributions in binary form must reproduce the above copyright
+         notice, this list of conditions and the following disclaimer in the
+         documentation and/or other materials provided with the distribution.
+
+      3. The name of the author may be used to endorse or promote products
+         derived from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+    WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+    EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+    PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+    OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, 
+    WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+    OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+    ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+
 
diff --git a/docs/source/guide/quick.rst b/docs/source/guide/quick.rst
index 453a0d5..bf7836c 100644
--- a/docs/source/guide/quick.rst
+++ b/docs/source/guide/quick.rst
@@ -4,6 +4,15 @@
 Quick Start Guide
 *****************
 
+This document is a very quick overview of both HDF5 and h5py.  More
+comprehensive documentation is available at:
+
+* :ref:`h5pyreference`
+
+The `HDF Group <http://www.hdfgroup.org>`_ is the final authority on HDF5.
+They also have an `introductory tutorial <http://www.hdfgroup.org/HDF5/Tutor/>`_
+covering the basic concepts.
+
 What is HDF5?
 =============
 
@@ -61,9 +70,7 @@ Files are opened using a Python-file-like syntax::
 
     >>> f = File("myfile.hdf5", 'w')    # Create/truncate file
     >>> f
-    File "myfile.hdf5", root members:
-    >>> type(f)
-    <class 'h5py.highlevel.File'>
+    <HDF5 file "myfile.hdf5" (mode w, 0 root members)>
 
 In the filesystem metaphor of HDF5, the file object does double duty as the
 *root group* (named "/" like its POSIX counterpart).  You can store datasets
@@ -72,31 +79,32 @@ in it directly, or create subgroups to keep your data better organized.
 Create a dataset
 ----------------
 
-Datasets are like Numpy arrays which reside on disk; they are associated with
-a name, shape, and a Numpy dtype.  The easiest way to create them is with a
-method of the File object you already have::
+Datasets are like Numpy arrays which reside on disk; you create them by
+providing at least a name and a shape.  Here's an example::
 
-    >>> dset = f.create_dataset("MyDataset", (2,3), '=i4')
+    >>> dset = f.create_dataset("MyDataset", (2,3), '=i4')  # dtype is optional
     >>> dset
-    Dataset "MyDataset": (2L, 3L) dtype('int32')
-    >>> type(dset)
-    <class 'h5py.highlevel.Dataset'>
+    <HDF5 dataset "MyDataset": shape (2, 3), type "<i4">
 
 This creates a new 2-d 6-element (2x3) dataset containing 32-bit signed integer
 data, in native byte order, located in the root group at "/MyDataset".
 
-Or you can auto-create a dataset from an array, just by giving it a name:
-
-    >>> arr = numpy.ones((2,3), '=i4')
-    >>> f["MyDataset"] = arr
-    >>> dset = f["MyDataset"]
-
-Shape and dtype information is always available via properties:
+Some familiar NumPy attributes are included::
 
+    >>> dset.shape
+    (2, 3)
     >>> dset.dtype
     dtype('int32')
-    >>> dset.shape
-    (2L, 3L)
+
+This dataset, like every object in an HDF5 file, has a name::
+
+    >>> dset.name
+    '/MyDataset'
+
+If you already have a NumPy array you want to store, just hand it off to h5py::
+
+    >>> arr = numpy.ones((2,3), '=i4')
+    >>> dset = f.create_dataset('MyDataset', data=arr)
 
 Read & write data
 -----------------
@@ -113,6 +121,19 @@ You can now store data in it using Numpy-like slicing syntax::
     [[1 0 0]
      [1 0 0]]
 
+The following slice mechanisms are supported (see :ref:`datasets` for more):
+
+    * Integers/slices (``array[2:11:3]``, etc)
+    * Ellipsis indexing (``array[2,...,4:7]``)
+    * Simple broadcasting (``array[2]`` is equivalent to ``array[2,...]``)
+    * Index lists (``array[ 2, [0,1,4,6] ]``)
+
+along with some emulated advanced indexing features
+(see :ref:`sparse_selection`):
+
+    * Boolean array indexing (``array[ array[...] > 0.5 ]``)
+    * Discrete coordinate selection (using a ``CoordsList`` instance)
+
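+For instance, a quick sketch of boolean indexing (the comparison here is
+arbitrary)::
+
+    >>> result = dset[ dset[...] > 0 ]   # 1-D array of selected elements
+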
 Closing the file
 ----------------
 
@@ -120,58 +141,67 @@ You don't need to do anything special to "close" datasets.  However, as with
 Python files you should close the file before exiting::
 
     >>> dset
-    Dataset "MyDataset": (2L, 3L) dtype('int32')
+    <HDF5 dataset "MyDataset": shape (2, 3), type "<i4">
     >>> f.close()
+    >>> f
+    <Closed HDF5 file>
     >>> dset
-    Invalid dataset
+    <Closed HDF5 dataset>
+
+H5py tries to close all objects on exit (or when they are no longer referenced),
+but it's good practice to close your files anyway.
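+
+You can also have the file closed for you automatically with the ``with``
+statement (a sketch; Python 2.5 additionally needs
+``from __future__ import with_statement``)::
+
+    >>> with File('myfile.hdf5', 'r') as f:
+    ...     data = f['MyDataset'][...]
+    >>> # f is guaranteed closed at the end of the block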
 
 
 Groups & multiple objects
 =========================
 
-You've already seen that every object in a file is identified by a name:
+When creating the dataset above, we gave it a name::
 
-    >>> f["DS1"] = numpy.ones((2,3))    # full name "/DS1"
-    >>> f["DS2"] = numpy.ones((1,2))    # full name "/DS2"
-    >>> f
-    File "myfile.hdf5", root members: "DS1", "DS2"
-
-Groups, including the root group ("f", in this example), act somewhat like
-Python dictionaries.  They support iteration and membership testing:
-    
-    >>> list(f)
-    ['DS1', 'DS2']
-    >>> dict(x, y.shape for x, y in f.iteritems())
-    {'DS1': (2,3), 'DS2': (1,2)}
-    >>> "DS1" in f
-    True
-    >>> "FOOBAR" in f
-    False
+    >>> dset.name
+    '/MyDataset'
+
+This bears a suspicious resemblance to a POSIX filesystem path; in this case,
+we say that MyDataset resides in the *root group* (``/``) of the file.  You
+can create other groups as well::
+
+    >>> subgroup = f.create_group("SubGroup")
+    >>> subgroup.name
+    '/SubGroup'
 
-You can "delete" (unlink) an object from a group::
+They can in turn contain new datasets or additional groups::
 
-    >>> f["DS"] = numpy.ones((10,10))
-    >>> f["DS"]
-    Dataset "DS": (10L, 10L) dtype('float64')
-    >>> "DS" in f
+    >>> dset2 = subgroup.create_dataset('MyOtherDataset', (4,5), '=f8')
+    >>> dset2.name
+    '/SubGroup/MyOtherDataset'
+
+You can access the contents of groups with dictionary-style syntax, using
+POSIX-style paths::
+
+    >>> dset2 = subgroup['MyOtherDataset']
+    >>> dset2 = f['/SubGroup/MyOtherDataset']   # equivalent
+
+Groups (including File objects; "f" in this example) support other
+dictionary-like operations::
+
+    >>> list(f)                 # iteration
+    ['MyDataset', 'SubGroup']
+    >>> 'MyDataset' in f        # membership testing
     True
-    >>> del f["DS"]
-    >>> "DS" in f
-    False
-
-Most importantly, you can create additional subgroups by giving them names:
-
-    >>> g = f.create_group('subgrp')
-    >>> g
-    Group "subgrp" (0 members)
-    >>> g.name
-    '/subgrp'
-    >>> dset = g.create_dataset("DS3", (10,10))
-    >>> dset.name
-    '/subgrp/DS3'
+    >>> 'SubGroup/MyOtherDataset' in f      # even for arbitrary paths!
+    True
+    >>> del f['MyDataset']      # Delete (unlink) a group member
+
+As a safety feature, you can't create an object with a pre-existing name;
+you have to manually delete the existing object first::
+
+    >>> grp = f.create_group("NewGroup")
+    >>> grp2 = f.create_group("NewGroup")   # wrong
+    (H5Error raised)
+    >>> del f['NewGroup']
+    >>> grp2 = f.create_group("NewGroup")
 
-Using this feature you can build up an entire virtual filesystem inside an
-HDF5 file.  This hierarchical organization is what gives HDF5 its name.
+This restriction reflects HDF5's lack of transactional support, and will not
+change.
 
 .. note::
 
@@ -179,21 +209,18 @@ HDF5 file.  This hierarchical organization is what gives HDF5 its name.
     groups; you can't yet do ``f.create_group('foo/bar/baz')`` unless both
     groups "foo" and "bar" already exist.
 
-
 Attributes
 ==========
 
 HDF5 lets you associate small bits of data with both groups and datasets.
-This can be used for metadata like descriptive titles, timestamps, or any
-other purpose you want.
+This can be used for metadata like descriptive titles or timestamps.
 
 A dictionary-like object which exposes this behavior is attached to every
 Group and Dataset object as the attribute ``attrs``.  You can store any scalar
 or array value you like::
 
-    >>> dset = f.create_dataset("MyDS", (2,3), '=i4')
     >>> dset.attrs
-    Attributes of "MyDS": (none)
+    <Attributes of HDF5 object "MyDataset" (0)>
     >>> dset.attrs["Name"] = "My Dataset"
     >>> dset.attrs["Frob Index"] = 4
     >>> dset.attrs["Order Array"] = numpy.arange(10)
@@ -204,6 +231,10 @@ or array value you like::
     Frob Index: 4
     Order Array: [0 1 2 3 4 5 6 7 8 9]
 
+Attribute proxy objects support the same dictionary-like API as groups, but
+unlike group members, you can directly overwrite existing attributes::
+
+    >>> dset.attrs["Name"] = "New Name"
 
 Named datatypes
 ===============
@@ -214,6 +245,8 @@ object in any group, simply by assigning a NumPy dtype to a name:
 
     >>> f["MyIntegerDatatype"] = numpy.dtype('<i8')
     >>> htype = f["MyIntegerDatatype"]
+    >>> htype
+    <HDF5 named type "MyIntegerDatatype" (dtype <i8)>
     >>> htype.dtype
     dtype('int64')
 
@@ -221,20 +254,6 @@ This isn't ordinarily useful because each dataset already carries its own
 dtype attribute.  However, if you want to store datatypes which are not used
 in any dataset, this is the right way to do it.
 
-More information
-================
-
-See the :ref:`reference chapter <h5pyreference>` for complete documentation of
-high-level interface objects like Groups and Datasets.
-
-The `HDF Group`__ is the final authority on HDF5.  Their `user
-manual`__ is a great introduction to the basic concepts of HDF5, albeit from
-the perspective of a C programmer.
-
-__ http://www.hdfgroup.org/HDF5/
-__ http://www.hdfgroup.org/HDF5/doc/UG/index.html
-
-
 
 
 
diff --git a/setup.py b/setup.py
index c634577..aaa50ce 100644
--- a/setup.py
+++ b/setup.py
@@ -260,7 +260,7 @@ class cybuild(build):
 
         # For commands test and doc, which need to know about this build
         with open(PICKLE_FILE,'w') as f:
-            pickle.dump((modules, extensions, self.build_lib), f)
+            pickle.dump((modules, extensions, op.abspath(self.build_lib)), f)
 
     def get_hdf5_version(self):
         """ Try to determine the installed HDF5 version.
@@ -419,6 +419,10 @@ class doc(Command):
         except (IOError, OSError):
             fatal("Project must be built before docs can be compiled")
 
+        pth = op.abspath(pth)
+
+        print "Loading from %s" % pth
+
         if self.rebuild and op.exists('docs/build'):
             shutil.rmtree('docs/build')
 

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-science/packages/h5py.git


