[h5py] 411/455: Docs update for 1.3
Ghislain Vaillant
ghisvail-guest at moszumanska.debian.org
Thu Jul 2 18:19:56 UTC 2015
This is an automated email from the git hooks/post-receive script.
ghisvail-guest pushed a commit to annotated tag 1.3.0
in repository h5py.
commit 835d9b0ec2b5b2e232df25963c874e8c9e69cc61
Author: andrewcollette <andrew.collette at gmail.com>
Date: Sat Feb 20 05:06:26 2010 +0000
Docs update for 1.3
---
docs/source/guide/attr.rst | 87 ++-----------
docs/source/guide/dataset.rst | 86 +++----------
docs/source/guide/file.rst | 80 +++---------
docs/source/guide/group.rst | 290 +++++++++++-------------------------------
docs/source/guide/hl.rst | 1 +
docs/source/guide/refs.rst | 126 ++++++++++++++++++
6 files changed, 248 insertions(+), 422 deletions(-)
diff --git a/docs/source/guide/attr.rst b/docs/source/guide/attr.rst
index b07c6c8..7c416a3 100644
--- a/docs/source/guide/attr.rst
+++ b/docs/source/guide/attr.rst
@@ -18,85 +18,26 @@ They support the same dictionary API as groups.
Reference
---------
-.. class:: AttributeManager
+.. autoclass:: h5py.AttributeManager
- .. method:: __getitem__(name) -> NumPy scalar or ndarray
+ .. automethod:: h5py.AttributeManager.__getitem__
+ .. automethod:: h5py.AttributeManager.__setitem__
+ .. automethod:: h5py.AttributeManager.__delitem__
- Retrieve an attribute given a string name.
+ .. automethod:: h5py.AttributeManager.create
+ .. automethod:: h5py.AttributeManager.modify
- .. method:: __setitem__(name, value)
+ **Inherited dictionary interface**
- Set an attribute. Value must be convertible to a NumPy scalar
- or array.
+ .. automethod:: h5py.highlevel._DictCompat.keys
+ .. automethod:: h5py.highlevel._DictCompat.values
+ .. automethod:: h5py.highlevel._DictCompat.items
- .. method:: __delitem__(name)
+ .. automethod:: h5py.highlevel._DictCompat.iterkeys
+ .. automethod:: h5py.highlevel._DictCompat.itervalues
+ .. automethod:: h5py.highlevel._DictCompat.iteritems
- Delete an attribute.
+ .. automethod:: h5py.highlevel._DictCompat.get
- .. method:: create(name, data, shape=None, dtype=None)
- Create an attribute, initializing it to the given value.
-
- name
- Name of the new attribute (required)
-
- data
- An array to initialize the attribute. Required.
-
- shape
- Shape of the attribute. Overrides data.shape if both are
- given. The total number of points must be unchanged.
-
- dtype
- Data type of the attribute. Overrides data.dtype if both
- are given. Must be conversion-compatible with data.dtype.
-
- .. method:: modify(name, value)
-
- Change the value of an attribute while preserving its type.
-
- Differs from __setitem__ in that the type of an existing attribute
- is preserved. Useful for interacting with externally generated files.
-
- If the attribute doesn't exist, it will be automatically created.
-
- .. method:: __len__
-
- Number of attributes
-
- .. method:: __iter__
-
- Yields the names of attributes
-
- .. method:: __contains__(name)
-
- See if the given attribute is present
-
- .. method:: keys
-
- Get a list of attribute names
-
- .. method:: iterkeys
-
- Get an iterator over attribute names
-
- .. method:: values
-
- Get a list with all attribute values
-
- .. method:: itervalues
-
- Get an iterator over attribute values
-
- .. method:: items
-
- Get a list of (name, value) pairs for all attributes.
-
- .. method:: iteritems
-
- Get an iterator over (name, value) pairs
-
- .. method:: get(name, default)
-
- Return the specified attribute, or default if it doesn't exist.
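As a quick illustration of the attribute interface documented in this diff, here is a minimal sketch. It assumes a current h5py install and uses Python 3 print syntax rather than the 2010-era doctests above; the file name and attribute names are invented. The in-memory "core" driver with backing_store=False keeps the sketch from writing anything to disk.

```python
import h5py
import numpy as np

# Purely in-memory file: nothing touches disk
f = h5py.File("attrs_demo.hdf5", "w", driver="core", backing_store=False)

f.attrs["temperature"] = 21.5                              # __setitem__: converted to a NumPy scalar
f.attrs.create("counts", data=np.arange(4), shape=(2, 2))  # shape overrides data.shape
f.attrs.modify("temperature", 22.0)                        # preserves the existing attribute's type

print(sorted(f.attrs.keys()))            # dictionary-style access to names
print(f.attrs.get("missing", "n/a"))     # default when the attribute doesn't exist

del f.attrs["temperature"]               # __delitem__
f.close()
```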
diff --git a/docs/source/guide/dataset.rst b/docs/source/guide/dataset.rst
index 7b6c0c8..c6626a0 100644
--- a/docs/source/guide/dataset.rst
+++ b/docs/source/guide/dataset.rst
@@ -197,101 +197,51 @@ dataset while iterating has undefined results.
Reference
---------
-.. class:: Dataset
+.. autoclass:: h5py.Dataset
- Represents an HDF5 dataset. All properties are read-only.
+ **Dataset properties**
- .. attribute:: name
+ .. autoattribute:: h5py.Dataset.shape
+ .. autoattribute:: h5py.Dataset.dtype
- Full name of this dataset in the file (e.g. ``/grp/MyDataset``)
+ .. autoattribute:: h5py.Dataset.chunks
+ .. autoattribute:: h5py.Dataset.maxshape
+ .. autoattribute:: h5py.Dataset.compression
+ .. autoattribute:: h5py.Dataset.compression_opts
+ .. autoattribute:: h5py.Dataset.shuffle
+ .. autoattribute:: h5py.Dataset.fletcher32
- .. attribute:: attrs
+ .. autoattribute:: h5py.Dataset.regionref
- Provides access to HDF5 attributes; see :ref:`attributes`.
+ **Dataset methods**
- .. attribute:: file
-
- The ``File`` instance used to open this HDF5 file.
+ .. automethod:: h5py.Dataset.__getitem__
+ .. automethod:: h5py.Dataset.__setitem__
- .. attribute:: parent
+ .. automethod:: h5py.Dataset.read_direct
+ .. automethod:: h5py.Dataset.write_direct
- A group which contains this object, according to dirname(obj.name).
+ .. automethod:: h5py.Dataset.resize
+ .. automethod:: h5py.Dataset.len
- .. attribute:: shape
- Numpy-style shape tuple with dataset dimensions
- .. attribute:: dtype
- Numpy dtype object representing the dataset type
- .. attribute:: chunks
- Dataset chunk size, or None if chunked layout isn't used.
- .. attribute:: compression
- None or a string indicating the compression strategy;
- one of "gzip", "lzf", or "szip".
- .. attribute:: compression_opts
- Setting for the compression filter
- .. attribute:: shuffle
- Is the shuffle filter being used? (T/F)
- .. attribute:: fletcher32
- Is the fletcher32 filter (error detection) being used? (T/F)
- .. attribute:: maxshape
- Maximum allowed size of the dataset, as specified when it was created.
- .. method:: __getitem__(*args) -> NumPy ndarray
- Read a slice from the dataset. See :ref:`slicing_access`.
- .. method:: __setitem__(*args, val)
- Write to the dataset. See :ref:`slicing_access`.
- .. method:: read_direct(dest, source_sel=None, dest_sel=None)
- Read directly from HDF5 into an existing NumPy array. The "source_sel"
- and "dest_sel" arguments may be Selection instances (from the
- selections module) or the output of ``numpy.s_``. Standard broadcasting
- is supported.
-
- .. method:: write_direct(source, source_sel=None, dest_sel=None)
-
- Write directly to HDF5 from a NumPy array. The "source_sel"
- and "dest_sel" arguments may be Selection instances (from the
- selections module) or the output of ``numpy.s_``. Standard broadcasting
- is supported.
-
- .. method:: resize(shape, axis=None)
-
- Change the size of the dataset to this new shape. Must be compatible
- with the *maxshape* as specified when the dataset was created. If
- the keyword *axis* is provided, the argument should be a single
- integer instead; that axis only will be modified.
-
- **Only available with HDF5 1.8**
-
- .. method:: __len__
-
- The length of the first axis in the dataset (TypeError if scalar).
- This **does not work** on 32-bit platforms, if the axis in question
- is larger than 2^32. Use :meth:`len` instead.
-
- .. method:: len()
-
- The length of the first axis in the dataset (TypeError if scalar).
- Works on all platforms.
-
- .. method:: __iter__
-
- Iterate over rows (first axis) in the dataset. TypeError if scalar.
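The dataset methods covered by this diff (slicing, resize, read_direct, len) can be sketched as below. This is an illustrative snippet against a current h5py in Python 3 syntax, not part of the commit; the file and dataset names are invented.

```python
import h5py
import numpy as np

f = h5py.File("dset_demo.hdf5", "w", driver="core", backing_store=False)

# maxshape entries of None are unlimited, so resize() is permitted on that axis
dset = f.create_dataset("data", shape=(100, 50), dtype="f4",
                        maxshape=(None, 50), chunks=True)

dset[0:10, :] = np.ones((10, 50), dtype="f4")   # __setitem__: NumPy-style slicing
block = dset[0:10, 0:5]                         # __getitem__ returns an ndarray

dset.resize((200, 50))        # give the full shape...
dset.resize(250, axis=0)      # ...or a single size with the axis keyword

# read_direct fills an existing array in place; numpy.s_ builds the selections
out = np.zeros((10, 5), dtype="f4")
dset.read_direct(out, source_sel=np.s_[0:10, 0:5], dest_sel=np.s_[0:10, 0:5])

print(len(dset), dset.shape)  # 250 (250, 50); dset.len() also works for huge axes
f.close()
```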
diff --git a/docs/source/guide/file.rst b/docs/source/guide/file.rst
index 6faf9cc..3d9a855 100644
--- a/docs/source/guide/file.rst
+++ b/docs/source/guide/file.rst
@@ -34,43 +34,9 @@ driver you want to use when the file is opened::
>>> f = h5py.File('myfile.hdf5', driver=<driver name>, <driver_kwds>)
For example, the HDF5 "core" driver can be used to create a purely in-memory
-HDF5 file, optionally written out to disk when it is closed. Currently
-supported drivers are:
+HDF5 file, optionally written out to disk when it is closed. See the File
+class documentation for an exhaustive list.
-None
- Use the standard HDF5 driver appropriate for the current platform.
- On UNIX, this is the H5FD_SEC2 driver; on Windows, it is
- H5FD_WINDOWS. This driver is almost always the best choice.
-
-'sec2'
- Optimized I/O using standard POSIX functions. Default on UNIX platforms.
-
-'stdio'
- I/O uses functions from stdio.h. This introduces an additional layer
- of buffering between the HDF5 library and the filesystem.
-
-'core'
- Creates a memory-resident file. With HDF5 1.8, you may specify an
- existing file on disk. When the file is closed, by default it is
- written back to disk with the given name. Keywords:
-
- backing_store
- If True (default), save changes to a real file
- when closing. If False, the file exists purely
- in memory and is discarded when closed.
-
- block_size
- Increment (in bytes) by which memory is extended.
- Default is 1 megabyte (1024**2).
-
-'family'
- Store the file on disk as a series of fixed-length chunks. Useful
- if the file system doesn't allow large files. Note: the filename
- you provide *must* contain a printf-style integer format code (e.g. "%d"),
- which will be replaced by the file sequence number. Keywords:
-
- memb_size
- Maximum file size (default is 2**31-1).
Reference
---------
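The driver text removed above can be summarized with a short sketch of the "core" driver, the case the retained prose still mentions. Illustrative only (invented filename, current h5py, Python 3 syntax); with backing_store=False the file lives purely in memory and is discarded on close.

```python
import h5py
import numpy as np

# 'core' keeps the whole file in memory; block_size is the growth increment
mem = h5py.File("scratch.hdf5", "w", driver="core",
                backing_store=False, block_size=1024**2)
mem.create_dataset("x", data=np.arange(10))
print(mem.driver)   # 'core'
mem.close()         # backing_store=False: nothing is written to disk
```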
@@ -86,37 +52,25 @@ the full API of Group objects; in this case, the group in question is the
on disk. ``File.name`` gives the HDF5 name of the root group, "``/``". To
access the on-disk name, use ``File.filename``.
-.. class:: File
-
- Represents an HDF5 file on disk, and provides access to the root
- group (``/``).
-
- See also :class:`Group`, of which this is a subclass.
-
- .. attribute:: filename
-
- HDF5 filename on disk. This is a plain string (``str``) for ASCII
- names, ``unicode`` otherwise.
-
- .. attribute:: mode
-
- Mode (``r``, ``w``, etc) used to open file
-
- .. attribute:: driver
+.. autoclass:: h5py.File
- Driver ('sec2', 'stdio', etc.) used to open file
+ **File properties**
- .. method:: __init__(name, mode='a', driver=None, **driver_kwds)
-
- Open or create an HDF5 file. See above for a summary of options.
- Argument *name* may be an ASCII or Unicode string.
+ .. autoattribute:: h5py.File.filename
+ .. autoattribute:: h5py.File.mode
+ .. autoattribute:: h5py.File.driver
- .. method:: close()
+ **File methods**
- Close the file. As with Python files, it's good practice to call
- this when you're done.
+ .. automethod:: h5py.File.close
+ .. automethod:: h5py.File.flush
- .. method:: flush()
+ **Properties common to all HDF5 objects:**
- Ask the HDF5 library to flush its buffers for this file.
+ .. autoattribute:: h5py.File.file
+ .. autoattribute:: h5py.File.parent
+ .. autoattribute:: h5py.File.name
+ .. autoattribute:: h5py.File.id
+ .. autoattribute:: h5py.File.ref
+ .. autoattribute:: h5py.File.attrs
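The File properties listed in this hunk can be exercised with a minimal sketch (current h5py, Python 3 syntax, invented filename), including the "File doubles as the root group" behavior described in the retained prose:

```python
import h5py

f = h5py.File("file_demo.hdf5", "w", driver="core", backing_store=False)

print(f.filename)          # the on-disk name
print(f.name)              # a File is also the root group, so this is '/'

grp = f.create_group("grp")
print(grp.file.filename)   # every object can reach its File...
print(grp.parent.name)     # ...and its parent group: '/'

f.flush()                  # ask the HDF5 library to flush its buffers
f.close()                  # good practice, as with Python files
```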
diff --git a/docs/source/guide/group.rst b/docs/source/guide/group.rst
index 22a408c..f871f7a 100644
--- a/docs/source/guide/group.rst
+++ b/docs/source/guide/group.rst
@@ -66,257 +66,111 @@ Group objects implement the following subset of the Python "mapping" interface:
- :meth:`__delitem__() <Group.__delitem__>`
- :meth:`get() <Group.get>`
-Reference
----------
-
-.. class:: Group
-
- .. attribute:: name
-
- Full name of this group in the file (e.g. ``/grp/thisgroup``)
-
- .. attribute:: attrs
-
- Dictionary-like object which provides access to this group's
- HDF5 attributes. See :ref:`attributes` for details.
-
- .. attribute:: file
-
- The ``File`` instance used to open this HDF5 file.
-
- .. attribute:: parent
-
- A group which contains this object, according to dirname(obj.name).
-
- .. method:: __getitem__(name) -> Group or Dataset
-
- Open an object in this group.
-
- .. method:: __setitem__(name, object)
-
- Add the given object to the group.
-
- The action taken depends on the type of object assigned:
-
- **Named HDF5 object** (Dataset, Group, Datatype)
- A hard link is created in this group which points to the
- given object.
-
- **Numpy ndarray**
- The array is converted to a dataset object, with default
- settings (contiguous storage, etc.). See :meth:`create_dataset`
- for a more flexible way to do this.
-
- **Numpy dtype**
- Commit a copy of the datatype as a named type in the file.
-
- **Anything else**
- Attempt to convert it to an ndarray and store it. Scalar
- values are stored as scalar datasets. Raise ValueError if we
- can't understand the resulting array dtype.
-
- If a group member of the same name already exists, the assignment
- will fail.
-
- .. method:: __delitem__(name)
-
- Remove (unlink) this member.
-
- .. method:: create_group(name) -> Group
-
- Create a new HDF5 group.
-
- Fails with ValueError if the group already exists.
-
- .. method:: require_group(name) -> Group
-
- Open the specified HDF5 group, creating it if it doesn't exist.
-
- Fails with TypeError if an incompatible object (dataset or named type)
- already exists.
-
- .. method:: create_dataset(name, [shape, [dtype]], [data], **kwds) -> Dataset
-
- Create a new dataset. There are two logical ways to specify the dataset:
+Soft links
+----------
- 1. Give the shape, and optionally the dtype. If the dtype is not given,
- single-precision floating point ('=f4') will be assumed.
- 2. Give a NumPy array (or anything that can be converted to a NumPy array)
- via the "data" argument. The shape and dtype of this array will be
- used, and the dataset will be initialized to its contents.
+Like a UNIX filesystem, HDF5 groups can contain "soft" or symbolic links,
+which contain a text path instead of a pointer to the object itself. You
+can easily create these in h5py:
- Additional keyword parameters control the details of how the dataset is
- stored.
+ >>> myfile = h5py.File('foo.hdf5','w')
+ >>> group = myfile.create_group("somegroup")
+ >>> myfile["alias"] = h5py.SoftLink('/somegroup')
- **shape** (None or tuple)
- NumPy-style shape tuple. Required if data is not given.
+Once created, soft links act just like regular links. You don't have to
+do anything special to access them:
- **dtype** (None or dtype)
- NumPy dtype (or anything that can be converted). Optional;
- the default is '=f4'. Will override the dtype of any data
- array given via the *data* parameter.
+ >>> print myfile["alias"]
+ <HDF5 group "/alias" (0 members)>
- **data** (None or ndarray)
- Either a NumPy ndarray or anything that can be converted to one.
+However, they "point to" the target:
- Keywords (see also Dataset :ref:`dsetfeatures`):
+ >>> myfile['alias'] == myfile['somegroup']
+ True
- **chunks** (None, True or shape tuple)
- Store the dataset in chunked format. Automatically
- selected if any of the other keyword options are given. If you
- don't provide a shape tuple, the library will guess one for you.
- Chunk sizes of 300kB and smaller work best with HDF5.
+If the target is removed, they will "dangle":
- **compression** (None, string ["gzip" | "lzf" | "szip"] or int 0-9)
- Enable dataset compression. DEFLATE, LZF and (where available)
- SZIP are supported. An integer is interpreted as a GZIP level.
+ >>> del myfile['somegroup']
+ >>> print myfile['alias']
+ KeyError: 'Component not found (Symbol table: Object not found)'
- **compression_opts** (None, or special value)
- Setting for compression filter; legal values for each filter
- type are:
+.. note::
- ====== ======================================
- "gzip" Integer 0-9
- "lzf" (none allowed)
- "szip" 2-tuple ('ec'|'nn', even integer 0-32)
- ====== ======================================
+ The class h5py.SoftLink doesn't actually do anything by itself; it only
+ serves as an indication to the Group object that you want to create a
+ soft link.
- See the ``filters`` module docstring for a more detailed
- description of these filters.
- **shuffle** (True/False)
- Enable/disable data shuffling, which can improve compression
- performance. Default is False.
+External links
+--------------
- **fletcher32** (True/False)
- Enable Fletcher32 error detection; may be used with or without
- compression. Default is False.
+New in HDF5 1.8, external links are "soft links plus", which allow you to
+specify the name of the file as well as the path to the desired object. You
+can refer to objects in any file you wish. Use a syntax similar to that for soft
+links:
- **maxshape** (None or shape tuple)
- Make the dataset extendable, up to this maximum shape. Should be a
- NumPy-style shape tuple. Dimensions with value None have no upper
- limit.
+ >>> myfile = h5py.File('foo.hdf5','w')
+ >>> myfile['ext link'] = h5py.ExternalLink("otherfile.hdf5", "/path/to/resource")
- .. method:: require_dataset(name, [shape, [dtype]], [data], **kwds) -> Dataset
+When the link is accessed, the file "otherfile.hdf5" is opened, and the object at
+"/path/to/resource" is returned.
- Open a new dataset, creating one if it doesn't exist.
+.. note::
- This method operates exactly like :meth:`create_dataset`, except that if
- a dataset with compatible shape and dtype already exists, it is opened
- instead. The additional keyword arguments are only honored when actually
- creating a dataset; they are ignored for the comparison.
+ Since the object retrieved is in a different file, its ".file" and ".parent"
+ properties will refer to objects in that file, *not* the file in which the
+ link resides.
- If an existing incompatible object (Group or Datatype) already exists
- with the given name, fails with ValueError.
+Getting info on links
+---------------------
- .. method:: copy(source, dest, name=None)
+Although soft and external links are designed to be transparent, there are some
+cases where it is valuable to know when they are in use. The Group method
+"get" takes keyword arguments which let you choose whether to follow a link or
+not, and to return the class of link in use (soft or external).
- **Only available with HDF5 1.8**
-
- Recursively copy an object from one location to another, or between files.
-
- Copies the given object, and (if it is a group) all objects below it in
- the hierarchy. The destination need not be in the same file.
-
- **source** (Group, Dataset, Datatype or str)
- Source object or path.
-
- **dest** (Group or str)
- Destination. Must be either Group or path. If a Group object, it may
- be in a different file.
-
- **name** (None or str)
- If the destination is a Group object, you can override the name
- for the newly created member. Otherwise a new name will be chosen
- using basename(source.name).
-
- .. method:: visit(func) -> None or return value from func
-
- **Only available with HDF5 1.8**
-
- Recursively iterate a callable over objects in this group.
-
- You supply a callable (function, method or callable object); it
- will be called exactly once for each link in this group and every
- group below it. Your callable must conform to the signature::
-
- func(<member name>) -> <None or return value>
-
- Returning None continues iteration, returning anything else stops
- and immediately returns that value from the visit method. No
- particular order of iteration within groups is guaranteed.
-
- Example::
-
- >>> # List the entire contents of the file
- >>> f = File("foo.hdf5")
- >>> list_of_names = []
- >>> f.visit(list_of_names.append)
-
- .. method:: visititems(func) -> None or return value from func
-
- **Only available with HDF5 1.8**
-
- Recursively visit names and objects in this group and subgroups.
-
- You supply a callable (function, method or callable object); it
- will be called exactly once for each link in this group and every
- group below it. Your callable must conform to the signature::
-
- func(<member name>, <object>) -> <None or return value>
-
- Returning None continues iteration, returning anything else stops
- and immediately returns that value from the visit method. No
- particular order of iteration within groups is guaranteed.
-
- Example::
-
- # Get a list of all datasets in the file
- >>> mylist = []
- >>> def func(name, obj):
- ... if isinstance(obj, Dataset):
- ... mylist.append(name)
- ...
- >>> f = File('foo.hdf5')
- >>> f.visititems(func)
-
- .. method:: __len__
-
- Number of group members
-
- .. method:: __iter__
-
- Yields the names of group members
-
- .. method:: __contains__(name)
+Reference
+---------
- See if the given name is in this group.
+.. autoclass:: h5py.Group
+
+ **Group methods**
- .. method:: keys
+ .. automethod:: h5py.Group.__setitem__
+ .. automethod:: h5py.Group.__getitem__
- Get a list of member names
+ .. automethod:: h5py.Group.create_group
+ .. automethod:: h5py.Group.create_dataset
- .. method:: iterkeys
+ .. automethod:: h5py.Group.require_group
+ .. automethod:: h5py.Group.require_dataset
- Get an iterator over member names. Equivalent to iter(group).
+ .. automethod:: h5py.Group.copy
+ .. automethod:: h5py.Group.visit
+ .. automethod:: h5py.Group.visititems
- .. method:: values
+ **Dictionary-like methods**
- Get a list with all objects in this group.
+ .. automethod:: h5py.Group.keys
+ .. automethod:: h5py.Group.values
+ .. automethod:: h5py.Group.items
- .. method:: itervalues
+ .. automethod:: h5py.Group.iterkeys
+ .. automethod:: h5py.Group.itervalues
+ .. automethod:: h5py.Group.iteritems
- Get an iterator over objects in this group
+ .. automethod:: h5py.Group.get
- .. method:: items
+ **Properties common to all HDF5 objects:**
- Get a list of (name, object) pairs for the members of this group.
+ .. autoattribute:: h5py.Group.file
+ .. autoattribute:: h5py.Group.parent
+ .. autoattribute:: h5py.Group.name
+ .. autoattribute:: h5py.Group.id
+ .. autoattribute:: h5py.Group.ref
+ .. autoattribute:: h5py.Group.attrs
- .. method:: iteritems
- Get an iterator over (name, object) pairs for the members of this group.
- .. method:: get(name, default)
- Retrieve the member, or *default* if it doesn't exist.
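The group operations and soft links described in this diff can be tied together in one sketch. As above, this is illustrative (current h5py, Python 3 print syntax, invented names), not text from the commit:

```python
import h5py
import numpy as np

f = h5py.File("group_demo.hdf5", "w", driver="core", backing_store=False)

grp = f.create_group("somegroup")      # ValueError if it already exists
grp2 = f.require_group("somegroup")    # opens the existing group instead

f["array"] = np.arange(10)             # ndarray -> dataset with default settings
dset = f.require_dataset("array", shape=(10,), dtype=np.arange(10).dtype)

f["alias"] = h5py.SoftLink("/somegroup")   # soft link, as described above
print(f["alias"])                          # accessed like any other member

# visititems walks every member; returning None continues the iteration
names = []
f.visititems(lambda name, obj: names.append(name))
print(sorted(names))
f.close()
```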
diff --git a/docs/source/guide/hl.rst b/docs/source/guide/hl.rst
index c3eba91..2ed2310 100644
--- a/docs/source/guide/hl.rst
+++ b/docs/source/guide/hl.rst
@@ -19,6 +19,7 @@ separately below.
dataset
links
attr
+ refs
vl
other
diff --git a/docs/source/guide/refs.rst b/docs/source/guide/refs.rst
new file mode 100644
index 0000000..752f761
--- /dev/null
+++ b/docs/source/guide/refs.rst
@@ -0,0 +1,126 @@
+==========
+References
+==========
+
+In addition to soft and external links, HDF5 supplies one more mechanism to
+refer to objects and data in a file. HDF5 *references* are low-level pointers
+to other objects. The great advantage of references is that they can be
+stored and retrieved as data; you can create an attribute or an entire dataset
+of reference type.
+
+References come in two flavors, object references and region references.
+As the name suggests, object references point to a particular object in a file,
+either a dataset, group or named datatype. Region references always point to
+a dataset, and additionally contain information about a certain selection
+(*dataset region*) on that dataset. For example, if you have a dataset
+representing an image, you could specify a region of interest, and store it
+as an attribute on the dataset.
+
+Using object references
+-----------------------
+
+It's trivial to create a new object reference; every high-level object
+in h5py has a read-only property "ref", which when accessed returns a new
+object reference:
+
+ >>> myfile = h5py.File('myfile.hdf5')
+ >>> mygroup = myfile['/some/group']
+ >>> ref = mygroup.ref
+ >>> print ref
+ <HDF5 object reference>
+
+"Dereferencing" these objects is straightforward; use the same syntax as when
+opening any other object:
+
+ >>> mygroup2 = myfile[ref]
+ >>> print mygroup2
+ <HDF5 group "/some/group" (0 members)>
+
+You don't have to use a File object to do this. As when using absolute paths,
+any Group object in the same file will do.
+
+Using region references
+-----------------------
+
+Region references always contain a selection. You create them using the
+dataset property "regionref" and standard NumPy slicing syntax:
+
+ >>> myds = myfile.create_dataset('dset', (200,200))
+ >>> regref = myds.regionref[0:10, 0:5]
+ >>> print regref
+ <HDF5 region reference>
+
+The reference itself can now be used in place of slicing arguments to the
+dataset:
+
+ >>> subset = myds[regref]
+
+There is one complication; since HDF5 region references don't express shapes
+the same way as NumPy does, the data returned will be "flattened" into a
+1-D array:
+
+ >>> subset.shape
+ (50,)
+
+This is similar to the behavior of NumPy's fancy indexing, which returns
+a 1D array for selections which don't conform to a regular grid.
+
+In addition to storing a selection, region references inherit from object
+references, and can be used anywhere an object reference is accepted. In this
+case the object they point to is the dataset used to create them.
+
+Storing references in a dataset
+-------------------------------
+
+HDF5 treats object and region references as data. Consequently, there is a
+special HDF5 type to represent them. However, NumPy has no equivalent type.
+Rather than implement a special "reference type" for NumPy, references are
+handled at the Python layer as plain, ordinary Python objects. To NumPy they
+are represented with the "object" dtype (kind 'O'). A small amount of
+metadata attached to the dtype tells h5py to interpret the data as containing
+reference objects.
+
+H5py contains a convenience function to create these "hinted dtypes" for you:
+
+ >>> ref_dtype = h5py.special_dtype(ref=h5py.Reference)
+ >>> type(ref_dtype)
+ <type 'numpy.dtype'>
+ >>> ref_dtype.kind
+ 'O'
+
+The types accepted by this "ref=" keyword argument are h5py.Reference (for
+object references) and h5py.RegionReference (for region references).
+
+To create an array of references, use this dtype as you normally would:
+
+ >>> ref_dataset = myfile.create_dataset("MyRefs", (100,), dtype=ref_dtype)
+
+You can read from and write to the array as normal:
+
+ >>> ref_dataset[0] = myfile.ref
+ >>> print ref_dataset[0]
+ <HDF5 object reference>
+
+Storing references in an attribute
+----------------------------------
+
+Simply assign the reference to a name; h5py will figure it out and store it
+with the correct type:
+
+ >>> myref = myfile.ref
+ >>> myfile.attrs["Root group reference"] = myref
+
+Null references
+---------------
+
+When you create a dataset of reference type, the uninitialized elements are
+"null" references. H5py uses the truth value of a reference object to
+indicate whether or not it is null:
+
+ >>> print bool(myfile.ref)
+ True
+ >>> nullref = ref_dataset[50]
+ >>> print bool(nullref)
+ False
+
+
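The full round trip described in the new refs.rst (create a reference, store it, dereference it, check for null) can be condensed into one sketch. Illustrative only: current h5py, Python 3 print syntax, invented names; note the text above documents a flattened 1-D result for region-reference reads, so only the element count is relied on here.

```python
import h5py
import numpy as np

f = h5py.File("refs_demo.hdf5", "w", driver="core", backing_store=False)
grp = f.create_group("some/group")

# Object reference stored in, and read back from, a reference-typed dataset
ref_dtype = h5py.special_dtype(ref=h5py.Reference)
refs = f.create_dataset("MyRefs", (100,), dtype=ref_dtype)
refs[0] = grp.ref
print(f[refs[0]].name)    # dereference via the file: '/some/group'

# Region reference: the selection travels with the reference
img = f.create_dataset("img", (200, 200), dtype="f4")
regref = img.regionref[0:10, 0:5]
subset = img[regref]
print(subset.size)        # 50 points selected, as in the text above

print(bool(grp.ref))      # a live reference is truthy
print(bool(refs[50]))     # an uninitialized element is a null reference: False
f.close()
```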
--
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-science/packages/h5py.git