[h5py] 216/455: Doc updates

Ghislain Vaillant ghisvail-guest at moszumanska.debian.org
Thu Jul 2 18:19:34 UTC 2015


This is an automated email from the git hooks/post-receive script.

ghisvail-guest pushed a commit to annotated tag 1.3.0
in repository h5py.

commit 5d83f89655c23981cfd0e4b0f751697f2dd5434c
Author: andrewcollette <andrew.collette at gmail.com>
Date:   Tue Feb 3 00:33:55 2009 +0000

    Doc updates
---
 docs/source/guide/build.rst | 118 ++++++++++++++++++++++++++------------------
 docs/source/guide/hl.rst    | 103 ++++++++++++++++++++++----------------
 docs/source/guide/quick.rst |  75 ++++++++--------------------
 setup.py                    |  39 ++++++++++++++-
 4 files changed, 190 insertions(+), 145 deletions(-)

diff --git a/docs/source/guide/build.rst b/docs/source/guide/build.rst
index c6d7b17..6e557e1 100644
--- a/docs/source/guide/build.rst
+++ b/docs/source/guide/build.rst
@@ -7,25 +7,24 @@ Installation guide
 Where to get h5py
 =================
 
-See the download page at `the Google Code site`__.  If installing on Windows,
-be sure to get a version that matches your Python version (2.5 or 2.6).
-
-__ http://h5py.googlecode.com
+Downloads for all platforms are available at http://h5py.googlecode.com.
+Tar files are available for UNIX-like systems (Linux and Mac OS-X), and a
+binary installer, which includes HDF5 1.8, is available for Windows.  As of
+version 1.1, h5py can also be installed via ``easy_install``.
 
 Getting HDF5
 ============
 
-On :ref:`Windows <windows>`, HDF5 is provided as part of the integrated
-installer for h5py.  On UNIX platforms (:ref:`Linux and OS-X <linux>`), you
-must provide HDF5 yourself.  The following HDF5 versions are supported:
+On Windows, HDF5 is provided as part of the integrated
+installer for h5py.  On Linux and OS-X, you
+must provide HDF5 yourself.  HDF5 versions **1.6.5** through **1.8.2** are
+supported.
 
-* 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.8.0, 1.8.1, 1.8.2
+**The best solution for both Linux and OS-X is to install HDF5 via a
+package manager.** If you decide to build HDF5 from source, be sure to
+build it as a dynamic library.
 
-**The best solution is to install HDF5 via a package manager.** If you must
-install yourself from source, keep in mind that you *must* build as a dynamic
-library.
-
-`The HDF Group`__ provides several "dumb" (untar in "/") binary distributions
+`The HDF Group`__ provides several "dumb" (untar in **/**) binary distributions
 for Linux, but traditionally only static libraries for Mac.  Mac OS-X users
 should use something like Fink, or compile HDF5 from source.
 
@@ -37,15 +36,17 @@ __ http://www.hdfgroup.com/HDF5
 Installing on Windows
 =====================
 
+Requires
+--------
+
+- Python 2.5
+- NumPy_ 1.0.3 or higher
+
 Download the executable installer from `Google Code`__ and run it.  This
 installs h5py and a private copy of HDF5 1.8.
 
 __ http://h5py.googlecode.com
 
-Requires
---------
-
-- NumPy_ 1.0.3 or higher
 
 .. _linux:
 
@@ -53,51 +54,72 @@ Installing on Linux/Mac OS-X
 ============================
 
 This package is designed to be installed from source.  You will need
-Python and a C compiler, for distutils to build the extensions.  Cython_ is
-required only if you want to change compile-time options, like the
-debugging level.
-
+Python and a C compiler so that setuptools can build the extensions.
 
 Requires
 --------
-- Python with headers for development
+- Python 2.5 or 2.6, including headers ("python-dev")
 - Numpy_ 1.0.3 or higher
 - HDF5_ 1.6.5 or higher, including 1.8.X versions
-- Cython_ 0.9.8.1.1 or higher
-
-- Unix-like environment (created/tested on 32-bit Intel linux)
-- A working compiler for distutils
 
 .. _Numpy: http://numpy.scipy.org/
 .. _HDF5: http://www.hdfgroup.com/HDF5
-.. _Cython: http://cython.org/
-
-Procedure
----------
-1.  Unpack the tarball and cd to the resulting directory
-2.  Run ``python setup.py build`` to build the package
-3.  Run ``python setup.py test`` to run unit tests in-place (optional)
-4.  Run ``sudo python setup.py install`` to install into your main Python
-    package directory.
-5.  ``cd`` out of the installation directory before importing h5py, to prevent
-    Python from trying to run from the source folder.
-
-Additional options
+
+
+Quick installation
 ------------------
 
-::
+H5py can now be installed automatically by setuptools' ``easy_install`` command::
+
+    $ [sudo] easy_install h5py
+
+Alternatively, you can install in the traditional manner by running setup.py::
+
+    $ python setup.py build
+    $ [sudo] python setup.py install
+
+
+Custom installation
+-------------------
+
+Sometimes h5py cannot determine which version of HDF5 is installed, or
+HDF5 may be installed in an unusual location.  You can specify both the
+HDF5 version and its location through the ``configure`` command::
+
+    $ python setup.py configure [--hdf5=/path/to/hdf5] [--api=<16 or 18>]
+    $ python setup.py build
+    $ [sudo] python setup.py install
+
+Alternatively (for example, if installing with easy_install), you can use
+environment variables::
+
+    $ export HDF5_DIR=/path/to/hdf5
+    $ export HDF5_API=<16 or 18>
+    $ easy_install h5py
+
+Keep in mind that on some platforms, ``sudo`` will filter out your environment
+variables.  If you need to be a superuser to run easy_install, you might
+want to issue all three of these commands in a root shell.
+
+Settings issued with the ``configure`` command will always override those set
+with environment variables.  Also, for technical reasons the configure command
+must be run by itself, before any build commands.
+
+The standard command::
+
+    $ python setup.py clean
 
- --hdf5=<path>   Path to your HDF5 installation (if not in one of the standard
-                 places.  Must contain bin/ and lib/ directories.
+will clean up all temporary files, including the output of ``configure``.
 
- --cython        Force Cython to run
+Problems
+========
 
- --cython-only   Run Cython, and stop before compiling with GCC.
- 
- --api=<16|18>   Force either 1.6 or 1.8 API compatibility level.  Use if h5py
-                 does not correctly identify your installed HDF5 version.
+If you have trouble installing or using h5py, first read the FAQ at
+http://h5py.googlecode.com for common issues.  You are also welcome to
+open a new bug there, or email me directly at "h5py at alfven dot org".
+Enjoy!
 
- --diag=<int>    Compile in diagnostic (debug) mode.
 
 
 
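A quick way to confirm that any of the installation routes above worked is
to import the package and print its version string (a sketch; this assumes
the ``h5py.version`` module of the 1.x series)::

    $ python -c "import h5py; print h5py.version.version"

If the import fails, re-run the ``configure`` step with the correct
``--hdf5`` path and rebuild.
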
diff --git a/docs/source/guide/hl.rst b/docs/source/guide/hl.rst
index b89488b..bc861c0 100644
--- a/docs/source/guide/hl.rst
+++ b/docs/source/guide/hl.rst
@@ -64,6 +64,11 @@ function ``h5py.get_config()``.  This object supports the following attributes:
         Set to a 2-tuple of strings (real, imag) to control how complex numbers
         are saved.  The default is ('r','i').
 
+    **bool_names**
+        Booleans are saved as HDF5 enums.  Set this to a 2-tuple of strings
+        (false, true) to control the names used in the enum.  The default
+        is ("FALSE", "TRUE").
+
 Threading
 ---------
 
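For example, the new ``bool_names`` setting can be changed alongside the
existing ``complex_names`` one (a minimal sketch; the enum member names are
arbitrary)::

    import h5py

    cfg = h5py.get_config()
    cfg.complex_names = ('re', 'im')   # names used for complex number parts
    cfg.bool_names = ('NO', 'YES')     # enum names used for booleans
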
@@ -90,6 +95,7 @@ small, named bits of data.  :class:`Group`, :class:`Dataset` and even
 behavior, named ``<obj>.attrs``.  This is the correct way to store metadata
 in HDF5 files.
 
+.. _hlfile:
 
 File Objects
 ============
@@ -285,16 +291,30 @@ Reference
             selected if any of the other keyword options are given.  If you
             don't provide a shape tuple, the library will guess one for you.
 
-        **compression** (None or int[0-9])
-            Enable dataset compression.  Currently only gzip (DEFLATE)
-            compression is supported, at the given level.
+        **compression** (None, string ["gzip" | "lzf" | "szip"] or int 0-9)
+            Enable dataset compression.  DEFLATE, LZF and (where available)
+            SZIP are supported.  An integer is interpreted as a GZIP level
+            for backwards compatibility.
+
+        **compression_opts** (None, or special value)
+            Setting for compression filter; legal values for each filter
+            type are:
+
+            ======      ======================================
+            "gzip"      Integer 0-9
+            "lzf"       (none allowed)
+            "szip"      2-tuple ('ec'|'nn', even integer 0-32)
+            ======      ======================================
+
+            See the ``filters`` module for a detailed description of each
+            of these filters.
 
         **shuffle** (True/False)
-            Enable the shuffle filte.  When used in conjunction with the
-            *compression* keyword, can increase the compression ratio.
+            Enable/disable data shuffling, which can improve compression
+            performance.  Automatically enabled when compression is used.
 
         **fletcher32** (True/False)
-            Enable Fletcher32 error detection; may be used in addition to
+            Enable Fletcher32 error detection; may be used with or without
             compression.
 
         **maxshape** (None or shape tuple)
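
Putting the filter keywords above together, a compressed, checksummed
dataset might be created like this (a sketch using only the keywords
documented above; the file and dataset names are illustrative)::

    import h5py

    f = h5py.File('compressed.hdf5', 'w')
    dset = f.create_dataset("data", (1000, 1000), dtype='f',
                            compression='gzip', compression_opts=4,
                            shuffle=True, fletcher32=True)
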
@@ -462,14 +482,10 @@ features.  These are enabled by the keywords provided to
 :meth:`Group.create_dataset`.  Some of the more useful are:
 
 Compression
-    Transparent compression 
-    (keyword *compression*)
-    can substantially reduce the storage space
-    needed for the dataset.  The default compression method is GZIP (DEFLATE),
-    which is universally supported by other installations of HDF5.
-    Supply an integer between 0 and 9 to enable GZIP compression at that level.
-    Using the *shuffle* filter along with this option can improve the
-    compression ratio further.
+    Transparent compression (keyword *compression*) can substantially reduce
+    the storage space needed for the dataset.  Beginning with h5py 1.1,
+    three techniques are available, "gzip", "lzf" and "szip".  See the
+    ``filters`` module for more information.
 
 Error-Detection
     All versions of HDF5 include the *fletcher32* checksum filter, which enables
@@ -507,8 +523,9 @@ Resizing
     .. note::
         Only datasets stored in "chunked" format can be resized.  This format
         is automatically selected when any of the advanced storage options is
-        used, or a *maxshape* tuple is provided.  You can also force it to be
-        used by specifying ``chunks=True`` at creation time.
+        used, or a *maxshape* tuple is provided.  By default an appropriate
+        chunk size is selected based on the shape and type of the dataset; you
+        can also manually specify a chunk shape via the ``chunks`` keyword.
 
 .. _slicing_access:
 
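Continuing with an open file ``f``, the resizing behaviour described in the
note above might look like this (a sketch; a ``None`` entry in *maxshape*
marks an unlimited axis)::

    dset = f.create_dataset("growable", (10, 2), maxshape=(None, 2))
    dset.resize((20, 2))    # enlarge along the first axis
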
@@ -570,8 +587,8 @@ The following restrictions exist:
 Sparse selection
 ----------------
 
-Two mechanisms exist for the case of scattered and/or sparse selection, for
-which slab or row-based techniques may not be appropriate.
+Additional mechanisms exist for the case of scattered and/or sparse selection,
+for which slab or row-based techniques may not be appropriate.
 
 Boolean "mask" arrays can be used to specify a selection.  The result of
 this operation is a 1-D array with elements arranged in the standard NumPy
@@ -583,30 +600,13 @@ this operation is a 1-D array with elements arranged in the standard NumPy
     >>> result.shape
     (49,)
 
-If you have a set of discrete points you want to access, you may not want to go
-through the overhead of creating a boolean mask.  This is especially the case
-for large datasets, where even a byte-valued mask may not fit in memory.  You
-can pass a sequence object containing points to the dataset selector via a
-custom "CoordsList" instance:
-
-    >>> mycoords = [ (0,0), (3,4), (7,8), (3,5), (4,5) ]
-    >>> coords_list = CoordsList(mycoords)
-    >>> result = dset[coords_list]
-    >>> result.shape
-    (5,)
-
-Like boolean-array indexing, the result is a 1-D array.  The order in which
-points are selected is preserved.  Duplicate points are ignored.
-
-.. note::
-    Boolean-mask and CoordsList indexing rely on an HDF5 construct which
-    explicitly enumerates the points to be selected.  It's very flexible but
-    most appropriate for 
-    reasonably-sized (or sparse) selections.  The coordinate list takes at
-    least 8*<rank> bytes per point, and may need to be internally copied.  For
-    example, it takes 40MB to express a 1-million point selection on a rank-3
-    array.  Be careful, especially with boolean masks.
+Advanced selection
+------------------
 
+The ``selections`` module contains additional classes which provide access to
+the full range of HDF5 dataspace selection techniques, including point-based
+selection and selection via overlapping hyperslabs.  These are especially
+useful for ``read_direct`` and ``write_direct``.
 
 Value attribute and scalar datasets
 -----------------------------------
@@ -673,7 +673,12 @@ Reference
 
     .. attribute:: compression
 
-        GZIP compression level, or None if compression isn't used.
+        None or a string indicating the compression strategy;
+        one of "gzip", "lzf", or "szip".
+
+    .. attribute:: compression_opts
+
+        Setting for the compression filter.
 
     .. attribute:: shuffle
 
@@ -695,6 +700,20 @@ Reference
 
         Write to the dataset.  See :ref:`slicing_access`.
 
+    .. method:: read_direct(dest, source_sel=None, dest_sel=None)
+
+        Read directly from HDF5 into an existing NumPy array.  The "source_sel"
+        and "dest_sel" arguments may be Selection instances (from the
+        selections module) or the output of "numpy.s_".  Standard broadcasting
+        is supported.
+
+    .. method:: write_direct(source, source_sel=None, dest_sel=None)
+
+        Write directly to HDF5 from a NumPy array.  The "source_sel"
+        and "dest_sel" arguments may be Selection instances (from the
+        selections module) or the output of "numpy.s_".  Standard broadcasting
+        is supported.
+
     .. method:: resize(shape, axis=None)
 
         Change the size of the dataset to this new shape.  Must be compatible
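
Based on the signatures documented above, ``read_direct`` might be used like
this (a sketch; it reuses "MyDataset" from the quick start guide)::

    import numpy as np
    import h5py

    f = h5py.File('myfile.hdf5', 'r')
    dset = f['MyDataset']                     # shape (10, 2)
    out = np.empty((5, 2), dtype=dset.dtype)
    # read the first five rows directly into "out", without creating
    # an intermediate array
    dset.read_direct(out, source_sel=np.s_[0:5, :])
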
diff --git a/docs/source/guide/quick.rst b/docs/source/guide/quick.rst
index bf7836c..56ecf76 100644
--- a/docs/source/guide/quick.rst
+++ b/docs/source/guide/quick.rst
@@ -55,73 +55,46 @@ Getting data into HDF5
 
 First, install h5py by following the :ref:`installation instructions <build>`.
 
-The ``import *`` construct is safe when used with the main package::
+Since an example is worth a thousand words, here's how to create a new file,
+create a dataset, and store some data::
 
-    >>> from h5py import *
+    import numpy as np
+    import h5py
 
-The rest of the examples here assume you've done this.  Among other things, it
-imports the three classes ``File``, ``Group`` and ``Dataset``, which will cover
-99% of your needs.
+    mydata = np.arange(10).reshape((5,2))
 
-Create a new file
------------------
+    f = h5py.File('myfile.hdf5', 'w')
 
-Files are opened using a Python-file-like syntax::
+    dset = f.create_dataset("MyDataset", (10, 2), 'i')
 
-    >>> f = File("myfile.hdf5", 'w')    # Create/truncate file
-    >>> f
-    <HDF5 file "myfile.hdf5" (mode w, 0 root members)>
+    dset[0:5,:] = mydata
 
-In the filesystem metaphor of HDF5, the file object does double duty as the
-*root group* (named "/" like its POSIX counterpart).  You can store datasets
-in it directly, or create subgroups to keep your data better organized.
 
-Create a dataset
-----------------
-
-Datasets are like Numpy arrays which reside on disk; you create them by
-providing at least a name and a shape.  Here's an example::
-
-    >>> dset = f.create_dataset("MyDataset", (2,3), '=i4')  # dtype is optional
-    >>> dset
-    <HDF5 dataset "MyDataset": shape (2, 3), type "<i4">
-
-This creates a new 2-d 6-element (2x3) dataset containing 32-bit signed integer
-data, in native byte order, located in the root group at "/MyDataset".
+The :ref:`File <hlfile>` constructor accepts modes similar to Python file modes,
+including "r", "w", and "a" (the default).
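
For instance, to reopen the same file read-only::

    f = h5py.File('myfile.hdf5', 'r')
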
 
-Some familiar NumPy attributes are included::
+The dataset object ``dset`` here represents a new 2-d HDF5 dataset.  Some
+features will be familiar to NumPy users::
 
     >>> dset.shape
-    (2, 3)
+    (10, 2)
     >>> dset.dtype
     dtype('int32')
 
-This dataset, like every object in an HDF5 file, has a name::
-
-    >>> dset.name
-    '/MyDataset'
-
 If you already have a NumPy array you want to store, just hand it off to h5py::
 
-    >>> arr = numpy.ones((2,3), '=i4')
-    >>> dset = f.create_dataset('MyDataset', data=arr)
+    arr = np.ones((2,3), '=i4')
+    dset = f.create_dataset('AnotherDataset', data=arr)
 
-Read & write data
------------------
+Additional features like transparent compression are also available::
 
-You can now store data in it using Numpy-like slicing syntax::
+    dset2 = f.create_dataset("CompressedDataset", data=arr, compression='lzf')
 
-    >>> print dset[...]
-    [[0 0 0]
-     [0 0 0]]
-    >>> import numpy
-    >>> myarr = numpy.ones((2,), '=i2')  # The dtype doesn't have to exactly match
-    >>> dset[:,0] = myarr
-    >>> print dset[...]
-    [[1 0 0]
-     [1 0 0]]
+Getting your data back
+----------------------
 
-The following slice mechanisms are supported (see :ref:`datasets` for more):
+You can store and retrieve data using Numpy-like slicing syntax.  The following
+slice mechanisms are supported (see :ref:`datasets` for more info):
 
     * Integers/slices (``array[2:11:3]``, etc)
     * Ellipsis indexing (``array[2,...,4:7]``)
@@ -140,13 +113,7 @@ Closing the file
 You don't need to do anything special to "close" datasets.  However, as with
 Python files you should close the file before exiting::
 
-    >>> dset
-    <HDF5 dataset "MyDataset": shape (2, 3), type "<i4">
     >>> f.close()
-    >>> f
-    <Closed HDF5 file>
-    >>> dset
-    <Closed HDF5 dataset>
 
 H5py tries to close all objects on exit (or when they are no longer referenced),
 but it's good practice to close your files anyway.
diff --git a/setup.py b/setup.py
index a521619..7fa751c 100644
--- a/setup.py
+++ b/setup.py
@@ -455,8 +455,45 @@ class cleaner(clean):
                 debug("Cleaning up %s" % path)
         clean.run(self)
 
+class doc(Command):
+
+    """ Regenerate documentation.  Unix only, requires epydoc/sphinx. """
+
+    description = "Rebuild documentation"
+
+    user_options = [('rebuild', 'r', "Rebuild from scratch")]
+    boolean_options = ['rebuild']
+
+    def initialize_options(self):
+        self.rebuild = False
+
+    def finalize_options(self):
+        pass
+
+    def run(self):
+
+        # Build the extensions first, so the docs are generated from the
+        # compiled package in the build directory
+        buildobj = self.distribution.get_command_obj('build')
+        buildobj.run()
+        pth = op.abspath(buildobj.build_lib)
+
+        if self.rebuild and op.exists('docs/build'):
+            shutil.rmtree('docs/build')
+
+        cmd = "export H5PY_PATH=%s; cd docs; make html" % pth
+
+        retval = os.system(cmd)
+        if retval != 0:
+            fatal("Can't build documentation")
+
+        if op.exists('docs/html'):
+            shutil.rmtree('docs/html')
+
+        shutil.copytree('docs/build/html', 'docs/html')
+
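Once registered in CMD_CLASS below, the new command can be invoked like any
other setup.py command (a sketch, inferred from the options defined above)::

    $ python setup.py doc             # build HTML docs into docs/html
    $ python setup.py doc --rebuild   # remove docs/build and rebuild from scratch
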
 CMD_CLASS = {'cython': cython, 'build_ext': hbuild_ext, 'clean': cleaner,
-             'configure': configure}
+             'configure': configure, 'doc': doc}
 
 # --- Setup parameters --------------------------------------------------------
 

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/debian-science/packages/h5py.git


